Niantic’s Peridot, the Augmented Reality Alien Dog, Is Now a Talking Tour Guide


Imagine you’re walking your dog. It interacts with the world around you—sniffing some things, relieving itself on others. You walk down the Embarcadero in San Francisco on a bright sunny day, and you see the Ferry Building in the distance as you look out into the bay. Your dog turns to you, looks you in the eye, and says, “Did you know this waterfront was blocked by piers and a freeway for 100 years?”

OK, now imagine your dog looks like an alien and only you can see it. That’s the vision for a new capability created for the Niantic Labs AR experience Peridot.

Niantic, also the developer of the worldwide AR behemoth Pokémon Go, hopes to build out its vision of extending the metaverse into the real world by giving people the means to augment the space around them with digital artifacts. Peridot is a mobile game that lets users customize and interact with their own little Dots—dog-sized digital companions that appear on your phone’s screen and can look like they’re interacting with the real-world objects in view of your camera lens. They’re very cute, and yes, they look a lot like Pokémon. Now, they can talk.

Peridot started as a mobile game in 2022, then got infused with generative AI features. The game has since moved into the hands of Niantic Spatial, a startup created in April that aims to turn geospatial data into an accessible playground for its AR ambitions. Now called Peridot Beyond, it has been enabled on Snap’s Spectacles.

Hume AI, a startup running a large language model that aims to make chatbots seem more empathetic, is now partnering with Niantic Spatial to bring a voice to the Dots on Snap’s Spectacles. The move was initially announced in September, but now it’s ready for the public and will be demonstrated at Snap’s Lens Fest developer event this week.

Snap’s latest Spectacles, its augmented reality smart glasses. Courtesy of Snap



Paper industry could become more energy-efficient with a new measurement method



Fossil consumption calculation process for use of fossil fuel in an integrated kraft pulp mill. Credit: Applied Energy (2025). DOI: 10.1016/j.apenergy.2025.126685

The pulp and paper industry consumes large amounts of energy. But despite stricter EU requirements for efficiency improvements, there has been no way to measure and compare energy consumption between different companies in a fair way. In collaboration with the Swedish Environmental Protection Agency, researchers at Linköping University, Sweden, now present a solution that has great potential to be used throughout the EU.

“Even if this were to improve efficiency by only one or a few percent, that involves so much energy that it can make a huge difference,” says Kristina Nyström, Ph.D. student at the Department of Management and Engineering at Linköping University.

Globally, the pulp and paper industry accounts for 4% of energy used by the industrial sector. Through its Industrial Emissions Directive, the EU has set efficiency requirements for the industrial sector to reduce climate impact. An important tool for this is to make comparisons between factories within an industry—so-called benchmarking.

“But this has not been possible in the paper industry, because the mills have been so different that comparable results have not been achieved,” Kristina Nyström explains.

Therefore, the Swedish Environmental Protection Agency, assisted by Linköping University and Chalmers Industriteknik and in consultation with the paper industry, has developed a calculation method to enable comparisons. The method, which is presented in an article published in the journal Applied Energy, has great potential to be used throughout the EU, according to Olof Åkesson, former Swedish Environmental Protection Agency employee, who initiated the project.

The solution is to divide paper production into standardized processes such as actual pulp production, dissolution of purchased pulp, drying of pulp, and paper production. These processes are common to enough mills for comparisons to be meaningful. In this way, companies can discover which parts of their processes work less efficiently than others’, where improvements can be made and which actions would be most beneficial.

In addition, this method allows for more measures to be included in the energy efficiency efforts. One example is that companies are credited with the residual heat from manufacturing that is used in the surrounding community, such as the heating of homes or greenhouses.
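The article does not give the calculation details, but a toy sketch can make the benchmarking logic concrete: energy use is reported per standardized sub-process, each sub-process is compared against the best performer, and residual heat delivered to the surrounding community is credited back. All mill names, process names and numbers below are illustrative assumptions, not values from the published method.

```python
# Minimal sketch of per-process energy benchmarking with a residual-heat credit.
# NOT the published calculation method; every name and figure is illustrative.

mills = {
    "Mill A": {"pulp production": 5.2, "pulp drying": 1.1, "paper production": 2.9},  # GJ per tonne
    "Mill B": {"pulp production": 4.6, "pulp drying": 1.4, "paper production": 3.3},
}
residual_heat_credit = {"Mill A": 0.3, "Mill B": 0.0}  # GJ/t of waste heat used off-site, e.g. district heating

def net_specific_energy(name: str) -> float:
    """Total specific energy use minus the credit for residual heat delivered off-site."""
    return sum(mills[name].values()) - residual_heat_credit[name]

# Benchmark each standardized sub-process against the best performer
for process in mills["Mill A"]:
    best = min(m[process] for m in mills.values())
    for name, m in mills.items():
        print(f"{name} / {process}: {m[process] - best:+.1f} GJ/t vs best")

for name in mills:
    print(f"{name}: net specific energy {net_specific_energy(name):.1f} GJ/t")
```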

Should this method gain ground, it could contribute to a changed approach to energy efficiency. At present, public agencies’ demands for energy audits often focus on details, which risks significant efficiency measures being overlooked.

“The benefit of making the pulp and paper industry more efficient is that this can reduce the use of fossil fuels and release biofuels and electricity for other purposes,” says Åkesson.

With the involvement of researchers, public agencies and companies in the pulp and paper industry, the chances are high that the method has been designed in a way that is useful in practice. The collaboration between organizations can serve as a model for other industries wanting to develop their own measurement methods, according to Nyström.

Several companies that tested the measurement method have been positive, and it now needs to be spread and tested on a larger scale, the researchers say. The Swedish Environmental Protection Agency is working to develop the model further, now also in dialogue with public agencies and the industry in Finland.

More information:
Olof Åkesson et al, A calculation method enabling energy benchmarking in the pulp and paper industry: Adopting a methodology that bridge the research–policy implementation gap, Applied Energy (2025). DOI: 10.1016/j.apenergy.2025.126685

Citation:
Paper industry could become more energy-efficient with a new measurement method (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-paper-industry-energy-efficient-method.html


Researchers chart path for investors to build a cleaner aviation industry




Cutting planet-warming pollution to near-zero will take more than inventing new clean technologies—it will require changing how the world invests in them. That’s especially true for industries like aviation, where developing and adopting greener solutions is risky and expensive, according to a University of California San Diego commentary piece in Science.

The paper calls for smarter ways of managing investment risk that could help speed up the shift toward cleaner air travel and other hard-to-decarbonize sectors.

“The aviation sector—a fast-growing source of greenhouse gases—illustrates the broader challenge of industrial decarbonization: too little investment in technologies that could yield the biggest climate benefits,” said the paper’s co-author David G. Victor, professor of innovation and public policy at the UC San Diego School of Global Policy and Strategy and co-director of the Deep Decarbonization Initiative.

The piece outlines a new approach that could help guide a coalition of research and development (R&D) programs alongside investors and airlines seeking to deploy new technologies to curb emissions from the sector.

“Despite all the chaos in global geopolitics and climate policies these days, there are large and growing pools of capital willing to take risks on clean technologies,” Victor said. “What’s been missing is a framework to guide that capital to the riskiest but most transformative investments.”

He added that investors and research managers tend to focus on familiar, lower-risk projects like next-generation jet engines or recycled-fuel pathways.

“But getting aviation and other hard-to-abate sectors to near-zero emissions means taking on bigger risks with technologies and new lines of business that will be highly disruptive to the existing industry. Investors and airlines need to find smarter ways to encourage and manage these disruptive investments,” Victor said.

In the article, Victor and co-authors call for a more realistic framework to guide both public and private investment.

They propose a tool called an Aviation Sustainability Index (ASI)—a quantitative method to assess how different technologies or investments could help decouple emissions from growth in air travel.

The approach is designed to help investors distinguish between projects that only modestly improve efficiency and those that could significantly transform the sector’s climate impact.
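The commentary does not publish a formula for the ASI, but the underlying idea of scoring how strongly an investment decouples emissions from traffic growth can be sketched with a toy ratio. The data class, field names and numbers below are illustrative assumptions, not the authors' index.

```python
# Toy sketch of an "emissions decoupling" score in the spirit of the proposed
# Aviation Sustainability Index. The real ASI is not specified in this article;
# every name and number here is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    emissions_change_pct: float  # projected change in CO2 emissions, e.g. -40.0
    traffic_growth_pct: float    # projected growth in air traffic, e.g. +30.0

def decoupling_score(inv: Investment) -> float:
    """Higher when emissions fall even as traffic grows; near zero for marginal gains."""
    return -inv.emissions_change_pct / (1.0 + inv.traffic_growth_pct / 100.0)

candidates = [
    Investment("next-generation engine retrofit", -15.0, 30.0),
    Investment("hydrogen propulsion program", -70.0, 30.0),
]

for inv in sorted(candidates, key=decoupling_score, reverse=True):
    print(f"{inv.name}: score {decoupling_score(inv):.1f}")
```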

The authors note that while roughly $1 trillion is expected to flow into aviation over the next decade, most of that money will simply make aircraft slightly more efficient. Few investors, they argue, have clear incentives to back the kind of breakthrough technologies—such as hydrogen propulsion, advanced aircraft designs, or large-scale sustainable fuel systems—that could substantially reduce emissions.

“Cleaner flight is possible, but it requires changing how we think about both risk and return,” Victor said. “We need new institutions, incentives, and partnerships that reward innovation, not just incrementalism.”

The commentary, written by a multinational team of scholars, also highlights a broader lesson for policymakers: global decarbonization goals such as “net zero by 2050” sound bold and ambitious. But when it becomes clear that they can’t be met, these goals make it harder to focus on the practical steps needed today to drive change in real-world markets.

Ultimately, the paper argues for action that begins now. By developing better tools to evaluate climate-friendly investments and by rewarding companies willing to take calculated risks on breakthrough technologies, governments and industry leaders can accelerate real progress toward decarbonization.

The paper was co-authored by Thomas Conlon of University College Dublin, Philipp Goedeking of Johannes Gutenberg University of Mainz (Germany) and Andreas W. Schäfer of University College London.

More information:
David G. Victor et al, Mobilizing capital and technology for a clean aviation industry, Science (2025). DOI: 10.1126/science.adu2458. www.science.org/doi/10.1126/science.adu2458

Citation:
Researchers chart path for investors to build a cleaner aviation industry (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-path-investors-cleaner-aviation-industry.html


Method teaches generative AI models to locate personalized objects



In-context personalized localization involves localizing object instances present in a scene (or query image) similar to the object presented as an in-context example. In this setting, the input to the model is a category name, in-context image, bounding box coordinates, and a query image. The model is tasked with localizing the same category of interest (presented as an in-context example) in the query image. Here, we visualize a few inputs and outputs from various VLMs highlighting that our fine-tuned model better captures the information in the in-context image. Credit: arXiv (2024). DOI: 10.48550/arxiv.2411.13317

Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog owner to do while onsite.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.

To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.

Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.

When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.
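Based on the inputs listed in the figure caption (a category name, in-context images with bounding boxes, and a query image), a single localization query might look roughly like the structure below. The field names and file names are assumptions for illustration, not the paper's actual interface.

```python
# Illustrative structure of one in-context personalized localization query.
# Field names and file names are assumptions, not the paper's actual API.

example_query = {
    "category": "my dog",
    "in_context": [
        {"image": "bowser_park.jpg",  "bbox": [112, 64, 298, 311]},   # [x1, y1, x2, y2]
        {"image": "bowser_couch.jpg", "bbox": [40, 130, 260, 400]},
    ],
    "query_image": "dog_park_today.jpg",
    "instruction": "Return the bounding box of the same object in the query image.",
}

# A well-adapted VLM would answer with a bounding box in the query image,
# e.g. {"bbox": [205, 98, 370, 340]}.
```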

Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.

This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.

“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique posted to the arXiv preprint server.

Mirza is joined on the paper by co-lead authors Sivan Doveh, a graduate student at Weizmann Institute of Science; and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision (ICCV 2025), held Oct 19–23 in Honolulu, Hawai’i.

An unexpected shortcoming

Researchers have found that large language models (LLMs) can excel at learning from context. If they feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.

A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.

“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.

The researchers set out to improve VLMs’ ability to do in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.

Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.

“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.

To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.

They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.

“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
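A rough sketch of how one fine-tuning sample could be assembled from a video track, following that description: the earlier frames and their boxes become the in-context examples, and the last frame becomes the query, with its box as the supervision target. The function and field names are illustrative assumptions, not the authors' released code.

```python
# Sketch of assembling one training sample from video-tracking data, as described
# above. Names are illustrative assumptions; this is not the authors' code.

def build_sample(frames: list, boxes: list, label: str) -> dict:
    """frames: paths of tracked frames; boxes: per-frame [x1, y1, x2, y2] boxes."""
    context = [{"image": f, "bbox": b} for f, b in zip(frames[:-1], boxes[:-1])]
    return {
        "context": context,                        # in-context examples of the object
        "query_image": frames[-1],                 # held-out frame to localize in
        "question": f"Where is {label} in the last image?",
        "answer": {"bbox": boxes[-1]},             # supervision target
        "category": label,
    }

sample = build_sample(
    ["tiger_001.jpg", "tiger_040.jpg", "tiger_090.jpg"],
    [[10, 20, 200, 180], [60, 25, 260, 190], [120, 30, 330, 200]],
    "tiger",
)
```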

Forcing the focus

But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.

For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.

To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”

“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
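Continuing the sketch above, the pseudo-naming step can be as simple as swapping the true category name for an arbitrary one before training, so the model cannot fall back on pretrained label associations. The name pool and helper below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of the pseudo-naming trick described above: replace the real category
# name (e.g. "tiger") with an arbitrary name so the model must rely on context.
# The name pool and helper are illustrative assumptions.

import random

PSEUDO_NAMES = ["Charlie", "Pixel", "Nimbus", "Rocket"]

def anonymize(sample: dict) -> dict:
    """Swap the true category name in the question for a randomly chosen pseudo-name."""
    pseudo = random.choice(PSEUDO_NAMES)
    out = dict(sample)
    out["question"] = out["question"].replace(out["category"], pseudo)
    out["category"] = pseudo
    return out

print(anonymize({"category": "tiger",
                 "question": "Where is tiger in the last image?"}))
```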

The researchers also faced challenges in finding the best way to prepare the data. If the frames are too close together, for example, the background does not change enough to provide data diversity.

In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12% on average. When they included the dataset with pseudo-names, the performance gains reached 21%.

As model size increases, their technique leads to greater performance gains.

In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.

“This work reframes few-shot personalized object localization—adapting on the fly to the same object across new scenes—as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs.

“Given the immense significance of quick, instance-specific grounding—often without finetuning—for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.

More information:
Sivan Doveh et al, Teaching VLMs to Localize Specific Objects from In-context Examples, arXiv (2025). DOI: 10.48550/arxiv.2411.13317

Journal information:
arXiv


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Method teaches generative AI models to locate personalized objects (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-method-generative-ai-personalized.html
