Tech
Teaching robots to map large environments
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.
Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.
To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real time.
Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.
Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.
“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.
Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.
Mapping out a solution
For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.
While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.
To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.
Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.
Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.
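The rotate-and-translate alignment described above is a classical problem: given corresponding points in two submaps, the best rigid transform has a closed-form solution via the SVD (the Kabsch/Procrustes method). The sketch below is illustrative only, not the researchers’ implementation, and it assumes point correspondences between submaps are already known; it also shows why a purely rigid fit breaks down once one submap is stretched.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes). src, dst: (N, 3) corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, _, Vt = np.linalg.svd(D.T @ S)      # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))          # a toy "submap" of 3D points
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

# A rotated + shifted copy aligns essentially perfectly...
B = A @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(A, B)
print(np.abs(A @ R.T + t - B).max())       # ≈ 0 (machine precision)

# ...but if the copy is also stretched (a "deformed" submap),
# the best rigid fit leaves a clear residual error.
B_stretched = (A * np.array([1.2, 1.0, 1.0])) @ Rz.T
R2, t2 = rigid_align(A, B_stretched)
print(np.abs(A @ R2.T + t2 - B_stretched).max())  # noticeably > 0
```

The second case is the article’s point in miniature: no rotation and translation, however well chosen, can undo a stretch.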
“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
A more flexible approach
Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
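As one concrete illustration of an alignment richer than rotation-plus-translation (an assumed example, not the paper’s actual formulation): the classic Umeyama method additionally solves for a scale factor, so two submaps that a model has shrunk or enlarged in a consistent way can still be brought into exact agreement.

```python
import numpy as np

def similarity_align(src, dst):
    """Umeyama similarity alignment: scale s, rotation R, translation t
    minimizing ||s * R @ src_i + t - dst_i||. src, dst: (N, 3) points."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    var_s = (S ** 2).sum() / n             # variance of the source points
    U, sig, Vt = np.linalg.svd(D.T @ S / n)
    d = np.sign(np.linalg.det(U @ Vt))     # reflection guard
    E = np.diag([1.0, 1.0, d])
    R = U @ E @ Vt
    s = np.trace(np.diag(sig) @ E) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Two toy "submaps" of the same structure, one uniformly shrunk to 80%:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 3))
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
B = 0.8 * A @ Rz.T + np.array([0.5, 0.0, -1.0])

s, R, t = similarity_align(A, B)
print(round(s, 3))                         # recovers the 0.8 scale
print(np.abs(s * A @ R.T + t - B).max())   # ≈ 0: submaps now agree
```

A rigid fit is just the special case s = 1; allowing the extra degree of freedom is what absorbs the consistent deformation before stitching.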
Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.
“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”
Their system was faster and produced less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes, like the inside of the MIT Chapel, using only short videos captured on a cell phone.
The average error in these 3D reconstructions was less than 5 centimeters.
In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.
“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.
This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.
Tech
The Pixel 10 Family Is Marked Down on Amazon
If you’re part of the Pixel crew like I am, you know that discounts on the latest generation are few and far between. That’s why I’m pleased to share that the entire family of Pixel 10 phones, from the regular Pixel 10 all the way up to the recently released Pixel 10 Pro Fold, is marked down by various amounts on Amazon.
Starting with the base model Pixel 10, you’ll save $200 on both the 128 GB and 256 GB models in all four colors, bringing the prices down to $599 and $699, respectively. The base version of the Pixel 10 makes a few compromises to bring the price down, like forgoing the Pro model’s vapor chamber for cooling and opting for a smaller camera sensor. It’s still an excellent choice for casual Android enjoyers, particularly at this price, but power users and mobile gamers may want to think about upgrading to the Pro.
Like the regular 10, the Pixel 10 Pro is marked down by $250 across all sizes, but color availability does change a bit, particularly on the 1 TB model. The biggest differences between the two models are the Pro’s higher-resolution screen, more memory, and bigger and better camera sensors. You can also get higher-storage models, while the regular Pixel 10 only goes up to 256 GB. The Pixel 10 Pro XL, which has the same specs as the 10 Pro but with a larger screen, is marked down by $300, again with some varying availability across colors and storage sizes.
Finally, we have the Pixel 10 Pro Fold, which just recently became available for purchase, and is already marked down by a not-insignificant $300 for both the 256 GB and 512 GB models, and I even spotted both colors in stock at both sizes. It has not one, but two excellent displays, and feels premium and sturdy, even if it is missing some of the features found on the 10 Pro.
With discounts on a variety of Pixel 10 series phones, you might need a little more help deciding which one is for you. We have a handy guide that compares all the currently available Pixel phones, including the Pixel 9a, which is currently discounted as well. We also have an in-depth review comparing the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL specifically, which is worth a read for the extra details.
Tech
Why people don’t demand data privacy, even as governments and corporations collect more personal information
When the Trump administration gave Immigration and Customs Enforcement access to a massive database of information about Medicaid recipients in June 2025, privacy and medical justice advocates sounded the alarm. They warned that the move could trigger all kinds of public health and human rights harms.
But most people likely shrugged and moved on with their day. Why is that? It’s not that people don’t care. According to a 2023 Pew Research Center survey, 81% of American adults said they were concerned about how companies use their data, and 71% said they were concerned about how the government uses their data.
At the same time, though, 61% expressed skepticism that anything they do makes much difference. This is because people have come to expect that their data will be captured, shared and misused by state and corporate entities alike. For example, many people are now accustomed to instinctively hitting “accept” on terms of service agreements, privacy policies and cookie banners regardless of what the policies actually say.
At the same time, data breaches have become a regular occurrence, and private digital conversations exposing everything from infidelity to military attacks have become the stuff of public scrutiny. The cumulative effect is that people are loath to change their behaviors to better protect their data—not because they don’t care, but because they’ve been conditioned to think that they can’t make a difference.
As scholars of data, technology and culture, we find that when people are made to feel as if data collection and abuse are inevitable, they are more likely to accept it—even if it jeopardizes their safety or basic rights.
Where regulation falls short
Policy reforms could help to change this perception, but they haven’t yet. In contrast to a growing number of countries that have comprehensive data protection or privacy laws, the United States offers only a patchwork of policies covering the issue.
At the federal level, the most comprehensive data privacy laws are decades old. The Privacy Act of 1974, passed in the wake of the federal wiretapping exposed by the Watergate and Counterintelligence Program scandals, limited how federal agencies collected and shared data. At the time, government surveillance was unexpected and unpopular.
But it also left open a number of exceptions—including for law enforcement—and did not apply to private companies. These gaps mean that data collected by private companies can end up in the hands of the government, and no regulation closes that loophole.
The Electronic Communications Privacy Act of 1986 extended protections against telephone wiretapping to electronic communications, including services such as email. But the law did not account for the possibility that most digital data would one day be stored on cloud servers.
Since 2018, 19 U.S. states have passed data privacy laws that limit companies’ data collection activities and enshrine new privacy rights for individuals. However, many of these laws also include exceptions for law enforcement access.
These laws predominantly take a consent-based approach—think of the pesky banner beckoning you to “accept all cookies”—that encourages you to give up your personal information even when it’s not necessary. These laws put the onus on individuals to protect their privacy, rather than simply barring companies from collecting certain kinds of information from their customers.
The privacy paradox
For years, studies have shown that people claim to care about privacy but do not take steps to actively protect it. Researchers call this the privacy paradox. It shows up when people use products that track them in invasive ways, or when they consent to data collection, even when they could opt out. The privacy paradox often elicits appeals to transparency: If only people knew that they had a choice, or how the data would be used, or how the technology works, they would opt out.
But this logic downplays the fact that options for limiting data collection are often intentionally designed to be convoluted, confusing and inconvenient, and they can leave users feeling discouraged about making these choices, as communication scholars Nora Draper and Joseph Turow have shown. This suggests that the discrepancy between users’ opinions on data privacy and their actions is hardly a contradiction at all. When people are conditioned to feel helpless, nudging them into different decisions isn’t likely to be as effective as tackling what makes them feel helpless in the first place.
Resisting data disaffection
The experience of feeling helpless in the face of data collection is a condition we call data disaffection. Disaffection is not the same as apathy. It is not a lack of feeling but rather an unfeeling—an intentional numbness. People manifest this numbness to sustain themselves in the face of seemingly inevitable datafication, the process of turning human behavior into data by monitoring and measuring it.
It is similar to how people choose to avoid the news, disengage from politics or ignore the effects of climate change. They turn away because data collection makes them feel overwhelmed and anxious—not because they don’t care.
Taking data disaffection into consideration, digital privacy is a cultural issue—not an individual responsibility—and one that cannot be addressed with personal choice and consent. To be clear, comprehensive data privacy law and changing behavior are both important. But storytelling can also play a powerful role in shaping how people think and feel about the world around them.
We believe that a change in popular narratives about privacy could go a long way toward changing people’s behavior around their data. Talk of “the end of privacy” helps create the world the phrase describes. Philosopher of language J.L. Austin called these sorts of expressions performative utterances. This kind of language frames data collection, surveillance and abuse as inevitable, so that people feel like they have no choice.
Cultural institutions have a role to play here, too. Narratives reinforcing the idea of data collection as being inevitable come not only from tech companies’ PR machines but also mass media and entertainment, including journalists. The regular cadence of stories about the federal government accessing personal data, with no mention of recourse or justice, contributes to the sense of helplessness.
Alternatively, it’s possible to tell stories that highlight the alarming growth of digital surveillance and frame data governance practices as controversial and political rather than innocuous and technocratic. The way stories are told affects people’s capacity to act on the information that the stories convey. It shapes people’s expectations and demands of the world around them.
The ICE-Medicaid data-sharing agreement is hardly the last threat to data privacy. But the way people talk and feel about it can make it easier—or more difficult—to ignore data abuses the next time around.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Why people don’t demand data privacy, even as governments and corporations collect more personal information (2025, November 5), retrieved 5 November 2025 from https://techxplore.com/news/2025-11-people-dont-demand-privacy-corporations.html
Tech
Zohran Mamdani Just Inherited the NYPD Surveillance State
Mamdani’s campaign did not respond to a request for comment.
The NYPD’s turn toward mass surveillance was begun in earnest by Commissioner Raymond Kelly during the immediate aftermath of September 11, buoyed by hundreds of millions of dollars in federal anti-terrorism grants. However, Ferguson says Kelly’s rival, former commissioner William Bratton, was a key architect behind the NYPD’s reliance on “big data,” by implementing the CompStat data analysis system to map and electronically collate crime data during the mid-1990s and again during his return to New York City in 2014 under Mayor Bill de Blasio. Bratton was also a mentor to Jessica Tisch and has spoken admiringly of her since leaving the NYPD.
Tisch was a main architect of the NYPD’s Domain Awareness System, an enormous, $3 billion, Microsoft-based surveillance network of tens of thousands of private and public surveillance cameras, license plate readers, gunshot detectors, social media feeds, biometric data, cryptocurrency analysis, location data, body-worn and dashcam livestreams, and other technology that blankets the five boroughs’ 468-square-mile territory. Patterned after London’s 1990s CCTV surveillance network, the “ring of steel” was initially developed under Kelly as an anti-terrorism surveillance system for Lower and Midtown Manhattan before being rebranded as the DAS and marketed to other police departments as a potential for-profit tool. Several dozen of the 17,000 cameras in New York City public housing developments were also linked in through backdoor methods by the Eric Adams administration last summer, with thousands more in the pipeline, according to NY Focus.
Though the DAS has been operational for more than a decade and survived prior challenges over data retention and privacy violations from civil society organizations like the New York Civil Liberties Union, it remains controversial. In late October, a Brooklyn couple filed a civil suit along with Surveillance Technology Oversight Project (STOP), a local privacy watchdog, against the DAS, alleging violations of New York State’s constitutional right to privacy by the NYPD’s persistent mass surveillance and data retention. NYPD officers, the suit claims, can “automatically track an individual across the city using computer vision software, which follows a person from one camera to the next based on descriptors as simple as the color of a piece of clothing.” The technology, they allege, “transforms every patrol officer into a mobile intelligence unit, capable of conducting warrantless surveillance at will.”