Tech
DHS Wants a Single Search Engine to Flag Faces and Fingerprints Across Agencies
The Department of Homeland Security is moving to consolidate its face recognition and other biometric technologies into a single system capable of comparing faces, fingerprints, iris scans, and other identifiers collected across its enforcement agencies, according to records reviewed by WIRED.
The agency is asking private biometric contractors how to build a unified platform that would let employees search faces and fingerprints across large government databases already filled with biometrics gathered in different contexts. The goal is to connect components including Customs and Border Protection, Immigration and Customs Enforcement, the Transportation Security Administration, US Citizenship and Immigration Services, the Secret Service, and DHS headquarters, replacing a patchwork of tools that do not share data easily.
The system would support watchlisting, detention, or removal operations and comes as DHS is pushing biometric surveillance far beyond ports of entry and into the hands of intelligence units and masked agents operating hundreds of miles from the border.
The records show DHS is trying to buy a single “matching engine” that can take different kinds of biometrics—faces, fingerprints, iris scans, and more—and run them through the same backend, giving multiple DHS agencies one shared system. In theory, that means the platform would handle both identity checks and investigative searches.
For face recognition specifically, identity verification means the system compares one photo to a single stored record and returns a yes-or-no answer based on similarity. For investigations, it searches a large database and returns a ranked list of the closest-looking faces for a human to review, rather than making the call on its own.
Both types of searches come with real technical limits. In identity checks, the match threshold is set strictly, so the system is less likely to wrongly flag an innocent person, but it will fail to confirm a genuine match when the submitted photo is slightly blurry, angled, or outdated. For investigative searches, the cutoff is considerably lower: the system is more likely to include the right person somewhere in the results, but it also produces many more false positives that require human review.
The documents make clear that DHS wants control over how strict or permissive a match threshold should be, depending on the context.
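To make the distinction concrete, here is a minimal sketch, not drawn from the DHS records, of how a single matching engine could serve both modes by changing only its threshold and output. The embedding inputs, threshold values, and function names are illustrative assumptions.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.85) -> bool:
    """1:1 identity check: one photo against one stored record, yes or no.

    The strict (hypothetical) threshold keeps false matches rare, at the cost
    of rejecting genuine matches when the probe photo is blurry or outdated.
    """
    return similarity(probe, enrolled) >= threshold

def investigate(probe: np.ndarray, gallery: dict[str, np.ndarray],
                threshold: float = 0.55, top_k: int = 20) -> list[tuple[str, float]]:
    """1:N search: rank the closest-looking records for a human to review.

    The permissive cutoff makes it likelier the right person appears somewhere
    in the list, but it also surfaces many lookalikes (false positives).
    """
    scores = [(rec_id, similarity(probe, emb)) for rec_id, emb in gallery.items()]
    hits = [(rec_id, score) for rec_id, score in scores if score >= threshold]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)[:top_k]
```

The only real difference between the two modes in this sketch is the cutoff and the shape of the answer, which is why tunable strictness is so consequential: the same engine becomes more or less error-prone depending on how it is configured.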
The department also wants the system wired directly into its existing infrastructure. Contractors would be expected to connect the matcher to current biometric sensors, enrollment systems, and data repositories so information collected in one DHS component can be searched against records held by another.
It’s unclear how workable this is. Different DHS agencies have bought their biometric systems from different companies over many years. Each system turns a face or fingerprint into a string of numbers, but those numerical templates are typically readable only by the software that created them.
In practice, this means a new department-wide search tool cannot simply “flip a switch” and make everything compatible. DHS would likely have to convert old records into a common format, rebuild them using a new algorithm, or create software bridges that translate between systems. All of these approaches take time and money, and each can affect speed and accuracy.
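For a sense of what the second option, rebuilding old records under a new algorithm, might involve, here is a hypothetical sketch; the record layout, field names, and the `encode` function are assumptions for illustration, not details from the DHS documents.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BiometricRecord:
    subject_id: str
    vendor: str                    # system that produced the template
    template: bytes                # vendor-specific encoding, opaque to other systems
    source_image: Optional[bytes]  # raw capture, if it was retained

def reenroll(records: list[BiometricRecord],
             encode: Callable[[bytes], bytes]) -> tuple[list[BiometricRecord], list[BiometricRecord]]:
    """Rebuild templates under one shared algorithm where a source image survives.

    `encode` stands in for the unified matcher's feature extractor (assumed).
    Records whose original capture was never kept cannot be migrated this way;
    they would need a vendor-supplied translation bridge or fresh collection.
    """
    migrated, stranded = [], []
    for rec in records:
        if rec.source_image is not None:
            migrated.append(BiometricRecord(rec.subject_id, "unified",
                                            encode(rec.source_image), rec.source_image))
        else:
            stranded.append(rec)
    return migrated, stranded
```

Every record that lands in the stranded bucket here represents exactly the kind of compatibility gap those approaches are meant to close.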
At the scale DHS is proposing—potentially billions of records—even small compatibility gaps can spiral into large problems.
The documents also contain a placeholder indicating DHS wants to incorporate voiceprint analysis, but they include no detailed plans for how voiceprints would be collected, stored, or searched. The agency previously used voiceprints in its “Alternatives to Detention” program, which allowed immigrants to remain in their communities but required them to submit to intensive monitoring, including GPS ankle trackers and routine check-ins that confirmed their identity using biometric voiceprints.
Tech
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Medical experts I spoke with balked at the idea of uploading their own health data for an AI model like Muse Spark to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for their doctor.
It can be tempting to rely on AI for help interpreting health data, especially given the skyrocketing cost of medical treatment and the inaccessibility of regular doctor visits for some people navigating the US health care system.
“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for people’s health, not just better at answering health questions than some competitor chatbot.
When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.
The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.
“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”
In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and swayed by how users frame their questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.
When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI complied in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that the approach was not for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories on most days, which would leave me malnourished.
Tech
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, so long as they did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a definition that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. It would also count as a critical harm if an AI model, acting on its own, engaged in conduct that would be a criminal offense if committed by a human and that led to those extreme outcomes. If an AI model committed any of these actions, the lab behind it could not be held liable under SB 3444, so long as the harm wasn’t caused intentionally or recklessly and the lab had published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” That stance also aligns with the broader view in Silicon Valley in recent years that it’s paramount for AI legislation not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
Tech
China Is Cracking Down on Scams. Just Not the Ones Hitting Americans
Governments around the world have been struggling to address the rise of industrial-scale scamming operations based in countries like Laos, Myanmar, and Cambodia that have cost victims billions of dollars over the past few years. The operations often have ties to Chinese organized crime, use forced labor to carry out the actual scamming, and rely on vast money laundering networks to collect their profits. They have become so widespread and ingrained in the region that even major international law enforcement collaborations targeting individual scam centers or kingpins haven’t been able to stem the tide.
The FBI said this week that Americans reported more than $17.7 billion in losses to “cyber-enabled” scams last year, likely a major undercount of the real total, given that many victims don’t report their experiences. Some US officials say that a major barrier to comprehensively addressing the issue is the lack of collaboration with Chinese authorities. China’s efforts to address industrial scamming, they argue, appear aimed at reducing the number of Chinese citizens being victimized rather than comprehensively stopping the activity to protect all victims around the world.
“To its credit, China has cracked down on these operations, but it has done so selectively, largely turning a blind eye to scam centers victimizing foreigners,” Reva Price, a member of the US-China Economic and Security Review Commission, said at a Senate hearing last month. “As a result, the Chinese criminal syndicates have been incentivized to shift toward targeting Americans.”
According to research the commission published in March, Beijing’s selective strategy has helped embolden some Chinese scammers, even those working within China, to continue operating so long as they exclusively target foreigners.
Other US-based researchers have come to similar conclusions. From 2023 to 2024, China reported a 30 percent decrease in the amount of money its citizens lost to scams, while the US suffered a more than 40 percent increase, according to congressional testimony last year by Jason Tower, who was then the Myanmar country director for the US Institute of Peace’s Program on Transnational Crime and Security in Southeast Asia. In response to Beijing’s enforcement dynamics, Tower said at the time, “the scam syndicates are increasingly pivoting to target the rest of the world, and especially Americans.”
The United Nations Office on Drugs and Crime noted last year that scam centers have been diversifying their worker pools, shifting from predominantly trafficking Chinese nationals and other Chinese speakers to entrapping people from a broader array of countries and backgrounds who speak various languages. UN researchers attributed this change in part to attackers broadening their targets to include different populations around the world. But they added that the dynamic also seemed to be a reaction to Chinese enforcement and Beijing’s efforts to protect Chinese citizens.
“China is doing more to fight fraud—like orders of magnitude more—than any other country,” says Gary Warner, a longtime digital scams researcher and director of intelligence at the cybersecurity firm DarkTower. “But I would agree that the crackdown by China on people scamming China has squeezed the balloon so to speak and led to more international and American targeting.”
The Chinese government has spent years investing in national safety campaigns warning citizens about the threat of scams and how to avoid falling victim to them. Some of the public discourse appeals to a sense of national solidarity. There’s a common meme in China, 中国人不骗中国人, literally “Chinese people don’t deceive Chinese people,” that is used to signal trust when swapping restaurant recommendations or job leads. In the context of digital scams, a variant has emerged: “Chinese don’t scam Chinese.”