Tech
The iPhone Gets a D– for Repairability
The iPhone is the least fixable phone on the market, according to repairability experts. Phones from Samsung and Google are not far behind.
The latest repairability ratings are from an annual report called “Failing the Fix” put out today by the consumer advocacy group US PIRG. A 2021 French law required products to be labeled with repairability scores, and US PIRG says this is the first report since then that really shows which companies are—or are not—making progress. The answer is that repairability is progressing much more quickly in some places than others.
The results were good for phones made by Motorola, which got a B+. Google’s phones got a C–. The verdict was worse for Samsung phones, which got a D. Last on the list was Apple with a D–. Apple and Samsung did not immediately respond to requests for comment.
Scores were better for laptops than for smartphones: Asus topped the list with a B+, while Apple’s MacBooks landed at the bottom with a C–.
The authors of the report are hoping that publishing these low scores will encourage manufacturers to do better.
“Putting these right incentives in place could push these companies to make innovations that are actually beneficial,” says Nathan Proctor, senior director of the US PIRG campaign for the right to repair. “Instead of coming up with new ways to jam AI down our throats, you can make stuff that lasts and that we can fix.”
Despite the many right-to-repair concessions companies have made—like making their tools, parts, and repair instructions publicly available—those rankings are lower than in years past, largely because of new information gleaned from European laws requiring repair scores to be printed on product packaging.
The French law grades products based on how easily they can be disassembled, whether documentation and tools are provided, and the availability and price of spare parts. In 2023, the European Union passed a law establishing the European Product Registry for Energy Labelling, which grades devices on key repairability factors: ease of access and disassembly, battery endurance, ingress protection such as waterproofing, and durability against repeated drops. The rankings go from A to F.
To arrive at its own ratings, US PIRG collates the EPREL and France’s repair indexes with other US-specific factors, like whether companies are actively lobbying against the right to repair or are members of trade associations that do so.
“If you’re buying your equipment from a company that’s spending their money to lobby against your right to repair that thing, that doesn’t speak well for their support, for your ability to fix that,” Proctor says. “So we also dock points for some of those legislative activities.”
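In rough code terms, US PIRG’s collation amounts to blending the European repair indexes and then subtracting deductions for legislative activity. The sketch below is a minimal illustration with made-up weights, penalty, and grade cutoffs; it is an assumption about the shape of such a calculation, not US PIRG’s actual formula.

```python
# Hypothetical sketch of a composite repairability grade. The weights,
# lobbying penalty, and letter cutoffs below are illustrative
# assumptions, not US PIRG's published methodology.

def composite_score(eprel_score: float, french_index: float,
                    lobbies_against_repair: bool) -> float:
    """Average two 0-10 repair indexes, then dock points for lobbying."""
    base = (eprel_score + french_index) / 2
    if lobbies_against_repair:
        base -= 2.0  # assumed deduction for anti-repair lobbying
    return max(base, 0.0)

def letter_grade(score: float) -> str:
    """Map a 0-10 score onto letter grades, with assumed cutoffs."""
    for cutoff, grade in [(9.0, "A"), (7.5, "B"), (6.0, "C"), (4.5, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# A middling pair of European scores plus a lobbying deduction lands at a D.
print(letter_grade(composite_score(6.5, 6.9, lobbies_against_repair=True)))  # "D"
```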
Apple’s phones are getting better scores than in years past, like when iPhones were assigned an F rating in 2022. (iPhones got a C– in 2025.) The low rating for Apple’s phones comes down to software support and to how the EU laws track what companies enable in their products. Under those laws, companies have to self-report how their devices meet repair requirements, and those self-reported rankings tend to be low.
“When we’ve been grading on a curve, Apple has not been a standout in the bad column,” Proctor says. “But why are we grading on a curve? We should just have longer-lasting products.”
The ultimate goal of these rankings, Proctor says, is to bring attention to the importance of repairability, accessibility, and waste reduction.
“This is an emerging, vitally important issue that we need better leadership on from companies and from other public policy officials,” Proctor says. “We should not be trashing all of our internet-connected stuff every couple of years because it’s impossible to use it with the software. It’s totally unsustainable. It’s crazy. Let’s not build that world. That world is a dystopia.”
“I’m actually pretty confident that some of that stuff’s going to get addressed,” Proctor adds. “Apple engineers are good at making stuff. They’re good at solving problems.”
Tech
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.
It can be tempting to rely on AI-assisted help for interpreting health data, especially with the skyrocketing cost of medical treatments and the overall inaccessibility of regular doctor visits for some people navigating the US health care system.
“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.
When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.
The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.
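If you do experiment with a chatbot this way, the strip-personal-details step can happen before anything leaves your machine. Below is a minimal, illustrative sketch; the patterns and the lab_report.txt filename are assumptions, and a handful of regexes is nowhere near a complete PII filter.

```python
import re

# Illustrative redaction pass for a lab-report text file before pasting
# it into a chatbot. These patterns are assumptions that catch only
# obvious identifiers; they are not a complete PII scrubber.

PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",                    # US Social Security numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",            # email addresses
    r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b": "[PHONE]",  # US phone numbers
    r"(?i)\b(patient|name)\s*:\s*.+": "[NAME LINE]",      # "Patient: Jane Doe" lines
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders, one pattern at a time."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

# Hypothetical usage: redact a local file, then review the output by hand.
with open("lab_report.txt") as f:
    print(redact(f.read()))
```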
“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”
In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.
When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this regimen was not for most people and could put me at risk for eating disorders, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, which would leave me malnourished.
Tech
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, so long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. A critical harm would also include an AI model engaging in conduct on its own that, if committed by a human, would constitute a criminal offense and that leads to those extreme outcomes. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm wasn’t intentional or reckless and the lab had published its reports.
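Condensed into code, the shield’s conditions as described above look roughly like the following sketch; the function and its inputs are illustrative paraphrases of the reporting, not language from the bill itself.

```python
# Sketch of SB 3444's liability shield as described in this article.
# The structure is an illustrative assumption; consult the bill text
# for the actual terms.

COMPUTE_THRESHOLD_USD = 100_000_000  # "frontier model" training-cost cutoff

def shielded_from_liability(training_cost_usd: float,
                            caused_intentionally_or_recklessly: bool,
                            published_required_reports: bool) -> bool:
    """Return True if a developer would be shielded for a 'critical harm'."""
    is_frontier = training_cost_usd > COMPUTE_THRESHOLD_USD
    return (is_frontier
            and not caused_intentionally_or_recklessly
            and published_required_reports)

# Example: a frontier lab that published its reports and did not act
# intentionally or recklessly would be shielded under this reading.
print(shielded_from_liability(2e8, False, True))  # True
```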
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note consistent with the Trump administration’s crackdown on state AI safety laws, saying it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” That position also fits Silicon Valley’s broader view in recent years that it’s paramount for AI legislation not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
Tech
China Is Cracking Down on Scams. Just Not the Ones Hitting Americans
Governments around the world have been struggling to address the rise of industrial-scale scamming operations based in countries like Laos, Myanmar, and Cambodia that have cost victims billions of dollars over the past few years. The operations often have ties to Chinese organized crime, use forced labor to carry out the actual scamming, and rely on vast money laundering networks to collect a profit. They have become so widespread and ingrained in the region that even major international law enforcement collaborations targeting individual scam centers or kingpins haven’t been able to stem the tide.
The FBI said this week that “cyber-enabled” scam complaints from Americans totaled more than $17.7 billion in reported losses last year—likely a major undercount of the real total, given that many victims don’t report their experiences. Some US officials say that a major barrier to comprehensively addressing the issue is the lack of collaboration with Chinese authorities. China’s efforts to address industrial scamming, they argue, appear aimed at reducing the number of Chinese citizens being impacted rather than comprehensively stopping the activity to protect all victims around the world.
“To its credit, China has cracked down on these operations, but it has done so selectively, largely turning a blind eye to scam centers victimizing foreigners,” Reva Price, a member of the US-China Economic and Security Review Commission, said at a Senate hearing last month. “As a result, the Chinese criminal syndicates have been incentivized to shift toward targeting Americans.”
According to research the commission published in March, Beijing’s selective strategy has helped embolden some Chinese scammers, even those working within China, to continue operating so long as they exclusively target foreigners.
Other US-based researchers have come to similar conclusions. From 2023 to 2024, China reported a 30 percent decrease in the amount of money its citizens lost to scams, while the US suffered a more than 40 percent increase, according to congressional testimony last year by Jason Tower, who was then the Myanmar country director for the US Institute of Peace’s Program on Transnational Crime and Security in Southeast Asia. In response to Beijing’s enforcement dynamics, Tower said at the time, “the scam syndicates are increasingly pivoting to target the rest of the world, and especially Americans.”
The United Nations Office on Drugs and Crime noted last year that scam centers have been diversifying their worker pools, shifting from predominantly trafficking Chinese nationals and other Chinese speakers to entrapping people from a broader array of countries and backgrounds who speak various languages. UN researchers attributed this change in part to attackers broadening their targets to include different populations around the world. But they added that the dynamic also seemed to be a reaction to Chinese enforcement and Beijing’s efforts to protect Chinese citizens.
“China is doing more to fight fraud—like orders of magnitude more—than any other country,” says Gary Warner, a longtime digital scams researcher and director of intelligence at the cybersecurity firm DarkTower. “But I would agree that the crackdown by China on people scamming China has squeezed the balloon so to speak and led to more international and American targeting.”
The Chinese government has spent years investing in national safety campaigns warning citizens about the threat of scams and how to avoid falling victim to them. Some of the public discourse attempts to appeal to a sense of national solidarity. There’s a common meme in China, 中国人不骗中国人, literally “Chinese people don’t deceive Chinese people,” that is used to signal trust when swapping restaurant recommendations or job leads. In the context of digital scams, a variant has emerged: “Chinese don’t scam Chinese.”