Tech
Technology Is Reshaping Sleep Apnea Treatment
Inspire therapy—a hypoglossal nerve stimulation implant—has been FDA-approved for more than 11 years, with over 100,000 patients treated across the US, Europe, and Asia. Ruchir Patel, Inspire’s senior medical director, says data show reductions in daytime sleepiness, a 79 percent drop in sleep apnea severity, and a 90 percent reduction in snoring. Early US data report average nightly usage of more than 6.5 hours. “This is an exciting time because there are more treatment options available than in the past,” he says.
Pharmaceutical approaches are also emerging. In 2024, the US Food and Drug Administration approved Zepbound (tirzepatide) for moderate to severe obstructive sleep apnea (OSA) in adults with obesity—the first weight-loss drug to carry a specific sleep apnea indication.
Meanwhile, Cambridge, Massachusetts–based startup Apnimed has developed a nightly pill targeting neuromuscular pathways that influence upper airway tone. Rather than mechanically splinting the airway open, the drug aims to stabilize it biologically.
“For a long time, OSA was understood primarily as an anatomical problem, so the logical solution was mechanical,” says John Cronin, chief medical officer at Apnimed. As understanding evolved, the question became: “Could we design a therapy that targets the biology of the condition directly, rather than relying solely on mechanical support?” The company has completed two phase three trials and plans to submit a New Drug Application to the FDA this year.
For all the innovation, Steier remains pragmatic. “I couldn’t be happier than finding someone who’s got typical sleep apnea and gets CPAP therapy,” he says. Modern machines automatically adjust pressure to airway resistance. “A single night can make all the difference.” Patients return re-energized, telling him they’ve got their lives back.
Sleep medicine is still relatively young, and research is only beginning to capture the diversity of the condition. That complexity also underpins efforts to improve CPAP use rather than abandon it.
Amanda Sathyapala, an associate professor at Imperial College London’s National Heart and Lung Institute, led the research showing 62 percent of patients were not using CPAP enough to make a meaningful health impact. Her team has studied the psychology of adherence, finding that factors such as understanding risk and confidence using the device shape long-term use.
Drawing on behavioral science, she developed CPAP Buddy, an app offering video-based behavioral therapy, peer support, and round-the-clock answers to patient questions. The project has received £2.2 million from the UK’s Medical Research Council, alongside backing from CPAP manufacturer Fisher & Paykel.
“CPAP is likely to be the most effective treatment that you can get because it’s giving air directly into the airway,” Sathyapala says. “[CPAP] is always going to be the most efficacious once the person’s using it, therefore it’s worth trying to get people to use it.”
For her, the problem is not the machine but behavior. “I don’t like to give up if we haven’t tried the right things,” she says. Using CPAP, she adds, is no different from “losing weight, stopping smoking, starting up a long-term physical activity program—it’s a behavior change.”
Meta Ramps Up Efforts to Disrupt Industrialized Scamming
With organized, industrial-scale scamming causing a multibillion-dollar crisis around the world, Meta announced new account protections on Wednesday aimed at flagging potentially suspicious activity to users as early in a scam interaction as possible. The company also shared details about a recent Thai law enforcement collaboration that resulted in 21 arrests and Meta disabling over 150,000 user accounts associated with Southeast Asian scam compounds.
The disruptive action—a joint effort of the Royal Thai Police, the FBI, the United Kingdom’s National Crime Agency, the Australian Federal Police, and other law enforcement agencies—focused on alleged scammers targeting victims in numerous countries, including the US and UK as well as multiple Asian and Pacific region countries. The account protections Meta debuted on Wednesday include expanding its Messenger scam detection features for more users around the world, introducing warnings about potentially suspicious activity when a user is initiating a new WhatsApp device link, and testing new Facebook alerts to flag potentially suspicious friend requests.
“Transnational scam syndicates continue to exploit digital platforms and operate across multiple jurisdictions,” Gregory Kang, the deputy assistant commissioner of the Singapore Police Force, said in a statement on Wednesday. “Joint operations like this demonstrate the importance of close cooperation between law enforcement agencies and industry partners.”
Mainstream social media and communication platforms are a crucial digital meeting ground where online scammers—who are often forced laborers—and victims from around the world can cross paths. Professionalized “pig butchering”-style investment scamming has expanded in Southeast Asia and proliferated around the world, creating more urgency than ever to block and deter fraudulent activity on consumer platforms.
Meta began speaking publicly about its work focused on scam compounds at the end of 2024. That year, the company said that it had taken down more than 2 million accounts related to scam compounds.
On Wednesday, the company said that in 2025 it took down 10.9 million Facebook and Instagram accounts “associated with criminal scam centers” and removed more than 159 million scam ads across all categories. Meta has increasingly come under fire for not taking enough proactive action against scams across its platforms—with Reuters reporting in December that billions of scam ads appear every day and that internal Meta estimates forecast up to 10 percent of its revenue may come from scam advertising. A company spokesperson at the time disputed the figures. Law enforcement in many regions—including Thai and Cambodian police—have carried out a spate of operations in recent months to intervene in scam compounds, make dozens of arrests, and seize funds. And the crackdowns aren’t limited to Southeast Asia. Meta said in February, for example, that it provided support for a Nigerian Police Force and UK National Crime Agency operation focused on disrupting an alleged scam center in Nigeria.
Meta announced other efforts on Wednesday to combat scamming and abusive behavior on its platforms. The company said it is further expanding advertiser verification with a goal that 90 percent of ad revenue will come from verified advertisers by the end of 2026, which would be a major increase from 70 percent currently. The goal, Meta says, is for the final 10 percent to accommodate small, local businesses and other low-resource, benign entities that just want to run a few ads.
The company also said that its anti-scam specialists have built AI detection systems to help flag more situations where scammers may be impersonating brands, celebrities, or other public figures. These systems are also designed to catch more “deceptive links” that could be used to fool targets into visiting malicious websites.
The scamming ecosystem around the world has expanded and matured to such a degree that no single platform or government can solve the problem on its own. But experts have consistently emphasized to WIRED in recent years that Meta’s platforms are a key battleground where stronger detections and defenses could raise the barrier to entry for scammers trying to reach new victims.
As Chris Sonderby, Meta vice president and deputy general counsel, put it in a statement on Wednesday, “we will continue to invest in technology and partnerships to stay ahead of these adversaries.”
My Favorite Piece of Coffee Gear Makes Me Do All the Work, and That’s Why I Love It
Coffee is the original biohack and the nation’s most popular productivity tool. As we adjust to the changeover to daylight saving time, the caffeine-addicted WIRED Reviews team is writing about our favorite coffee brewing routines and devices. Today, contributor Brad Bourque pays homage to his manual espresso maker. Look out for more stories about other WIRED writers’ favorite brewing methods.
For me, coffee is as much a nerdy obsession as it is a practical necessity. I dislike maintenance, and I prefer simplicity, but I also need my coffee to be bold and interesting. For years, I used a kettle and Aeropress, which were easy to keep clean and tucked away in a crowded cabinet. My roommates at the time really appreciated that. But when I got a place of my own, I wanted something more substantial, if also still dead simple. The Flair Signature, a manual espresso maker, seemed like an obvious choice. It still sits proudly on my counter in all its stainless steel glory, occupying a permanent spot by my sink.
Where larger, electric espresso machines generate the pressure and heat needed for espresso inside their massive housings, the Flair takes a different approach. A large lever sits atop a small stack of brewing equipment, and you use that lever to create the bars of pressure necessary to get espresso. There’s a chamber for your grounds and another atop it for hot water. Fill them up in the correct order, pull down on the handle, guided by the handy pressure gauge, and watch in delight as thick, crema-topped espresso drips out the bottom.
There are other crucial pieces to this puzzle, and I’ve fully committed to the bit by opting for a simple gooseneck kettle and hand burr grinder, chosen for their simplicity and consistency. Coffee enthusiasts should instantly recognize the Stagg EKG kettle from Fellow, and yes, mine is draped in green and yellow reminiscent of my favorite soccer team, thank you for noticing. The 1ZPresso JX-Pro S isn’t particularly fancy, but it’s easy to clean and consistent, and it came highly recommended by Reddit, though I’ll admit I’ve been tempted by the Comandante C40, a hand grinder that costs more than the rest of my setup combined.
The entire workflow is thankfully almost silent, a blessing on quiet and/or hungover Sunday mornings. I can throw some Steely Dan on the record player, fire up the kettle, and start turning the hand grinder as I take care of my other morning chores. While it seems straightforward, it’s a process that has a surprising number of variables to tweak, and I feel them firsthand every time I pull a shot. Each minor adjustment to the grind or water temperature creates a cascading set of changes to both the process and the end result. It’s a daily chase for unattainable perfection that I’m well familiar with after using the Aeropress for so long, and I find it deeply satisfying when I feel like I’ve nailed it. Knowing I was fully responsible for that great first sip gives me a bigger boost in the morning than any amount of caffeine could.
A better method for planning complex visual tasks
MIT researchers have developed a generative artificial intelligence-driven approach for planning long-term visual tasks, like robot navigation, that is about twice as effective as some existing techniques.
Their method uses a specialized vision-language model to perceive the scenario in an image and simulate actions needed to reach a goal. Then a second model translates those simulations into a standard programming language for planning problems, and refines the solution.
In the end, the system automatically generates a set of files that can be fed into classical planning software, which computes a plan to achieve the goal. This two-step system generated plans with an average success rate of about 70 percent, outperforming the best baseline methods that could only reach about 30 percent.
Importantly, the system can solve new problems it hasn’t encountered before, making it well-suited for real environments where conditions can change at a moment’s notice.
“Our framework combines the advantages of vision-language models, like their ability to understand images, with the strong planning capabilities of a formal solver,” says Yilun Hao, an aeronautics and astronautics (AeroAstro) graduate student at MIT and lead author of an open-access paper on this technique. “It can take a single image and move it through simulation and then to a reliable, long-horizon plan that could be useful in many real-life applications.”
She is joined on the paper by Yongchao Chen, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS); Chuchu Fan, an associate professor in AeroAstro and a principal investigator in LIDS; and Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab. The paper will be presented at the International Conference on Learning Representations.
Tackling visual tasks
For the past few years, Fan and her colleagues have studied the use of generative AI models to perform complex reasoning and planning, often employing large language models (LLMs) to process text inputs.
Many real-world planning problems, like robotic assembly and autonomous driving, have visual inputs that an LLM can’t handle well on its own. The researchers sought to expand into the visual domain by utilizing vision-language models (VLMs), powerful AI systems that can process images and text.
But VLMs struggle to understand spatial relationships between objects in a scene and often fail to reason correctly over many steps. This makes it difficult to use VLMs for long-range planning.
On the other hand, scientists have developed robust, formal planners that can generate effective long-horizon plans for complex situations. However, these software systems can’t process visual inputs and require expert knowledge to encode a problem into language the solver can understand.
Fan and her team built an automatic planning system that takes the best of both methods. The system, called VLM-guided formal planning (VLMFP), utilizes two specialized VLMs that work together to turn visual planning problems into ready-to-use files for formal planning software.
The researchers first carefully trained a small model they call SimVLM to specialize in describing the scenario in an image using natural language and simulating a sequence of actions in that scenario. Then a much larger model, which they call GenVLM, uses the description from SimVLM to generate a set of initial files in a formal planning language known as the Planning Domain Definition Language (PDDL).
The files are ready to be fed into a classical PDDL solver, which computes a step-by-step plan to solve the task. GenVLM compares the results of the solver with those of the simulator and iteratively refines the PDDL files.
“The generator and simulator work together to be able to reach the exact same result, which is an action simulation that achieves the goal,” Hao says.
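The generate/solve/simulate/refine loop the article describes can be sketched in miniature. This is not the paper's code: the planner, simulator, and refinement step below are toy stand-ins (the real system uses the SimVLM and GenVLM models and a classical PDDL solver), chosen only to make the control flow concrete. Here the "generator" initially mis-reads the agent's starting position, and one refinement round corrects the problem specification.

```python
# Toy, self-contained sketch of VLMFP's generate/solve/simulate/refine loop.
# Every component below is a hypothetical stand-in, not the paper's code.

def solve(problem):
    # Stand-in for a classical planner: plans the moves that take the agent
    # from problem["start"] to problem["goal"] along a line.
    start, goal = problem["start"], problem["goal"]
    step = 1 if goal >= start else -1
    return ["move"] * abs(goal - start), step

def simulate(plan, step, true_start, goal):
    # Stand-in for SimVLM: replay the plan from the *true* state and
    # check whether the goal is actually reached.
    pos = true_start
    for _ in plan:
        pos += step
    return pos == goal

def refine(problem, true_start):
    # Stand-in for GenVLM's refinement: repair the problem file using
    # feedback from the simulator (here, fix a mis-perceived start state).
    fixed = dict(problem)
    fixed["start"] = true_start
    return fixed

def vlmfp_loop(problem, true_start, max_rounds=3):
    for _ in range(max_rounds):
        plan, step = solve(problem)
        if simulate(plan, step, true_start, problem["goal"]):
            return plan  # simulator confirms the plan achieves the goal
        problem = refine(problem, true_start)
    return None  # no verified plan within the refinement budget

# Generator mis-reads the scene (start=0), but the agent is really at 2;
# one refinement round fixes the problem file, yielding a 3-move plan.
print(vlmfp_loop({"start": 0, "goal": 5}, true_start=2))
```

The key structural point survives the simplification: the solver only trusts a plan once the simulator, working from the image, confirms it reaches the goal.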
Because GenVLM is a large generative AI model, it has seen many examples of PDDL during training and learned how this formal language can solve a wide range of problems. This existing knowledge enables the model to generate accurate PDDL files.
A flexible approach
VLMFP generates two separate PDDL files. The first is a domain file that defines the environment, valid actions, and domain rules. It also produces a problem file that defines the initial states and the goal of a particular problem at hand.
“One advantage of PDDL is the domain file is the same for all instances in that environment. This makes our framework good at generalizing to unseen instances under the same domain,” Hao explains.
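To make the domain/problem split concrete, here is a minimal, hypothetical example of the two kinds of PDDL files, emitted from Python. The `grid-nav` domain and its predicates are invented for illustration and are not from the paper; the point is that the domain file (actions and rules) is written once per environment, while a fresh problem file (initial state and goal) is generated per instance.

```python
# Illustrative only: a minimal PDDL domain/problem pair. The "grid-nav"
# domain below is a hypothetical example, not taken from the paper.

# Domain file: shared by every instance in this environment.
DOMAIN = """\
(define (domain grid-nav)
  (:predicates (at ?c) (adjacent ?a ?b))
  (:action move
    :parameters (?from ?to)
    :precondition (and (at ?from) (adjacent ?from ?to))
    :effect (and (not (at ?from)) (at ?to))))
"""

def problem_file(start, goal):
    # Problem file: regenerated per instance, with a new start and goal,
    # while referencing the same shared domain.
    return (
        f"(define (problem nav-{start}-{goal})\n"
        f"  (:domain grid-nav)\n"
        f"  (:init (at {start}) (adjacent {start} {goal}))\n"
        f"  (:goal (at {goal})))\n"
    )

print(problem_file("cell-1", "cell-2"))
```

Because only the problem file changes between instances, a system that gets the domain file right once can, in principle, handle unseen instances of the same environment, which is the generalization property Hao describes.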
To enable the system to generalize effectively, the researchers needed to carefully design just enough training data for SimVLM so the model learned to understand the problem and goal without memorizing patterns in the scenario. When tested, SimVLM successfully described the scenario, simulated actions, and detected if the goal was reached in about 85 percent of experiments.
Overall, the VLMFP framework achieved a success rate of about 60 percent on six 2D planning tasks and greater than 80 percent on two 3D tasks, including multirobot collaboration and robotic assembly. It also generated valid plans for more than 50 percent of scenarios it hadn’t seen before, far outpacing the baseline methods.
“Our framework can generalize when the rules change in different situations. This gives our system the flexibility to solve many types of visual-based planning problems,” Fan adds.
In the future, the researchers want to enable VLMFP to handle more complex scenarios and explore methods to identify and mitigate hallucinations by the VLMs.
“In the long term, generative AI models could act as agents and make use of the right tools to solve much more complicated problems. But what does it mean to have the right tools, and how do we incorporate those tools? There is still a long way to go, but by bringing visual-based planning into the picture, this work is an important piece of the puzzle,” Fan says.
This work was funded, in part, by the MIT-IBM Watson AI Lab.