Tech
The Best Lube Is the One You Have Handy. The Second Best Is One of These
Other Good Lubes
Over the years, we’ve tested dozens of different lubes, and some of them are pretty good if not exactly the best in any particular category. For those, we have this section.
LubeLife Water-Based Lubricant for $8: Not only does LubeLife make a stellar silicone lube, but their water-based lubes are great too. At the moment, I’m really enjoying their most recent water-based lube—they have a long and impressive line of these types of lubes—that’s surprisingly long-lasting for something that’s water-based. It’s also super smooth and feels 100 percent natural, and it never gets that awful sticky or tacky texture that some water-based lubes develop over time. Upon tasting it, I noticed a very slight sweetness. While I haven’t used this lube during oral sex, I can definitely see it being a major asset in my performance.
Playground Free Love Lube for $18: If you’re susceptible to UTIs, bacterial vaginosis (BV), or similar infections, then this is the lube for you, as it’s been scientifically proven to both reduce and prevent such vaginal issues. Free Love is also free of glycerin and fragrance, both of which can lead to yeast infections and general irritations. Although Free Love is extremely smooth and makes for a great complement when trying to avoid friction, the biggest selling point is that it will protect you from infections that some other lubes just can’t.
Dame Arousal Serum for $30: I’m not a huge fan of warming or tingling lubes and have yet to try one that makes me a true believer. But Dame’s Arousal Serum comes close. This is a warming, tingling, water-based lube that uses peppermint oil, cinnamon leaf oil, and ginger oil to provide some extra sensation during sex. If you have sensitive skin, I’d leave these products alone, but if you don’t and want to try a stimulating lube, this is the one I’d recommend. Try it on a non-genital area first to ensure you know how your skin will react.
Maude Shine Water-Based Lube for $25: This used to be our top pick. It offers a silky-smooth texture, though it’s on the thicker side for a water-based lube. Thicker water-based lubes typically last longer between applications. Using the thumb test, this lube gives you a slick but smooth cushion between your fingertips, which is a good indicator that it’s going to keep things nice and slick.
Tech
Here’s What You Should Know About Launching an AI Startup
Julie Bornstein thought it would be a cinch to implement her idea for an AI startup. Her résumé in digital commerce is impeccable: VP of ecommerce at Nordstrom, COO of the startup Stitch Fix, and founder of a personalized shopping platform acquired by Pinterest. Fashion has been her obsession since she was a Syracuse high schooler inhaling spreads in Seventeen and hanging out in local malls. So she felt well-positioned to create a company for customers to discover the perfect garments using AI.
The reality was much harder than she expected. I had breakfast recently with Bornstein and her CTO, Maria Belousova, to learn about her startup, Daydream, funded with $50 million from VCs like Google Ventures. The conversation took an unexpected turn as the women schooled me on the surprising difficulty of translating the magic of AI systems into something people actually find useful.
Her story helps explain something. My first newsletter of 2025 announced that it would be The Year of the AI App. Though there are indeed many such apps, they haven’t transformed the world as I anticipated. Ever since ChatGPT launched in late 2022, people have been blown away by the tricks performed by AI, but study after study has shown that the technology has not yet delivered a significant boost in productivity. (One exception: coding.) A study published in August found that 19 out of 20 AI enterprise pilot projects delivered no measurable value. I do think that productivity boost is on the horizon, but it’s taking longer than people expected. Listening to the stories of startups like Daydream that are pushing to break through gives some hope that persistence and patience might indeed make those breakthroughs happen.
Fashionista Fail
Bornstein’s original pitch to VCs seemed obvious: Use AI to solve tricky fashion problems by matching customers with the perfect garments, which they’d be delighted to pay for. (Daydream would take a cut.) You’d think the setup would be simple—just connect to an API for a model like ChatGPT and you’re good to go, right? Um, no. Signing up over 265 partners, with access to more than 2 million products from boutique shops to retail giants, was the easy part. It turns out that fulfilling even a simple request like “I need a dress for a wedding in Paris” is incredibly complex. Are you the bride, the mother-in-law, or a guest? What season is it? How formal a wedding? What statement do you want to make? Even when those questions are resolved, different AI models have different views on such things. “What we found was, because of the lack of consistency and reliability of the model—and the hallucinations—sometimes the model would drop one or two elements of the queries,” says Bornstein. A user in Daydream’s long-running beta test would say something like, “I’m a rectangle, but I need a dress to make me look like an hourglass.” The model would respond by showing dresses with geometric patterns.
Ultimately, Bornstein understood that she had to do two things: postpone the app’s planned fall 2024 launch (though it’s now available, Daydream is still technically in beta until sometime in 2026) and upgrade her technical team. In December 2024 she hired Belousova, the former CTO of Grubhub, who in turn brought in a team of top engineers. Daydream’s secret weapon in the fierce talent war is the chance to work on a fascinating problem. “Fashion is such a juicy space because it has taste and personalization and visual data,” says Belousova. “It’s an interesting problem that hasn’t been solved.”
What’s more, Daydream has to solve this problem twice—first by interpreting what the customer says and then by matching their sometimes quirky criteria with the wares on the catalog side. With inputs like I need a revenge dress for a bat mitzvah where my ex is attending with his new wife, that understanding is critical. “We have this notion at Daydream of shopper vocabulary and a merchant vocabulary, right?” says Bornstein. “Merchants speak in categories and attributes, and shoppers say things like, ‘I’m going to this event, it’s going to be on the rooftop, and I’m going to be with my boyfriend.’ How do you actually merge these two vocabularies into something at run time? And sometimes it takes several iterations in a conversation.” Daydream learned that language isn’t enough. “We’re using visual models, so we actually understand the products in a much more nuanced way,” she says. A customer might share a specific color or show a necklace that they’ll be wearing.
Bornstein says Daydream’s subsequent overhaul has produced better results. (Though when I tried it out, a request for black tuxedo pants showed me beige athletic-fit trousers in addition to what I asked for. Hey, it’s a beta.) “We ended up deciding to move from a single call to an ensemble of many models,” says Bornstein. “Each one makes a specialized call. We have one for color, one for fabric, one for season, one for location.” For instance, Daydream has found that for its purposes, OpenAI models are really good at understanding the world from the clothing point of view. Google’s Gemini is less so, but it is fast and precise.
Tech
MIT researchers “speak objects into existence” using AI and robotics
Generative AI and robotics are moving us ever closer to the day when we can ask for an object and have it created within a few minutes. In fact, MIT researchers have developed a speech-to-reality system, an AI-driven workflow that allows them to provide input to a robotic arm and “speak objects into existence,” creating things like furniture in as little as five minutes.
With the speech-to-reality system, a robotic arm mounted on a table is able to receive spoken input from a human, such as “I want a simple stool,” and then construct the objects out of modular components. To date, the researchers have used the system to create stools, shelves, chairs, a small table, and even decorative items such as a dog statue.
“We’re connecting natural language processing, 3D generative AI, and robotic assembly,” says Alexander Htet Kyaw, an MIT graduate student and Morningside Academy for Design (MAD) fellow. “These are rapidly advancing areas of research that haven’t been brought together before in a way that you can actually make physical objects just from a simple speech prompt.”
Speech to Reality: On-Demand Production using 3D Generative AI, and Discrete Robotic Assembly
The idea started when Kyaw — a graduate student in the departments of Architecture and Electrical Engineering and Computer Science — took Professor Neil Gershenfeld’s course, “How to Make Almost Anything.” In that class, he built the speech-to-reality system. He continued working on the project at the MIT Center for Bits and Atoms (CBA), directed by Gershenfeld, collaborating with graduate students Se Hwan Jeon of the Department of Mechanical Engineering and Miana Smith of CBA.
The speech-to-reality system begins with speech recognition that processes the user’s request using a large language model, followed by 3D generative AI that creates a digital mesh representation of the object, and a voxelization algorithm that breaks down the 3D mesh into assembly components.
After that, geometric processing modifies the AI-generated assembly to account for fabrication and physical constraints associated with the real world, such as the number of components, overhangs, and connectivity of the geometry. This is followed by creation of a feasible assembly sequence and automated path planning for the robotic arm to assemble physical objects from user prompts.
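The voxelization and assembly-sequencing steps can be illustrated with a toy example. This is not MIT's pipeline — the "stool" below is a hand-coded stand-in for the mesh a 3D generative model would produce — but it shows the shape of the problem: sample the object onto a grid of unit cubes, then order the cubes so lower layers are placed before anything above them.

```python
# Illustrative sketch: voxelize a target shape into unit cubes, then
# produce a bottom-up assembly order so every layer is in place before
# the robot arm builds on top of it.

def voxelize(shape_fn, size):
    """Sample a 3D predicate over an integer grid; return occupied voxels."""
    return [(x, y, z)
            for z in range(size)
            for y in range(size)
            for x in range(size)
            if shape_fn(x, y, z)]

def assembly_sequence(voxels):
    """Order cubes layer by layer (lowest z first), then row by row,
    so the arm never has to reach under already-placed geometry."""
    return sorted(voxels, key=lambda v: (v[2], v[1], v[0]))

# Toy "stool": four legs at z = 0..1, plus a full 3x3 seat slab at z = 2.
def stool(x, y, z):
    if z == 2:
        return True                                      # seat slab
    return (x, y) in {(0, 0), (0, 2), (2, 0), (2, 2)}    # legs

voxels = voxelize(stool, 3)
plan = assembly_sequence(voxels)
print(len(plan))          # 17 cubes: 8 leg cubes + 9 seat cubes
print(plan[0], plan[-1])  # (0, 0, 0) (2, 2, 2)
```

The real system additionally has to respect overhangs, connectivity, and arm reachability, which is what the geometric-processing and path-planning stages described above handle.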
By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. And, unlike 3D printing, which can take hours or days, this system builds within minutes.
“This project is an interface between humans, AI, and robots to co-create the world around us,” Kyaw says. “Imagine a scenario where you say ‘I want a chair,’ and within five minutes a physical chair materializes in front of you.”
The team has immediate plans to improve the weight-bearing capability of the furniture by changing the means of connecting the cubes from magnets to more robust connections.
“We’ve also developed pipelines for converting voxel structures into feasible assembly sequences for small, distributed mobile robots, which could help translate this work to structures at any size scale,” Smith says.
Using modular components is meant to eliminate the waste that goes into making physical objects: pieces can be disassembled and then reassembled into something different, for instance turning a sofa into a bed when you no longer need the sofa.
Because Kyaw also has experience using gesture recognition and augmented reality to interact with robots in the fabrication process, he is currently working on incorporating both speech and gestural control into the speech-to-reality system.
Leaning into his memories of the replicator in the “Star Trek” franchise and the robots in the animated film “Big Hero 6,” Kyaw explains his vision.
“I want to increase access for people to make physical objects in a fast, accessible, and sustainable manner,” he says. “I’m working toward a future where the very essence of matter is truly in your control. One where reality can be generated on demand.”
The team presented their paper “Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly” at the Association for Computing Machinery (ACM) Symposium on Computational Fabrication (SCF ’25), held at MIT on Nov. 21.
Tech
Cyber teams on alert as React2Shell exploitation spreads | Computer Weekly
A remote code execution (RCE) vulnerability in the React JavaScript library, which earlier today caused disruption across the internet as Cloudflare pushed mitigations live on its network, is now being exploited by multiple threat actors at scale, according to reports.
Maintained by Meta, React is an open source resource designed to enable developers to build user interfaces (UIs) for both native and web applications.
The vulnerability in question, assigned CVE-2025-55182 and dubbed React2Shell by the cyber community, is a critically-scored pre-authentication RCE flaw in versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 of React Server Components that exploits a flaw in how they decode payloads sent to React Function Endpoints.
This means that by crafting a malicious HTTP request to a Server Function endpoint, a threat actor could gain the ability to run arbitrary code on the target server.
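For defenders, the first step is simply knowing whether a deployment pins one of the affected versions. The sketch below checks an npm lockfile against the versions the advisory lists; it is illustrative only — a real audit should use `npm audit` or dedicated SCA tooling, and the sample lockfile here is made up.

```python
# Defensive sketch: flag the React versions reported as vulnerable to
# CVE-2025-55182 (19.0.0, 19.1.0, 19.1.1, 19.2.0) by scanning a
# project's package-lock.json.

import json

VULNERABLE = {"19.0.0", "19.1.0", "19.1.1", "19.2.0"}

def vulnerable_react_versions(lockfile_text: str) -> list[str]:
    lock = json.loads(lockfile_text)
    hits = []
    # Modern npm lockfiles keep a flat "packages" map keyed by the
    # install path of each dependency (including nested copies).
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/react") and meta.get("version") in VULNERABLE:
            hits.append(meta["version"])
    return hits

# Hypothetical lockfile fragment for demonstration.
sample = json.dumps({
    "packages": {
        "": {"name": "app"},
        "node_modules/react": {"version": "19.1.1"},
    }
})
print(vulnerable_react_versions(sample))  # ['19.1.1']
```

Version pinning alone is not proof of safety or compromise — exploitation depends on exposing React Server Function endpoints — but an inventory like this tells you where to patch first.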
The flaw was added to the catalogue of the US Cybersecurity and Infrastructure Security Agency (CISA) on Friday 5 December, and according to Amazon Web Services (AWS) CISO and vice president of security engineering C.J. Moses, the chief culprits behind the rapid exploitation are thought to be China-nexus threat actors.
Moses cautioned that China’s habit of running shared, large-scale anonymisation infrastructure for multiple state-backed threat actors makes definitive attribution challenging. Nonetheless, following disclosure on Wednesday 3 December, groups tracked as Earth Lamia and Jackpot Panda were observed taking advantage of React2Shell.
“China continues to be the most prolific source of state-sponsored cyber threat activity, with threat actors routinely operationalising public exploits within hours or days of disclosure,” he wrote.
“Through monitoring in our AWS MadPot honeypot infrastructure, Amazon threat intelligence teams have identified both known groups and previously untracked threat clusters attempting to exploit CVE-2025-55182.”
Earth Lamia is well-known for exploiting web application vulnerabilities against organisations primarily located in Latin America, the Middle East, and Southeast Asia, with a particular focus on educational institutions, financial services organisations, government bodies, IT companies, logistics firms, and retailers.
Jackpot Panda, according to AWS, targets its activity at entities in East and Southeast Asia, with its operations aligning to China’s goals relating to corruption and domestic security.
Massive attack
With reports suggesting that there may be over 950,000 servers running vulnerable frameworks such as React and Next.js, Radware threat researchers warned of a massive potential attack surface.
React and Next.js are both well-used thanks to their efficiency and flexibility, while robust ecosystems make them a default choice for many developers – and as such they are found under the bonnet everywhere, from mobile apps and consumer-facing websites to enterprise-grade platforms, said Radware.
“This widespread reliance means a single critical flaw can have cascading consequences for a significant portion of modern web infrastructure,” the Radware team said. “A substantial number of applications across public and private clouds are immediately exploitable, necessitating urgent and widespread action.”
Michael Bell, founder and CEO of Suzu Labs, a penetration testing and AI security specialist, said that hours from disclosure to active exploitation by nation-state actors was the new normal, and matters would likely get worse.
“China-nexus groups have industrialised their vulnerability response: they monitor disclosures, grab public PoCs – even broken ones – and spray them at scale before most organisations have finished reading the advisory,” he said.
“AWS’s report showing attackers debugging exploits in real-time against honeypots demonstrates this isn’t automated scanning; it’s hands-on-keyboard operators racing to establish persistence before patches roll out.
“With AI tools increasingly capable of parsing vulnerability disclosures and generating exploit code, expect the window between disclosure and weaponization to shrink from hours to minutes,” said Bell.
He added that the earlier Cloudflare outage in service of an emergency patch “tells you everything about the severity calculus here”.
