
The Best Lube Is the One You Have Handy. The Second Best Is One of These



Other Good Lubes


Over the years, we’ve tested dozens of different lubes, and some of them are pretty good if not exactly the best in any particular category. For those, we have this section.


LubeLife Water-Based Lubricant for $8: Not only does LubeLife make a stellar silicone lube, but its water-based lubes are great too. At the moment, I’m really enjoying the most recent addition to its long and impressive water-based line, which is surprisingly long-lasting for a water-based formula. It’s also super smooth, feels completely natural, and never develops the awful sticky, tacky texture that some water-based lubes do over time. It even has a very slight sweetness to it. While I haven’t used this lube during oral sex, I can see it being a real asset there.

Playground Free Love Lube for $18: If you’re susceptible to UTIs, bacterial vaginosis (BV), or similar infections, this is the lube for you: it has been clinically shown to help prevent these vaginal issues. Free Love is also free of glycerin and fragrance, both of which can cause yeast infections and general irritation. It’s extremely smooth and does a great job of reducing friction, but its biggest selling point is protection from infections that some other lubes just can’t offer.

Dame Arousal Serum for $30: I’m not a huge fan of warming or tingling lubes and have yet to try one that makes me a true believer. But Dame’s Arousal Serum comes close. This is a warming, tingling, water-based lube that uses peppermint oil, cinnamon leaf oil, and ginger oil to provide some extra sensation during sex. If you have sensitive skin, I’d leave these products alone, but if you don’t and want to try a stimulating lube, this is the one I’d recommend. Try it on a non-genital area first to ensure you know how your skin will react.

Maude Shine Water-Based Lube for $25: This used to be our top pick. It offers a silky-smooth texture, though it’s on the thicker side for a water-based lube; thicker water-based lubes typically last longer between applications. In the thumb test, it leaves a slick but smooth cushion between your fingertips, a good indicator that it will keep things nice and slick.







Huawei: agent-oriented mobile networks to define Agent Verse | Computer Weekly



Two years after proposing the transition from the mobile internet era to the mobile artificial intelligence (AI) era, a shift that has driven the rapid adoption of agents in B2B applications, with 30 million agents deployed over the past 12 months, Huawei has introduced the Agent Verse, predicting a 10,000-fold increase in agent-handled work in networks by 2030.

The proposal of a new paradigm for communications came on the back of the comms tech giant’s Agentic Core Summit at MWC 2026, which centred on the strategic theme of building an agentic network with device-network-service synergy.

At the summit, Huawei revealed that it had worked with global mobile trade association the GSMA and a range of operators and industry organisations across the Middle East, Asia Pacific, Europe, Latin America and other regions to explore AI-driven advancements for the core network. Together, they unanimously agreed that the 5G core network has entered into “a new phase” called the Agentic Core.

Huawei’s Agentic Core system integrates AI into mobile internet, voice, operations and maintenance (O&M) and telco cloud infrastructure to allow networks to evolve and main service offerings to be reshaped. Huawei sees AI as extending a core network with three “transformative” abilities: real-time experience awareness; global experience evaluation and resource coordination; and intelligent interaction and execution.

This architecture is designed to give rise to a “network brain” that drives a closed-loop experience monetisation model where experiences are definable and assessable, service offerings are marketable, quality is guaranteed and exclusive user identities are perceptible.

The intelligent O&M part of the solution is built to transform network operations into an automated and intelligent ecosystem, driving the core network toward Autonomous Network (AN) L4 Phase 2. Phase 1 focuses on the intelligent assistant, NOEMate, which delivers automated closed-loop management for both faults and changes. Building on this, Phase 2 introduces hierarchical autonomy and builds an unmanned factory, achieving full single-domain autonomy within the core network.

Looking toward the 6G era, Huawei Agentic Core also supports ubiquitous AI agent access, building an agent-based communication network that spans devices and ecosystems. The Cloud Core Network is designed as an evolving communication infrastructure that will act as an interchange for AI agent networks.

And these, said Eric Zhao, vice-president and CMO of Huawei’s wireless solution, would operate in the Agent Verse: “Mobile AI is sparking a paradigm shift across the communications industry. With a trillion-scale surge in Agent Verse connections on the horizon, mobile networks need an urgent upgrade.

“To unlock the full potential of 5G-Advanced, the industry should accelerate end-to-end upgrades and innovation, building multidimensional network capabilities that can meet the demands ahead.”

At MWC, Huawei argued that agents were reshaping mobile network demands – for example, by evolving into engines of industrial automation and broad societal change. It offered the example of productivity agents making fully automated manufacturing possible through autonomous learning and the precise coordination of thousands of robots. It calculated that by 2030, the global market is expected to reach trillions of intelligent connections worldwide.

Zhao added: “AI’s development has gone wide and far beyond our imagination, and it is now becoming clear that the application of AI will be [through] agents. We believe that in the future, every industry, terminal, organisation and individual will be served by agents – and this is why we propose the Agent Verse. In the last year alone, 30 million agents were deployed across different industries, significantly improving the productivity of verticals; the adoption pace of agents is incredibly fast.

“It is estimated that by 2030, the amount of work handled by agents will grow by 10,000 times. Agent adoption means the introduction of changes in communication methods and communication objects. That means, in the future, agents will introduce new interactions: agents will interact with people, and agents will interact with agents. This is why we think that the time has changed and the wireless industry needs to be prepared to welcome new services.”





Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World



Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.

LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said in an interview with WIRED.

The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel.

AMI (pronounced like the French word for friend) aims to build “a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,” the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025.

LeCun’s startup represents a bet against many of the world’s biggest AI labs like OpenAI, Anthropic, and even his former workplace, Meta, which believe that scaling up LLMs will eventually deliver AI systems with human-level intelligence or even superintelligence. LLMs have powered viral products such as ChatGPT and Claude Code, but LeCun has been one of the AI industry’s most prominent researchers speaking out about the limitations of these AI models. LeCun is well known for being outspoken, but as a pioneer of modern AI who won a Turing Award in 2018, his skepticism carries weight.

LeCun says AMI aims to work with companies in manufacturing, biomedicine, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help it optimize for efficiency, minimize emissions, or ensure reliability.

AMI was cofounded by LeCun and several leaders he worked with at Meta, including the company’s former director of research science, Michael Rabbat; former vice president of Europe, Laurent Solly; and former senior director of AI research, Pascale Fung. Other cofounders include Alexandre LeBrun, former CEO of the AI health care startup Nabla, who will serve as AMI’s CEO, and Saining Xie, a former Google DeepMind researcher who will be the startup’s chief science officer.

The Case for World Models

LeCun does not dismiss the overall utility of LLMs. Rather, in his view, these AI models are simply the tech industry’s latest promising trend, and their success has created a “kind of delusion” among the people who build them. “It’s true that [LLMs] are becoming really good at generating code, and it’s true that they are probably going to become even more useful in a wide area of applications where code generation can help,” says LeCun. “That’s a lot of applications, but it’s not going to lead to human-level intelligence at all.”

LeCun has been working on world models for years inside of Meta, where he founded the company’s Fundamental AI Research lab, FAIR. But he’s now convinced his research is best done outside the social media giant. He says it’s become clear to him that the strongest applications of world models will be selling them to other enterprises, which doesn’t fit neatly into Meta’s core consumer business.

As AI world models like Meta’s Joint-Embedding Predictive Architecture (JEPA) became more sophisticated, “there was a reorientation of Meta’s strategy where it had to basically catch up with the industry on LLMs and kind of do the same thing that other LLM companies are doing, which is not my interest,” says LeCun. “So sometime in November, I went to see Mark Zuckerberg and told him. He’s always been very supportive of [world model research], but I told him I can do this faster, cheaper, and better outside of Meta. I can share the cost of development with other companies … His answer was, OK, we can work together.”





Nvidia Is Planning to Launch an Open-Source AI Agent Platform



Nvidia is planning to launch an open-source platform for AI agents, people familiar with the company’s plans tell WIRED.

The chipmaker has been pitching the product, referred to as NemoClaw, to enterprise software companies. The platform will allow these companies to dispatch AI agents to perform tasks for their own workforces. Companies will be able to access the platform regardless of whether their products run on Nvidia’s chips, sources say.

The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It’s unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it’s likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform.

Nvidia did not respond to a request for comment. Representatives from Cisco, Google, Adobe, and CrowdStrike also did not respond to requests for comment. Salesforce did not provide a statement prior to publication.

Nvidia’s interest in agents comes as people are embracing “claws,” or open-source AI tools that run locally on a user’s machine and perform sequential tasks. Claws are often described as self-learning, in that they’re supposed to automatically improve over time. Earlier this year, an AI agent known as OpenClaw—which was first called Clawdbot, then Moltbot—captivated Silicon Valley due to its ability to run autonomously on personal computers and complete work tasks for users. OpenAI ended up acquiring the project and hiring the creator behind it.

OpenAI and Anthropic have made significant improvements in model reliability in recent years, but their chatbots still require hand-holding. Purpose-built AI agents or claws, on the other hand, are designed to execute multiple steps without as much human supervision.

The usage of claws within enterprise environments is controversial. WIRED previously reported that some tech companies, including Meta, have asked employees to refrain from using OpenClaw on their work computers, due to the unpredictability of the agents and potential security risks. Last month a Meta employee who oversees safety and alignment for the company’s AI lab publicly shared a story about an AI agent going rogue on her machine and mass deleting her emails.

For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It’s also another step in the company’s embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia’s software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia’s GPUs and has created a crucial “moat” for the company.

Last month The Wall Street Journal reported that Nvidia also plans to reveal a new chip system for inference computing at its developer conference. The system will incorporate a chip designed by the startup Groq, which Nvidia entered into a multibillion-dollar licensing agreement with late last year.

Paresh Dave and Maxwell Zeff contributed to this report.


