Why fears of a trillion-dollar AI bubble are growing

Credit: Pixabay/CC0 Public Domain

For almost as long as the artificial intelligence boom has been in full swing, there have been warnings of a speculative bubble that could rival the dot-com craze of the late 1990s that ended in a spectacular crash and a wave of bankruptcies.

Tech firms are spending hundreds of billions of dollars on advanced chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines.

The final bill may run into the trillions. The financing is coming from a mix of sources: debt and, lately, some more unconventional arrangements that have raised eyebrows on Wall Street.

Even some of AI’s biggest cheerleaders acknowledge the market is frothy, while still professing their belief in the technology’s long-term potential. AI, they say, is poised to reshape multiple industries, cure diseases and generally accelerate human progress.

Yet never before has so much money been spent so rapidly on a technology that remains somewhat unproven as a profit-making business model. Tech industry executives who privately doubt the most effusive assessments of AI’s revolutionary potential—or at least struggle to see how to monetize it—may feel they have little choice but to keep pace with their rivals’ investments or risk being out-scaled and sidelined in the future AI marketplace.

Sharp falls in global technology stocks in early November underscored investors’ growing unease over the sector’s sky-high valuations, with Wall Street chief executives warning of an overdue market correction.

What are the warning signs for AI?

When Sam Altman, the chief executive of ChatGPT maker OpenAI, announced a $500 billion AI infrastructure plan known as Stargate alongside other executives at the White House in January, the price tag triggered some disbelief. Since then, other tech rivals have ramped up spending, including Meta’s Mark Zuckerberg, who has pledged to invest hundreds of billions of dollars in AI infrastructure. Not to be outdone, Altman has since said he expects OpenAI to spend “trillions” on AI infrastructure.

To finance those projects, OpenAI is entering into new territory. In September, chipmaker Nvidia Corp. announced an agreement to invest up to $100 billion in OpenAI’s data center buildout, a deal that some analysts say raises questions about whether the chipmaker is trying to prop up its customers so that they keep spending on its own products.

The concerns have followed Nvidia, to varying degrees, for much of the boom. The dominant maker of AI accelerator chips has backed dozens of companies in recent years, including AI model makers and cloud computing providers. Some of them then use that capital to buy Nvidia’s expensive semiconductors. The OpenAI deal was far larger in scale.

OpenAI has also indicated it could pursue debt financing, rather than leaning on partners such as Microsoft Corp. and Oracle Corp. The difference is that those companies have rock-solid, established businesses that have been profitable for many years. OpenAI expects to burn through $115 billion of cash through 2029, The Information has reported.

Other large tech companies are also relying increasingly on debt to support their unprecedented spending. Meta, for example, turned to lenders to secure $26 billion in financing for a planned data center complex in Louisiana that it says will eventually approach the size of Manhattan. JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are also leading a loan of more than $22 billion to support Vantage Data Centers’ plan to build a massive data-center campus, Bloomberg News has reported.

So how about the payback?

By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand, Bain & Co. said in a report released in September. Yet their revenue is likely to fall $800 billion short of that mark, Bain predicted.
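A quick sanity check on the Bain figures, using only the two numbers cited above (amounts in billions of dollars; the derived values are arithmetic, not part of Bain's report):

```python
# Bain & Co.'s September projection, as cited above (in $ billions).
revenue_needed_by_2030 = 2_000  # combined annual revenue needed to fund compute
projected_shortfall = 800       # the gap Bain predicts

# What the industry is implicitly projected to earn, and how far that goes.
implied_revenue = revenue_needed_by_2030 - projected_shortfall
print(f"Implied projected revenue: ${implied_revenue:,}B")  # $1,200B
print(f"Share of requirement met: {implied_revenue / revenue_needed_by_2030:.0%}")  # 60%
```

In other words, even on its own projections the industry covers only about 60 percent of the revenue the buildout requires.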

“The numbers that are being thrown around are so extreme that it’s really, really hard to understand them,” said David Einhorn, a prominent hedge fund manager and founder of Greenlight Capital. “I’m sure it’s not zero, but there’s a reasonable chance that a tremendous amount of capital destruction is going to come through this cycle.”

In a sign of the times, there’s also a growing number of less proven firms trying to capitalize on the data center gold rush. Nebius, an Amsterdam-based cloud provider that split off from Russian internet giant Yandex in 2024, recently inked an infrastructure deal with Microsoft worth up to $19.4 billion. And Nscale, a little-known British data center company, is working with Nvidia, OpenAI and Microsoft on build-outs in Europe. Like some other AI infrastructure providers, Nscale previously focused on another frothy sector: cryptocurrency mining.

Are there concerns about the technology itself?

The data center spending spree is overshadowed by persistent skepticism about the payoff from AI technology. In August, investors were rattled after researchers at the Massachusetts Institute of Technology found that 95% of organizations saw zero return on their investment in AI initiatives.

More recently, researchers at Harvard and Stanford offered a possible explanation for why. Employees are using AI to create “workslop,” which the researchers define as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”

The promise of AI has long been that it would help streamline tasks and boost productivity, making it an invaluable asset for workers and one that corporations would pay top dollar for. Instead, the Harvard and Stanford researchers found the prevalence of workslop could cost larger organizations millions of dollars a year in lost productivity.

AI developers have also been confronting a different challenge. OpenAI, Claude chatbot developer Anthropic and others have for years bet on the so-called scaling laws—the idea that more computing power, data and larger models will inevitably pave the way for greater leaps in the power of AI.

Eventually, they say, these advances will lead to artificial general intelligence, a hypothetical form of the technology so sophisticated that it matches or exceeds humans in most tasks.

Over the past year, however, these developers have experienced diminishing returns from their costly efforts to build more advanced AI. Some have also struggled to match their own hype.

After months of touting GPT-5 as a significant leap, OpenAI’s release of its latest AI model in August was met with mixed reviews. In remarks around the launch, Altman conceded that “we’re still missing something quite important” to reach AGI.

Those concerns are compounded by growing competition from China, where companies are flooding the market with competitive, low-cost AI models. While U.S. firms are generally still viewed as ahead in the race, the Chinese alternatives risk undercutting Silicon Valley on price in certain markets, making it harder to recoup the significant investment in AI infrastructure.

There’s also the risk that the AI industry’s vast data center buildout, entailing a huge increase in electricity consumption, will be held back by the limitations of national power networks.

What does the AI industry say in response?

Sam Altman, the face of the current AI boom, has repeatedly acknowledged the risk of a bubble in recent months while maintaining his optimism for the technology. “Are we in a phase where investors as a whole are overexcited about AI? In my opinion, yes,” he said in August. “Is AI the most important thing to happen in a very long time? My opinion is also yes.”

Altman and other tech leaders continue to express confidence in the roadmap toward AGI, with some suggesting it could be closer than skeptics think.

“Developing superintelligence is now in sight,” Zuckerberg wrote in July, referencing an even more powerful form of AI that his company is aiming for. In the near term, some AI developers also say they need to drastically ramp up computing capacity to support the rapid adoption of their services.

Altman, in particular, has stressed repeatedly that OpenAI remains constrained in computing resources as hundreds of millions of people around the world use its services to converse with ChatGPT, write code and generate images and videos.

OpenAI and Anthropic have also released their own research and evaluations that indicate AI systems are having a meaningful impact on work tasks, in contrast to the more damning reports from outside academic institutions. An Anthropic report released in September found that roughly three quarters of companies are using Claude to automate work.

The same month, OpenAI released a new evaluation system called GDPval that measures the performance of AI models across dozens of occupations.

“We found that today’s best frontier models are already approaching the quality of work produced by industry experts,” OpenAI said in a blog post. “Especially on the subset of tasks where models are particularly strong, we expect that giving a task to a model before trying it with a human would save time and money.”

So how much will customers eventually be willing to pay for these services? The hope among developers is that, as AI models improve and field more complex tasks on users’ behalf, they will be able to convince businesses and individuals to spend far more to access the technology.

“I want the door open to everything,” OpenAI Chief Financial Officer Sarah Friar said in late 2024, when asked about a report that the company has discussed a $2,000 monthly subscription for its AI products. “If it’s helping me move about the world with literally a Ph.D.-level assistant for anything that I’m doing, there are certainly cases where that would make all the sense in the world.”

In September, Zuckerberg said an AI bubble is “quite possible,” but stressed that his bigger concern is not spending enough to meet the opportunity. “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” he said in a podcast interview. “But what I’d say is I actually think the risk is higher on the other side.”

What makes a market bubble?

Bubbles are economic cycles defined by a swift increase in market values to levels that aren’t supported by the underlying fundamentals. They’re usually followed by a sharp selloff—the so-called pop.

A bubble often begins when investors get swept up in a speculative frenzy—over a new technology or other market opportunity—and pile in for fear of missing out on further gains. American economist Hyman Minsky identified five stages of a market bubble: displacement, boom, euphoria, profit-taking and panic.

Bubbles are sometimes difficult to spot because market prices can become dislocated from real-world values for many reasons, and a sharp price drop isn’t always inevitable. And because the crash is what completes the cycle, a bubble can be hard to pinpoint until after the fact.

Generally, bubbles pop when investors realize that the lofty expectations they had were too high. This usually follows a period of over-exuberance that tips into mania, when everyone is buying into the trend at the very top.

What comes next is usually a slow, prolonged selloff where company earnings start to suffer, or a singular event that changes the long-term view, sending investors dashing for the exits.

There was some fear that an AI bubble had already popped in late January, when China’s DeepSeek upended the market with the release of a competitive AI model purportedly built at a fraction of the amount that top U.S. developers spend. DeepSeek’s viral success triggered a trillion-dollar selloff of technology shares. Nvidia, a bellwether AI stock, slumped 17% in one day.

The DeepSeek episode underscored the risks of investing heavily in AI. But Silicon Valley remained largely undeterred. In the months that followed, tech companies redoubled their costly AI spending plans, and investors resumed cheering on these bets. Nvidia shares charged back from an April low to fresh records. It was worth more than $4 trillion by the end of September, making it the most valuable company in the world.

So is this 1999 all over again?

As with today’s AI boom, the companies at the center of the dot-com frenzy drew in vast amounts of investor capital, often using questionable metrics such as website traffic rather than their actual ability to turn a profit. There were many flawed business models and exaggerated revenue projections.

Telecommunication companies raced to build fiber-optic networks only to find the demand wasn’t there to pay for them. When it all crashed in 2001, many companies were liquidated, while others were absorbed by healthier rivals at knock-down prices.

Echoes of the dot-com era can be found in AI’s massive infrastructure build-out, sky-high valuations and showy displays of wealth. Venture capital investors have been courting AI startups with private jets, box seats and big checks.

Many AI startups tout their recurring revenue as a key metric for growth, but there are doubts as to how sustainable or predictable those projections are, particularly for younger businesses. Some AI firms are completing multiple mammoth fundraisings in a single year. Not all will necessarily flourish.

“I think there’s a lot of parallels to the internet bubble,” said Bret Taylor, OpenAI’s chairman and the CEO of Sierra, an AI startup valued at $10 billion. Like the dot-com era, a number of high-flying companies will almost certainly go bust. But in Taylor’s telling, there will also be large businesses that emerge and thrive over the long term, just as happened with Amazon.com Inc. and Alphabet Inc.’s Google in the late 90s.

“It is both true that AI will transform the economy, and I think it will, like the internet, create huge amounts of economic value in the future,” Taylor said. “I think we’re also in a bubble, and a lot of people will lose a lot of money.”

Amazon Chairman Jeff Bezos said the spending on AI resembles an “industrial bubble” akin to the biotech bubble of the 1990s, but he still expects it to improve the productivity of “every company in the world.”

There are also some key differences to the dot-com boom that market watchers point out, the first being the broad health and stability of the biggest businesses that are at the forefront of the trend. Most of the “Magnificent Seven” group of U.S. tech companies are long-established giants that make up much of the earnings growth in the S&P 500 Index. These firms have huge revenue streams and are sitting on large stockpiles of cash.

Despite the skepticism, AI adoption has also proceeded at a rapid clip. OpenAI’s ChatGPT has about 700 million weekly users, making it one of the fastest-growing consumer products in history. Top AI developers, including OpenAI and Anthropic, have also seen remarkably strong sales growth. OpenAI previously forecast revenue would more than triple in 2025 to $12.7 billion.

While the company does not expect to be cash-flow positive until near the end of this decade, a recent deal to help employees sell shares gave it an implied valuation of $500 billion—making it the world’s most valuable company never to have turned a profit.

© 2025 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.

Citation:
Why fears of a trillion-dollar AI bubble are growing (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-trillion-dollar-ai.html







This AI Agent Is Designed to Not Go Rogue



AI agents like OpenClaw have recently exploded in popularity precisely because they can take the reins of your digital life. Whether you want a personalized morning news digest, a proxy that can fight with your cable company’s customer service, or a to-do list auditor that will do some tasks for you and prod you to resolve the rest, agentic assistants are built to access your digital accounts and carry out your commands. This is helpful—but has also caused a lot of chaos. The bots are out there mass-deleting emails they’ve been instructed to preserve, writing hit pieces over perceived snubs, and launching phishing attacks against their owners.

Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain designed to add a critical layer of control. Instead of the agent directly interacting with the user’s systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy—you could even think of it as a constitution—that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then run them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.

“Services like OpenClaw are at peak hype right now, but my hope is that there’s an opportunity to say, ‘Well, this is probably not how we want to do it,’” Provos says. “Instead, let’s develop something that still gives you very high utility, but is not going to go into these completely uncharted, sometimes destructive, paths.”

IronCurtain’s ability to take intuitive, straightforward statements and turn them into enforceable, deterministic—or predictable—red lines is vital, Provos says, because LLMs are famously “stochastic” and probabilistic. In other words, they don’t necessarily always generate the same content or give the same information in response to the same prompt. This creates challenges for AI guardrails, because AI systems can evolve over time such that they revise how they interpret a control or constraint mechanism, which can result in rogue activity.

An IronCurtain policy, Provos says, could be as simple as: “The agent may read all my email. It may send email to people in my contacts without asking. For anyone else, ask me first. Never delete anything permanently.”

IronCurtain takes these instructions, turns them into an enforceable policy, and then mediates between the assistant agent in the virtual machine and what’s known as the model context protocol server that gives LLMs access to data and other digital services to carry out tasks. Being able to constrain an agent this way adds an important component of access control that web platforms like email providers don’t currently offer, because they weren’t built for the scenario where a human owner and AI agents are both using one account.

Provos notes that IronCurtain is designed to refine and improve each user’s “constitution” over time as the system encounters edge cases and asks for human input about how to proceed. The system, which is model-independent and can be used with any LLM, is also designed to maintain an audit log of all policy decisions over time.

IronCurtain is a research prototype, not a consumer product, and Provos hopes that people will contribute to the project to explore and help it evolve. Dino Dai Zovi, a well-known cybersecurity researcher who has been experimenting with early versions of IronCurtain, says that the conceptual approach the project takes aligns with his own intuition about how agentic AI needs to be constrained.




OpenAI Announces Major Expansion of London Office



OpenAI has announced plans to turn its London office into its largest research hub outside of the United States.

The company—which established a UK office in 2023—says it will expand its London-based research team, scooping up talent emerging from leading British universities. It has not indicated how many researchers it will hire.

“The UK brings together world-class talent and leading scientific institutions and universities, making it an ideal place to deliver the important research which will ensure our AI is safe, useful, and benefits everyone,” said Mark Chen, chief research officer at OpenAI, in a statement.

The plans bring OpenAI into direct competition for top research talent with Google DeepMind, the AI lab run by British researcher Demis Hassabis, which is headquartered in London. DeepMind has long-running partnerships with Oxford University and the University of Cambridge, where it sponsors professorships, funds research, and works alongside researchers.

At the latest careers fair at Oxford University, the floor was packed with undergraduates looking for technical roles and recruiters hiring for AI-related positions. “The demand and supply is increasing on both sides, even within a year,” says Jonathan Black, director of the careers service at Oxford University. “To have something like this turn up is a really positive sign.”

OpenAI’s expansion in London could have a sort of flywheel effect, whereby the researchers it hires early in their careers go on to start new labs in the UK, says Tom Wilson, partner at venture capital firm Seedcamp. “We’ve seen many examples over the years,” he says. “That’s where these kinds of announcements can have even more impact than the initial hires … the second-order effects can be great.”

OpenAI’s team in London will continue to contribute to products like Codex and GPT-5.2, the company says, but will now “own” certain aspects of model development relating to safety, reliability, and performance evaluation.

In a statement, the UK’s science and technology secretary, Liz Kendall, described the announcement as “a huge vote of confidence in the UK’s world-leading position at the cutting edge of AI research.”

The announcement coincides with a push in the UK to scale the nation’s data center and power infrastructure to meet the voracious demand for compute among AI companies, including OpenAI.




Stay Warm in the Lodge or Half-Pipe with the Best Ski Clothes



Honorable Mentions

During the winter, a whole WIRED crew tests ski clothes almost constantly. Here are a few other items that we like.

Courtesy of REI

Hestra Fall Line 3-Finger Gloves for $190: I’ve long admired Hestra gloves from across the lift line, impressed by the Swedish company’s elegant stitchwork and thoughtful design touches. This was the year I finally got to try a pair for myself, and the Fall Line are exactly what they look like. There are six sizes available so you can get the perfect fit in this glove. The cowhide is buttery smooth and has already broken in a bit with five days’ use. The wrist strap means you never have to fret about dropping your glove from the lift when checking your phone, and they’re very warm without making me sweat. If you do sweat, the lining is removable so you can wash it without damaging the leather. —Martin Cizmar


Courtesy of Crab Grab

Crab Grab Snuggler Mitts for $89: These mini sleeping bags for your fingers are packed with Primaloft insulation and lined with sherpa fleece, making them toasty warm, and a 15K membrane makes them impressively waterproof too. All-season mittens with durable construction for under $100? Yes, please!


Courtesy of Mons Royale

Mons Royale Yotei Merino Classic Long Sleeve for $98: As I type this, I’m nowhere near a mountain, but I’m still wearing the Mons Royale Yotei long sleeve top. It is ridiculously comfortable, made from 190-gsm, 100 percent merino wool, and has a mercifully relaxed cut, so I remain warm but don’t feel like a sausage. On the mountain, however, the merino wool works its magic, wicking away sweat—especially on a hike up to some fresh powder—and keeping me comfortable. Paired with a shell and the Patagonia R1 Thermal Hoodie, I’m warm enough during a bitter arctic blast.


Seniq Powder Puff Down Jacket and Bib

Photograph: Kristin Canning

Seniq Powder Puff Down Jacket for $498 and Bib for $398: Seniq is another all-women’s outdoor brand that launched in 2024. It’s styled a little more Gen Z, leaning into fun color blocking over the monochromatic look. The Seniq Powder Puff Down Jacket has a dry-touch finish. It’s meant for drier days on the mountain, but a PFC-free DWR coating and YKK AquaGuard zippers do provide water resistance. The asymmetric front zipper helps you avoid chin rub when you have the jacket fully zipped. It also features cool asymmetrical quilting lines, side pockets-in-pockets that provide access to your bib (their bibs have a pocket on the front, so you can get in there without unzipping your jacket), an oversized removable hood, a forearm pass pocket, soft and stretchy wrist gaiters, and a large internal pocket that can absolutely handle a sandwich. This jacket was warm, pillowy, and comforting, like a super-soft hug.

The silky shell bibs are slightly barrel cut, which gives them a flattering shape without being fitted. The adjustable racer back-style straps and low back (with a stretchy waist) also provide a nice shape and breathability. There are two pockets on the front chest, pockets on either leg, two-way zip thigh vents, and a butt zipper for bathroom breaks. These fit easily over my boots, and the instep guards were a nice touch. With a durable three-layer membrane and a 20,000-mm waterproof rating, these will hold up against any and all weather the mountain throws at you. When I wore them on a wet snowy day, they beaded and sloughed off moisture well. —Kristin Canning


Mammut Sender In Hooded Jacket

Photograph: Kristin Canning

Mammut Sender In Hooded Jacket for $259: This puffy hoodie is a great mid-layer for under a shell jacket. The insulation is made from recycled rope scraps, and the outer is coated in wind-resistant PFC-free DWR coating. The hem falls at the hips, and the high collar and tight hood keep most of the face covered. I like wearing this piece under shells for snowboarding, but I know it’ll pull double duty as a comfy hiking and camping jacket, too, so it’s a solid multipurpose investment. It’s exceptionally lightweight and warm, though from a volume standpoint, it is on the bulkier side for a mid-layer and isn’t the most packable piece. —Kristin Canning

Helly Hansen Evolved Air Half Zip for $112: This fleece pullover has a waffle-like texture that traps heat and wicks moisture. With a high zippered collar and cinchable hem, you can adjust the fit to make it more air-tight or breathable. This mid-layer felt wonderfully lightweight while still keeping me toasty. It’s not bulky at all, only a little thicker than a base layer, lay comfortably under my jackets, and moved with me on the mountain. —Kristin Canning

Helly Hansen Lifa Base Layer Long-Sleeve Crew for $115 and Pants for $115: These base layers hit the weight sweet spot; they’re not too thick or thin, but just right. They’re slightly looser than other options on this list, so if you prefer something that isn’t so fitted, these are a great pick (but note that they run long too). These combine merino wool with Helly Hansen’s LIFA fibers, which add more moisture-wicking capabilities. They’re soft, lightweight, warm, and don’t hold onto smells. I love the cute designs and how well they regulate my temperature under insulated jackets and pants. The waist digs in a bit but doesn’t roll, and they stay in place and move well. —Kristin Canning

We have a full guide on how to layer, but here are your essentials.

Base layer: A good set of thermals is essential in the fight against cold, especially when you’re working hard. The best fabrics wick away sweat as you heat up, which helps regulate your temperature. Merino wool is the best at this, but also the most expensive. Synthetic fabrics are getting better, though, and please avoid cotton at all costs, as it gets wet and stays that way, making you cold and uncomfortable.

Mid layer: Whether you choose a hooded fleece or puffer-style jacket, this layer does the bulk of the work in cold conditions. Combined with the base layer, it traps warm air in, while also allowing moisture to be expelled. Synthetic insulation such as Primaloft Gold is brilliant and doesn’t lose its properties if it gets wet. Down jackets offer the best warmth-to-weight ratio and pack down smaller, but they should never get wet. A fleece with an insulated vest is a great option if you really feel the cold.

Jacket: While ski jackets with insulation offer bonus warmth in Arctic-like conditions, for most people a waterproof shell will be enough, as it offers protection from both the snow and the wind. A cold wind will chill you to your bones faster than a bit of wet snow. Ideally choose a jacket with a waterproof membrane such as Gore-Tex (make sure it is free from PFAS, or forever chemicals), but also check for taped seams for added waterproofing, plus plenty of pockets for snacks and lift passes, and wrist cuffs and ski skirts to help keep out the snow.

Socks: As with your base layer, socks keep you warm and maintain your temperature when you’re building up a sweat. Natural fabrics work well, but a blend of merino wool with synthetic stretchy fibers is the way to go, as they stay up better and can be used for more than a day. Avoid cotton again, and never wear two pairs, as you’ll almost certainly get colder feet.

Gloves: You’ll be surprised by how wet ski gloves get when it’s snowing, even if you don’t fall very often. As a result, waterproof options work best in most cases, although well-made leather designs can be almost as waterproof as a pair with Gore-Tex. Mittens are generally warmer than gloves, but what you gain in toasty fingers you lose in dexterity. Check out our Best Ski Gloves and Mittens guide for more information.

Waterproofing and breathability ratings: Waterproofing is measured with a hydrostatic head (HH) rating. A 20,000-mm (20K) rating means that if you placed a 1-inch-square, endlessly tall tube on top of the fabric, you could pour in 20,000 millimeters of water before it seeped through. Breathability is rated by how many grams of vapor per square meter can pass through the fabric in 24 hours.
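As a rough guide, those HH numbers map onto conditions like this. The tier boundaries below are common industry conventions rather than any formal standard, so treat them as a sketch:

```python
def waterproof_tier(hh_mm: int) -> str:
    """Map a hydrostatic-head rating (millimeters) to a rough usage tier.

    Thresholds follow common marketing conventions, not a formal standard.
    """
    if hh_mm < 5_000:
        return "light drizzle only"
    if hh_mm < 10_000:
        return "light snow, dry days"
    if hh_mm < 20_000:
        return "all-day resort use"
    return "storm skiing and wet snow"

# The 20K fabrics mentioned in this guide sit in the top tier.
for rating in (5_000, 15_000, 20_000):
    print(f"{rating:>6} mm: {waterproof_tier(rating)}")
```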

I’ve been reviewing winter sports gear for more than 15 years. In that time, I have worn an untold number of jackets, pants, mid-layers, thermals, gloves, and mittens. I called on industry experts and professional skiers, and solicited opinions from fellow winter sport enthusiasts on the WIRED team. While a basic fit check can be done in the office, nothing replaces on-mountain testing in variable conditions. We put in the time on various trips to the French Alps, as well as in resorts in Vermont, Colorado, Arizona, and Oregon.



