Why we should all be worried about the AI bubble | Computer Weekly


It’s becoming clearer that we are in a perilous financial situation globally. Fears over an “AI bubble” are being cited by the Bank of England, the International Monetary Fund and the boss of JP Morgan, Jamie Dimon.

If you want a sense of how insane the narrative around AI investments has become, consider this: Thinking Machines Lab, an AI startup, recently raised $2bn in funding at a valuation of $10bn.

The company has zero products, zero customers and zero revenues. The only thing it made public to its investors was the resume of its founder, Mira Murati, formerly chief technology officer at OpenAI. If that’s not hubris meeting market exuberance, what is?

But narrative is crucial here, because it’s what’s driving all this insane investment in the future of AI, or so-called artificial general intelligence (AGI), and it’s important to examine which narrative you believe if you are to protect yourself from what’s to come.

If I were to pick between the views of a politician such as UK prime minister Keir Starmer, and a writer such as Cory Doctorow, I’d put my bet on Doctorow. Contrast these two statements and see which you feel more comfortable with…

Starmer: “Today’s plan mainlines AI into the veins of this enterprising nation”.

Doctorow: “AI is the asbestos we are shovelling into the walls of our society and our descendants will be digging it out for generations”.

Puncture the bubble

Doctorow suggests the AI bubble needs to be punctured as soon as possible to “halt this before it progresses any further and to head off the accumulation of social and economic debt”.

He suggests doing that by taking aim at the basis for the AI bubble – namely, “creating a growth story by claiming that AI can do your job”.


Claims about jobs disappearing to AI have been around since 2019, when Sam Altman, then head of venture capital (VC) firm Y Combinator, spoke about radiology jobs disappearing in the future: “Human radiologists are already much worse than computer radiologists. If I had to pick a human or an AI to read my scan, I’d pick the AI.”

Fast forward six years to 2025 and look how that worked out. According to a recent report by Works in Progress, despite the fact that radiology combines digital images, clear benchmarks and repeatable tasks, demand for human radiologists is at an all-time high.

The report authors’ conclusions drive a horse and cart through the current AI/AGI narrative which, if left unchecked, will cause severe global economic pain: “In many jobs, tasks are diverse, stakes are high, and demand is elastic. When this is the case, we should expect software to initially lead to more human work, not less. The lesson from a decade of radiology models is neither optimism about increased output nor dread about replacement. Models can lift productivity, but their implementation depends on behaviour, institutions and incentives. For now, the paradox has held – the better the machines, the busier radiologists have become.”

Across other sectors too, the mythology around job losses is slowly being interrogated – for example, Yale University Budget Lab found no discernible disruption to labour markets since ChatGPT’s release 33 months ago.

The research goes on to state: “While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labour market as much, or more dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialise”.

Normal technology

In other words, AI is just, well, technology as we have always known it – or, as experts Arvind Narayanan and Sayash Kapoor call it, just “normal technology”.

Importantly, in their paper AI as normal technology – An alternative to the vision of AI as a potential superintelligence, they identify key lessons from past technological revolutions: the slow and uncertain nature of technology adoption and diffusion; continuity between the past and future trajectory of AI in terms of social impact; and the role of institutions in shaping this trajectory. They also “strongly disagree with the characterisation of generative AI adoption as rapid, which reinforces our assumption about the similarity of AI diffusion to past technologies”.

A good example of AI as normal technology without all the hype, hyperbole and billion-dollar burn rate, is the City of Austin, Texas. Here, an on-premise AI system helped the local government process building permits in days instead of months.

According to David Stout, CEO of WebAI, this was done “with no spectacle. No headlines. Just efficiency gains that will outlast the market cycle”. He said: “That’s the point too often missed in the frenzy. Mega-models attract headlines, consume billions in capital, and struggle to demonstrate sustainable economics. Meanwhile, smaller, domain-specific systems are already delivering efficiency gains, cost savings and productivity improvements. The smart play isn’t to abandon AI, but to pivot towards models and deployments that will endure”.

Technology like we have always known it to be – not the insane fantasy of “superintelligence” that is powering this dangerous bubble.

The question to ask is: given the prediction of at least a 33-month lag before any return on investment, however small, will the markets wait that long for their returns to materialise?

Protracted crisis

A recent report on MarketWatch suggests the AI bubble is now “seventeen times the size of the dot com frenzy and four times the sub-prime bubble”. MarketWatch quotes financial analyst Julien Garran, who previously led UBS’s commodities strategy team, as saying: “AI now accounts for over four times the wealth trapped in the 2008 sub-prime mortgage bubble, which resulted in years of protracted crisis across the globe”.

Warnings from the Bank of England in its semi-annual Financial Policy Committee report are equally stark: “Uncertainty around the global risk environment increases the risk that markets have not fully priced in possible adverse outcomes, and a sudden correction could occur should any of these risks crystallize.”

The bank also warned of “the risk of a sharp market correction for global financial markets amid AI bubble risks and political pressure on the Federal Reserve”.

A sudden correction of this kind would mean the collapse of the AI investment bubble taking trillions in investment with it, impacting us all.

Even more worrying is the issue of debt financing among those competing in the AI race – that is, all the tech bros. It now appears, according to Axios, that these companies are turning to private debt markets and special purpose vehicles for cash, which means this kind of borrowing does not have to show on their balance sheets.

Meta, for example, recently sought $29bn from private capital firms for its AI datacentres. This off-book debt financing should ring more alarm bells that something is terribly wrong with the AI growth narrative.

After all, as pointed out by the Axios analysts, “If hugely profitable tech companies need to mask their borrowings to fund AI spending, it signals they’re not confident that they’ll soon get the returns needed to justify such investments. That suggests the very spending powering today’s earnings boom can’t last forever.”

Unit economics

To go back to Cory Doctorow’s argument, we are not in the early days of the web, or Amazon, or other dot com companies that lost money before becoming profitable: “Those were all propositions with excellent unit economics. They got cheaper with every successive technological generation and the more customers they added, the more profitable they became”.

AI companies do not have excellent unit economics – in fact they have the opposite, according to Doctorow: “Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money”.


And if that’s not sobering enough for the VC and private equity firms, then the circular investing going on between these tech firms should be a huge concern.

Microsoft is investing $10bn in OpenAI in the form of free access to its servers. OpenAI reports this as an “investment”, then redeems those compute credits at Microsoft datacentres, which Microsoft books as $10bn in revenue.

Bain & Co says the only way to make today’s AI investments profitable “is for the sector to bring in $2tn by 2030,” which, according to the Wall Street Journal, is more than the revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta – combined.

A closer look at US economic growth gives even more cause for concern.

According to Harvard economist Jason Furman’s analysis, GDP growth in the first half of 2025 was driven almost entirely by investment in information processing equipment and software. This spending was largely tied to the rapid expansion of AI infrastructure and datacentres.

While these tech sectors only made up 4% of total GDP, they contributed a staggering 92% of growth. Absent this investment, Furman estimates US GDP growth would have hovered around 0.1% on an annualised basis – barely above zero.
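Furman’s decomposition is simple enough to check on the back of an envelope. The sketch below assumes a headline annualised growth rate of about 1.2% – an illustrative figure, not a number stated here – and applies the 92% contribution share to recover the roughly 0.1% ex-tech growth estimate:

```python
# Back-of-envelope version of Furman's GDP decomposition.
# The 92% contribution share is from the article; the 1.2% headline
# growth figure is an illustrative assumption, not a reported number.

headline_growth_pct = 1.2        # assumed annualised H1 2025 GDP growth
tech_share_of_growth = 0.92      # share contributed by IT equipment/software

tech_contribution_pp = headline_growth_pct * tech_share_of_growth
growth_ex_tech_pct = headline_growth_pct - tech_contribution_pp

print(f"Tech investment contributed {tech_contribution_pp:.2f} percentage points")
print(f"Growth excluding that investment: {growth_ex_tech_pct:.2f}%")
```

Under that assumption, stripping out the tech contribution leaves annualised growth of about 0.1% – consistent with Furman’s “barely above zero” estimate.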

There is a lot riding on a technology that’s supposed to be godlike and all powerful but which, according to MIT Institute professor Daron Acemoglu, is far less likely to achieve the insane hyperbolic claims being made by the tech bros in an effort to win an unwinnable race.

Acemoglu estimates the 10-year effect of AI in the US will be that only “about 5% of tasks will be able to be profitably performed by AI within that timeframe,” with the GDP boost likely to be closer to 1% over that timespan. If that’s not a recipe for stock market collapse, what is?

Emperor’s new clothes

Going back to the AI booster narrative and how it’s driving things, Doctorow is again incisive: “The most important thing about AI isn’t its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become super intelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer”.

I’m not an economist, so I did what we are all supposed to do now for our enlightenment. I gave the machines built by the tech bros all the same prompt: “What fable best encapsulates the current AI bubble?”

Gemini, Perplexity and ChatGPT were all in agreement with nearly the same explanation of why they all picked the same story: “The emperor’s new clothes remains the best classic fable to explain the AI bubble, as it encapsulates the collective willingness to believe in – and profit from – an imagined reality, until facts and external shocks eventually break the spell.”




Framework Has a Better, More Take-Apart-Able Laptop



Framework, the company that makes laptops designed for optimal repairability, announced a new version of its main product, a 13-inch screen laptop. It’s called the Framework Laptop 13 Pro, and it has far better battery life, a touchscreen and a haptic touchpad, and is fitted with Intel processors.

At an event in San Francisco today, Framework CEO Nirav Patel showed off the company’s new tech, opening with a joke about making Framework AI—something the company is very much not doing. Framework’s whole thing, after all, is aiming to give users control over the physical tech they use.

“That industry is fighting for you to own nothing, and they own everything,” Patel said about the AI industry. “We’re fighting for a future where you can own everything and be free.”

Framework used the event to detail other updates coming to its 16-inch laptop. It also showed off previews of an official developer kit and a wireless keyboard for controlling your rig from the couch.

Framework 13 Pro

The Framework Laptop 13 Pro. Courtesy of Framework

As the name implies, the 13 Pro is a step up from the company’s last version, the Framework 13. It’s also pricier, starting at $1,199 for a DIY Edition that requires assembling the computer yourself. Pre-built units start at $1,499 but can be upgraded with more features. Framework says it will start shipping the 13 Pro in June.

Framework’s signature move for its products is the ability to take the thing apart. The 13 Pro is made with that ethos in mind, so its parts can be easily swapped out, upgraded, or replaced. Four Thunderbolt 4 interfaces let you pick which ports (USB-C, HDMI, etc.) you want and then choose where to place them. Framework says it planned the laptop with cross-generation compatibility in mind, so current Framework Laptop 13 owners will be able to take new 13 Pro parts like the mainboard, display, and battery, and put them into their existing machine.

The big changes in the guts of the 13 Pro come from Framework’s shift away from using an AMD processor to Intel’s Core Ultra Series 3 processors, which Framework described in its press release as “just insanely efficient.” That efficiency, along with a bigger battery, translates to more than 20 hours of battery life while streaming 4K Netflix videos – at least, that’s the claim. That’s almost 12 hours longer than the Framework 13.





OpenAI Beefs Up ChatGPT’s Image Generation Model



OpenAI launched a new image generation AI model on Tuesday, dubbed ChatGPT Images 2.0. This model can generate more than one image from a single prompt, like an entire study booklet, as well as output text, including in non-English languages, like Chinese and Hindi. This release is available globally for ChatGPT and Codex users, with a more powerful version available for paying subscribers.

When any major AI company releases a new image model, it can revive interest and boost usage, especially if social media users adopt a meme-able trend, transforming images of themselves. Last year, Google’s launch of the Nano Banana model was a major moment for the company, especially when users started posting hyperrealistic figurines of themselves online. Earlier this year, ChatGPT Images made waves on social media as users shared AI-generated caricatures.

What’s Different?

Since the new model can tap into ChatGPT’s “reasoning” capabilities, Images 2.0 can search the internet for recent information and generate more than one image at a time. In essence, the bot can use additional steps to output more thorough generations from a single prompt. Images 2.0 also has a more recent knowledge cutoff date: December 2025.

This also means that outputs from the new model are more granular. For example, I generated an infographic with San Francisco’s weather forecast for the next day, as well as activities worth doing. The image ChatGPT generated included accurate weather details for the rainy day, along with accurate-looking drawings of the Ferry Building, Castro Theater, Painted Ladies houses, and Transamerica Pyramid.

Additionally, Images 2.0 is more customizable for users who want unique aspect ratios for image outputs. The new model can generate images ranging from 3:1 wide to 1:3 tall, and users can adjust the image’s size as part of their prompt to the AI tool.

First Impressions

After a few hours of generating images with the new model, I was generally impressed with the text rendering capabilities, in English at least. Not that long ago, image outputs featuring text, from any of the major models, often included numerous malformed characters or words with errant extra letters. ChatGPT struggled to label images accurately two years prior, so the cleaner, more complex outputs from Images 2.0 are a sign of continued improvement. Google has also focused on improving image outputs featuring text in its recent iterations of Nano Banana.





TAG Heuer Has Dropped New Polylight-Powered F1s



No doubt looking to find some breathing space after the hubbub of Watches and Wonders last week, TAG Heuer has dropped an update to its 2025 revamped collection of the brand’s iconic plastic-cased 1980s watch, the “Formula 1.”

The five new pieces are called the “pastel collection” by TAG, and all are built on the same solar-powered Formula 1 Solargraph 38 mm that launched in March last year. Two models feature a sandblasted stainless steel case, while the remaining three have cases made from TAG’s proprietary bio-polyamide plastic, Polylight.

It’s these Polylight versions that, for WIRED, are the stars of the new mini collection. Coming in pastel blue, beige, and pink, and sporting case-matching rubber straps and bidirectional-rotating Polylight bezels, they reference classic F1 designs that made the line iconic in the first place.

The new Polylight beige. Courtesy of TAG Heuer

The “pastel green” steel F1 Solargraphs. Courtesy of TAG Heuer

The stainless steel models have a 3-link sandblasted steel bracelet and either a “pastel green” or “lavender blue” dial with matching Polylight bezels. The dials on both watches also see eight diamonds replace the circular hour markers. TAG says these models add “a touch of refinement for those seeking sophistication,” but considering these “luxury” F1s will retail at $2,800, as opposed to the already punchy $1,950 full Polylight versions, our pick is most definitely the plastic pieces.

Not only do these blue, beige, and pink versions pleasingly hark back to vintage F1 designs—though now 38 mm in size instead of the original 35 mm—but also, just like all F1 Solargraphs, they come equipped with screw-down crowns and casebacks, making for 100 meters of water resistance and ensuring these will serve well as dive and sports watches. My recommendation? Go for the pink – it looks superb on the wrist. The beige is a very close second.

Pretty in pink: The new Polylight pink F1 is limited to 1,110 pieces for the 110th anniversary of the Indy 500. Photograph: Jeremy White


