Why we should all be worried about the AI bubble | Computer Weekly

It’s becoming clearer that we are in a perilous financial situation globally. Fears over an “AI bubble” are being cited by the Bank of England, the International Monetary Fund and the boss of JP Morgan, Jamie Dimon.

If you want a sense of how insane the narrative is around AI investments, consider this: Thinking Machines Lab, an AI startup, recently raised $2bn funding on a valuation of $10bn.

The company has zero products, zero customers and zero revenues. The only thing it offered investors was the résumé of its founder, Mira Murati, formerly chief technology officer at OpenAI. If that’s not hubris meeting market exuberance, what is?

But narrative is crucial here, because narrative is what’s driving all this insane investment in the future of AI, or so-called artificial general intelligence (AGI). It’s important to examine which narrative you believe if you are to protect yourself from what’s to come.

If I were to pick between the views of a politician such as UK prime minister Keir Starmer, and a writer such as Cory Doctorow, I’d put my bet on Doctorow. Contrast these two statements and see which you feel more comfortable with…

Starmer: “Today’s plan mainlines AI into the veins of this enterprising nation”.

Doctorow: “AI is the asbestos we are shovelling into the walls of our society and our descendants will be digging it out for generations”.

Puncture the bubble

Doctorow suggests the AI bubble needs to be punctured as soon as possible to “halt this before it progresses any further and to head off the accumulation of social and economic debt”.

He suggests doing that by taking aim at the basis for the AI bubble – namely, “creating a growth story by claiming that AI can do your job”.

AI is the asbestos we are shovelling into the walls of our society and our descendants will be digging it out for generations
Cory Doctorow

Claims about jobs disappearing to AI have been around since at least 2019, when Sam Altman, then head of venture capital (VC) fund Y Combinator, spoke of radiology jobs disappearing: “Human radiologists are already much worse than computer radiologists. If I had to pick a human or an AI to read my scan, I’d pick the AI.”

Fast forward six years to 2025 and look how that worked out. According to a recent report by Works in Progress, despite the fact that radiology combines digital images, clear benchmarks and repeatable tasks, demand for human radiologists is at an all-time high.

The report authors’ conclusions drive a horse and cart through the current AI/AGI narrative which, if left unchecked, will cause severe global economic pain: “In many jobs, tasks are diverse, stakes are high, and demand is elastic. When this is the case, we should expect software to initially lead to more human work, not less. The lesson from a decade of radiology models is neither optimism about increased output nor dread about replacement. Models can lift productivity, but their implementation depends on behaviour, institutions and incentives. For now, the paradox has held – the better the machines, the busier radiologists have become.”

Across other sectors too, the mythology around job losses is slowly being interrogated – for example, Yale University Budget Lab found no discernible disruption to labour markets since ChatGPT’s release 33 months ago.

The research goes on to state: “While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labour market as much, or more dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialise”.

Normal technology

In other words, AI is just, well, technology as we have always known it – or, as experts Arvind Narayanan and Sayash Kapoor call it, just “normal technology”. 

Importantly, in their paper “AI as normal technology: An alternative to the vision of AI as a potential superintelligence”, they identify key lessons from past technological revolutions – the slow and uncertain nature of technology adoption and diffusion; continuity between the past and future trajectory of AI in terms of social impact; and the role of institutions in shaping this trajectory. They also “strongly disagree with the characterisation of generative AI adoption as rapid, which reinforces our assumption about the similarity of AI diffusion to past technologies”.

A good example of AI as normal technology – without the hype, hyperbole and billion-dollar burn rate – is the City of Austin, Texas, where an on-premises AI system helped the local government process building permits in days instead of months.

According to David Stout, CEO of WebAI, this was done “with no spectacle. No headlines. Just efficiency gains that will outlast the market cycle”. He added: “That’s the point too often missed in the frenzy. Mega-models attract headlines, consume billions in capital, and struggle to demonstrate sustainable economics. Meanwhile, smaller, domain-specific systems are already delivering efficiency gains, cost savings and productivity improvements. The smart play isn’t to abandon AI, but to pivot towards models and deployments that will endure”.

Technology like we have always known it to be – not the insane fantasy of “superintelligence” that is powering this dangerous bubble.

The question to ask is: given a lag of at least 33 months before any return on investment, however small, will the markets wait that long for their returns to materialise?

Protracted crisis

A recent report on MarketWatch suggests the AI bubble is now “seventeen times the size of the dotcom frenzy and four times the sub-prime bubble”. MarketWatch quotes financial analyst Julien Garran, who previously led UBS’s commodities strategy team: “AI now accounts for over four times the wealth trapped in the 2008 sub-prime mortgage bubble, which resulted in years of protracted crisis across the globe”.

Warnings from the Bank of England in its semi-annual Financial Policy Committee report are equally stark: “Uncertainty around the global risk environment increases the risk that markets have not fully priced in possible adverse outcomes, and a sudden correction could occur should any of these risks crystallize.”

The bank also warned of the risk of a sharp correction in global financial markets amid AI bubble risks and political pressure on the US Federal Reserve.

A sudden correction would mean the collapse of the AI investment bubble taking trillions of dollars of investment with it, affecting us all.

Even more worrying is the issue of debt financing among those competing in the AI race – that is, all the tech bros. It now appears, according to Axios, that these companies are turning to private debt markets and special purpose vehicles for cash, which means this kind of borrowing does not have to show on their balance sheets.

Meta, for example, recently sought $29bn from private capital firms for its AI datacentres. This off-book debt financing should ring more alarm bells that something is terribly wrong with the AI growth narrative.

After all, as pointed out by the Axios analysts, “If hugely profitable tech companies need to mask their borrowings to fund AI spending, it signals they’re not confident that they’ll soon get the returns needed to justify such investments. That suggests the very spending powering today’s earnings boom can’t last forever.”

Unit economics

To go back to Cory Doctorow’s argument, we are not in the early days of the web, or Amazon, or other dot com companies that lost money before becoming profitable: “Those were all propositions with excellent unit economics. They got cheaper with every successive technological generation and the more customers they added, the more profitable they became”.

AI companies do not have excellent unit economics – in fact they have the opposite, according to Doctorow: “Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money”.

[Only] about 5% of tasks will be able to be profitably performed by AI within 10 years
Daron Acemoglu

And if that’s not sobering enough for the VC and private equity firms, then the circular investing going on between these tech firms should be a huge concern.

Microsoft is investing $10bn in OpenAI by giving free access to its servers. OpenAI reports this as an “investment,” then redeems these tokens at Microsoft datacentres, which Microsoft books as $10bn in revenue.

Bain & Co says the only way to make today’s AI investments profitable “is for the sector to bring in $2tn by 2030,” which, according to the Wall Street Journal, is more than the revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta – combined.

A closer look at US economic growth gives still more cause for concern.

According to Harvard economist Jason Furman’s analysis, GDP growth in the first half of 2025 was driven almost entirely by investment in information processing equipment and software. This spending was largely tied to the rapid expansion of AI infrastructure and datacentres.

While these tech sectors only made up 4% of total GDP, they contributed a staggering 92% of growth. Absent this investment, Furman estimates US GDP growth would have hovered around 0.1% on an annualised basis – barely above zero.
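Furman’s back-of-the-envelope decomposition can be sketched in a few lines. Note the 1.2% headline growth figure below is an assumption for illustration only – the article gives just the 92% contribution share and the roughly 0.1% residual:

```python
# Illustrative version of Furman's growth decomposition.
# headline_growth is assumed; only the 92% share and the ~0.1%
# residual come from the article.
headline_growth = 1.2            # annualised US GDP growth, percent (assumed)
tech_share_of_growth = 0.92      # share of growth from IT equipment/software

tech_contribution = headline_growth * tech_share_of_growth
growth_without_tech = headline_growth - tech_contribution

print(f"Tech contribution: {tech_contribution:.2f} percentage points")
print(f"Growth excluding tech investment: {growth_without_tech:.2f}%")
```

With these assumed inputs, growth excluding the AI-related investment comes out at roughly 0.1% – consistent with Furman’s “barely above zero” estimate.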

There is a lot riding on a technology that’s supposed to be godlike and all-powerful but which, according to MIT Institute Professor Daron Acemoglu, is far less likely to achieve the insane hyperbolic claims being made by the tech bros in an effort to win an unwinnable race.

Acemoglu estimates the 10-year effect of AI in the US will be that only “about 5% of tasks will be able to be profitably performed by AI within that timeframe,” with the GDP boost likely to be closer to 1% over that timespan. If that’s not a recipe for stock market collapse, what is?

Emperor’s new clothes

Going back to the AI booster narrative and how it’s driving things, Doctorow is again incisive: “The most important thing about AI isn’t its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become super intelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer”.

I’m not an economist, so I did what we are all supposed to do now for our enlightenment. I gave the machines built by the tech bros all the same prompt: “What fable best encapsulates the current AI bubble?”

Gemini, Perplexity and ChatGPT were all in agreement with nearly the same explanation of why they all picked the same story: “The emperor’s new clothes remains the best classic fable to explain the AI bubble, as it encapsulates the collective willingness to believe in – and profit from – an imagined reality, until facts and external shocks eventually break the spell.”




Tackling the housing shortage with robotic microfactories



A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population’s needs. At the same time, there are numerous challenges in traditional construction. There’s a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.

Reframe Systems, co-founded by Vikas Enti SM ’20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where the homes are needed. The first homes designed and manufactured in Reframe’s first microfactory have been fully built in Arlington and Somerville, Massachusetts. 

Enti’s experiences in MIT System Design and Management (SDM) shaped the company from its start. “Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy,” he says, “and that’s rooted in what I learned at SDM.”

Better tools for system-level problems

Enti applied to SDM’s master of science in engineering and management while he was working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program’s fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.

While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that “there isn’t a single formula for how businesses start, or how long it takes to get them started,” he says, which helped shape his plans to start his own business.

Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could mitigate falls for elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.

Growing housing, shrinking emissions

Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.

Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.

In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area’s January 2025 wildfires. The company’s software-assisted design process and the adjustability of the microfactories allow them to meet local zoning and building codes and align with the local architectural aesthetic. This means that in Somerville, Reframe’s completed buildings look like modernized versions of the neighboring three-story buildings, known locally as “triple-deckers.” On the other side of the country, Reframe’s design offerings include Spanish-style and craftsman homes.

“Housing is a complex systems problem,” Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM’s academic director, remain relevant. 

“Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we’re finding the optimal value, has been a key part of the business strategy,” Enti says.

Reframe Systems is set to continue learning through iteration as they plan to expand their network of microfactories. The company remains committed to the core vision of sustainably meeting the country’s need for more housing. “I’m grateful we get to do this,” Enti says. “Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow.” 



Framework Has a Better, More Take-Apart-Able Laptop


Framework, the company that makes laptops designed for optimal repairability, announced a new version of its main product, a 13-inch-screen laptop. It’s called the Framework Laptop 13 Pro, and it offers far better battery life, a touchscreen and a haptic touchpad, and is fitted with Intel processors.

At an event in San Francisco today, Framework CEO Nirav Patel showed off the company’s new tech, opening with a joke about making Framework AI—something the company is very much not doing. Framework’s whole thing, after all, is aiming to give users control over the physical tech they use.

“That industry is fighting for you to own nothing, and they own everything,” Patel said about the AI industry. “We’re fighting for a future where you can own everything and be free.”

Framework used the event to detail other updates coming to its 16-inch laptop. It also showed off previews of an official developer kit and a wireless keyboard for controlling your rig from the couch.

Framework 13 Pro

The Framework Laptop 13 Pro. (Image courtesy of Framework)

As the name implies, the 13 Pro is a step up from the company’s last version, the Framework 13. It’s also pricier, starting at $1,199 for a DIY Edition that requires assembling the computer yourself. Pre-built units start at $1,499 but can be upgraded with more features. Framework says it will start shipping the 13 Pro in June.

Framework’s signature move for its products is the ability to take the thing apart. The 13 Pro is made with that ethos in mind, so its parts can be easily swapped out, upgraded, or replaced. Four Thunderbolt 4 interfaces let you pick which ports (USB-C, HDMI, etc.) you want and then choose where to place them. Framework says it planned the laptop with cross-generation compatibility in mind, so current Framework Laptop 13 owners will be able to take new 13 Pro parts like the mainboard, display, and battery, and put them into their existing machine.

The big changes in the guts of the 13 Pro come from Framework’s shift away from an AMD processor to Intel’s Core Ultra Series 3 processors, which Framework described in its press release as “just insanely efficient.” That efficiency, along with a bigger battery, translates to more than 20 hours of battery life while streaming 4K Netflix video; at least, that’s the claim. That’s almost 12 hours longer than the Framework 13.




OpenAI Beefs Up ChatGPT’s Image Generation Model


OpenAI launched a new image generation AI model on Tuesday, dubbed ChatGPT Images 2.0. This model can generate more than one image from a single prompt, like an entire study booklet, as well as output text, including in non-English languages, like Chinese and Hindi. This release is available globally for ChatGPT and Codex users, with a more powerful version available for paying subscribers.

When any major AI company releases a new image model, it can revive interest and boost usage, especially if social media users adopt a meme-able trend, transforming images of themselves. Last year, Google’s launch of the Nano Banana model was a major moment for the company, especially when users started posting hyperrealistic figurines of themselves online. Earlier this year, ChatGPT Images made waves on social media as users shared AI-generated caricatures.

What’s Different?

Since the new model can tap into ChatGPT’s “reasoning” capabilities, Images 2.0 can search the internet for recent information and generate more than one image at a time. In essence, the bot can use additional steps to output more thorough generations from a single prompt. Images 2.0 also has a more recent knowledge cutoff date: December 2025.

This also means that outputs from the new model are more granular. For example, I generated an infographic with San Francisco’s weather forecast for the next day, as well as activities worth doing. The image ChatGPT generated included accurate weather details for the rainy day, along with accurate-looking drawings of the Ferry Building, Castro Theater, Painted Ladies houses, and Transamerica Pyramid.

Additionally, Images 2.0 is more customizable for users who want unique aspect ratios for image outputs. The new model can generate images, ranging from 3:1 wide to 1:3 tall, and users can adjust the image’s size as part of their prompt to the AI tool.

First Impressions

After a few hours of generating images with the new model, I was generally impressed with the text rendering capabilities, in English at least. Not that long ago, image outputs featuring text, from any of the major models, often included numerous malformed characters or words with errant extra letters. ChatGPT struggled to label images accurately two years prior, so the cleaner, more complex outputs from Images 2.0 are a sign of continued improvement. Google has also focused on improving image outputs featuring text in its recent iterations of Nano Banana.

(AI-generated image by Reece Rogers)


