Tech
What could burst the AI bubble?
Some of the world’s biggest tech firms have soared in value over the last year. As AI evolves at pace, there are hopes that it will improve lives in ways that people could never have imagined a decade ago—in sectors as diverse as health care, employment and scientific discovery.
OpenAI is now worth US$500 billion (£373 billion), compared with US$157 billion last October. Another firm, Anthropic, has almost trebled its valuation. But the Bank of England has now warned of a possible rapid “correction” due to its concerns about these staggering valuation rises.
The question is whether these values are realistic—or based on hype, excitement and unfounded optimism for the potential of AI. Put simply, is AI’s value today a product of what AI will do in future or what people hope it may do? Ultimately, we will only really know if it’s a bubble if it bursts—though the warning signs are evident today.
With hindsight, many claims made during a bubble sound exceedingly optimistic. Take recent headlines and replace the word AI with the word computers, and they often sound a lot more naive.
But predicting the path of technological change is hard. Back in 2000, the Daily Mail declared the internet could be a passing fad. Just a few months earlier, the dotcom boom had peaked.
A burst bubble may not change the end of the journey. The internet was not a passing fad. However, bubbles are extremely disruptive and affect people in very real ways. Stocks fall, pensions suffer, unemployment rises and investment is wasted. Real potential is crowded out in the hype and mania to focus all investment in a small number of stocks and firms.
Right now, we have the first sign of a bubble—a rapid rise in valuations. If these valuations correct and fall, it will have been a bubble. If they continue to rise, we could be seeing a new sustained market focused on the technology of the future.
Of course, it might be that these valuations plateau. What happens then depends on whether people have invested in the belief that prices will always rise.
Consider a situation where people believe—as the Bank of England does—that AI firms’ valuations may be “stretched.” It’s helpful to consider what these valuations are based on. Investment is simply a bet that AI increases profitability for the firms involved. These massive valuations are bets that AI will hugely increase future profitability.
In some cases, these are bets that AI will improve in capabilities towards some kind of “artificial superintelligence” that can do everything a human can do—or more. This could raise the living standards of everyone on Earth. Leading computer scientist Stuart Russell estimates the value of that at US$14 quadrillion—investors are buying a claim on that outcome too.
If investors begin to fear that AI profits won’t materialize, then they will try to get their money back. This realization can appear quite suddenly and can be prompted by seemingly minor events. It doesn’t require a big needle to pop a bubble.
A US article published in March 2000 warned that internet companies were fast running out of money. This caused many people to rethink their investments.
At this stage of the bubble, investment excitement had spread to everyday investors. These regular people balanced their fear of missing out with a fear that they were investing in something new that they didn’t know much about. For many, an article in a popular magazine suggesting they may have made a mistake tipped the scales towards caution. They began to sell their dotcom stocks.
In search of profit
It may come as a surprise to some that, despite its increasing valuations, OpenAI does not yet make a profit. It may require ten times more revenue to do so.
A US$500 billion valuation is quite something for a company that reportedly lost US$7.8 billion in the first half of this year. Some of this value appears to flow from a new deal between OpenAI and Nvidia where Nvidia will invest in OpenAI and OpenAI will buy Nvidia chips. This circular financing keeps everything afloat for now, but at some point investors will need to see returns.
AI firms more generally do not appear to be profitable at the moment. Investors are not putting their money into today’s losses—they are betting on an AI future.
It is of course perfectly feasible that AI firms will develop business models to increase their profitability. OpenAI is exploring advertising options and allowing chatbots to recommend products.
Using AI to deliver these messages is a viable option, though AI firms will have to avoid the tricks and manipulations associated with online platforms, such as hotel websites announcing that rooms are about to sell out. We believe that AI can increase the power of these manipulations, and we wonder how persuasive chatbots may be in their recommendations.
However, the big four—Meta, Alphabet, Microsoft and Amazon—are this year spending the equivalent of the GDP of Portugal on AI infrastructure. This is not investment in new targeted ads, it is investment in an AI future. The bubble will burst if and when this future is in doubt.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: What could burst the AI bubble? (2025, October 11), retrieved 11 October 2025 from https://techxplore.com/news/2025-10-ai.html
Tech
OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway
OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic’s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked “sloppy” in a social media post.
While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.
In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI’s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI’s largest investor, and had broad license to commercialize the startup’s technology.
That same year, OpenAI employees saw Pentagon officials walking through the company’s San Francisco offices, the sources said. They spoke on the condition of anonymity because they aren’t authorized to comment on private company matters.
Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI’s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s policies.
“Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,” said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for “top secret” government workloads until 2025.
“AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” OpenAI spokesperson Liz Bourgeois said in a statement. “We’ve been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.”
The Department of Defense did not respond to WIRED’s request for comment.
In January 2024, OpenAI updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area going forward.
In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic’s AI used for classified military work.
Palantir approached OpenAI in the fall of 2024 to discuss participating in their “FedStart” program, an OpenAI spokesperson confirmed to WIRED. The company ultimately turned it down, and told employees it would’ve been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.
Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company’s military partnerships, sources say and a spokesperson confirmed. Some believed the company’s models were too unreliable to handle a user’s credit card information, let alone assist Americans on the battlefield.
Tech
Don’t Risk Birdwatching FOMO—Put Out Your Hummingbird Feeders Now
Though most people associate the beginning of March with the hopefulness of spring and the indignities of daylight saving time, there’s another important event taking place in yards all over the country: hummingbird season.
While some species of hummingbirds can be seen in certain regions year-round, others are migratory, and this time of year typically marks their return from wintering grounds in Central and South America. These tiny birds can lose up to 40 percent of their body weight over a migration of thousands of miles, and since many flowers haven’t bloomed yet, nectar feeders can be a source of essential fuel.
Though I test smart bird feeders year-round, I don’t use hummingbird feeders as often as I should. They must be cleaned and refilled with fresh nectar every two or three days to prevent deadly bacteria and mold (a ratio of 1:4 granulated sugar to water is best, and avoid any dyes or additives), and I don’t always have the time.
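If it helps to see the 1:4 ratio spelled out, here is a minimal sketch of the arithmetic; the function name is mine and purely illustrative, not from any birding source:

```python
def nectar_sugar_cups(water_cups: float) -> float:
    """Cups of plain granulated sugar for a given volume of water,
    using the standard 1:4 sugar-to-water ratio (no dyes, no additives)."""
    return water_cups / 4

# One cup of sugar dissolved in four cups of water.
print(nectar_sugar_cups(4))  # 1.0
```

So a typical batch is one cup of sugar per four cups of water, scaled down if you only want enough for two or three days.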
But if you are going to invest the energy in maintaining a hummingbird feeder, right now is the best time, as you have a chance to see migratory species you might not otherwise encounter, such as black-chinned hummingbirds. A smart feeder helps you ID them, whether they’re stopping at your feeder on their way north or arriving at their final destination.
Birdbuddy’s Pro is the smart hummingbird feeder I recommend and use myself when I’m not actively testing. The app is easy to navigate and sends cleaning reminders, the built-in solar roof keeps the battery charged, and, unlike other feeders, only the shallow bottom screws off for refilling. No having to pour sticky nectar through a narrow opening, or turn a giant cylinder upside down and risk spilling.
Note that it’s not perfect; the sensor is inconsistent and doesn’t capture every hummingbird that visits, but for the camera quality (5 MP photos, 2K video with slow-motion, 122-degree field of view) and ease of use, it’s a foible I’m willing to put up with. If you already have another Birdbuddy feeder, the hummingbird feeder images and videos will integrate seamlessly into your app feed.
Right now, the feeder is 37 percent off on Birdbuddy’s website—a deal I usually don’t see outside of shopping events like Black Friday or Amazon Prime Day. Note that the feeder only runs on 2.4 GHz Wi-Fi, and while it is fully functional without a subscription, a Birdbuddy Premium subscription will let you add friends and family members to your account so they can see the birds as well. That’s $99 a year through the app.
Tech
The Controversies Finally Caught Up to Kristi Noem
After a tenure marked by controversy and a contentious week of Congressional hearings, secretary Kristi Noem is out as head of the Department of Homeland Security.
President Donald Trump announced in a Truth Social post on Thursday that Noem would be replaced by senator Markwayne Mullin of Oklahoma, a staunch Trump ally and immigration hardliner. “The current Secretary, Kristi Noem, who has served us well, and has had numerous and spectacular results (especially on the Border!), will be moving to be Special Envoy for The Shield of the Americas, our new Security Initiative in the Western Hemisphere we are announcing on Saturday in Doral, Florida,” Trump wrote. “I thank Kristi for her service at ‘Homeland.’”
DHS did not immediately respond to a request for comment.
The agencies under DHS include Immigration and Customs Enforcement, US Customs and Border Protection, the Cybersecurity and Infrastructure Security Agency, the Federal Emergency Management Agency, US Citizenship and Immigration Services, the US Coast Guard, and others. It’s a sprawling network whose vast responsibilities and rapidly expanding budget have put it at the center of the Trump administration’s radical overhaul of immigration and border policy.
Speculation has swirled around Noem’s departure for months. Critics have assailed DHS’s aggressive immigration enforcement tactics, while Noem and figures like White House border czar Tom Homan have reportedly been at odds over how to execute the administration’s mass deportation agenda, with Noem and senior adviser Corey Lewandowski said to have emphasized sheer numbers of arrests and deportations above other considerations.
The relationship between Noem and Lewandowski has itself been a subject of controversy, with CNN reporting that a September meeting between the two and Trump grew “contentious.” Last month, the Wall Street Journal reported that Lewandowski attempted to fire a pilot during a flight for failing to bring Noem’s blanket from one plane to another during a transfer.
The ousted secretary faced mounting scrutiny over the deaths of US citizens during federal operations in Minneapolis, including the killings of Renee Good and Alex Pretti by federal agents under Noem’s command. In both cases, Noem publicly labeled the deceased “domestic terrorists,” a framing echoed by Trump and other key administration officials. Video evidence, witness testimony, and an independent autopsy contradicted the agency’s claims, including early assertions that Pretti brandished a firearm.
Scrutiny of Noem’s tenure extends beyond the fatal shootings in Minneapolis to a broader pattern of aggressive enforcement tactics, warrantless raids, and mass detention camps. A secretive policy directive issued in May 2025, first reported by the Associated Press, authorized ICE agents to forcibly enter private residences without a judicial warrant. The memo, signed by acting ICE director Todd Lyons, instructed agents to rely solely on an administrative removal document to bypass Fourth Amendment requirements. The policy led to multiple documented instances of federal agents entering the wrong homes, including a January raid in Minnesota where agents removed a US citizen at gunpoint with no legitimate reason.
A record 53 people died in ICE or CBP custody last year, according to House Democrats on the Committee on Homeland Security. Concurrently, Noem has initiated a $38 billion procurement effort to buy and refurbish up to 24 warehouses across the country, aimed at converting them into mass detention camps for people awaiting deportation.
Noem’s tenure has led to controversy at other DHS agencies as well. Her insistence on personally approving any contract or grant over $100,000 at the department has caused particular strain at FEMA, which has experienced a massive backlog of funding that has slowed normal processes at the agency. A report issued by Senate Democrats on Wednesday found that Noem’s vetting process at FEMA has caused more than 1,000 contracts, grants, and awards to be held up. Multiple FEMA employees have told WIRED that this process has made the agency less ready to respond to disasters and threats.