It’s becoming clearer that we are in a perilous financial situation globally. Fears over an “AI bubble” are being cited by the Bank of England, the International Monetary Fund and the boss of JP Morgan, Jamie Dimon.
If you want a sense of how insane the narrative is around AI investments, consider this: Thinking Machines Lab, an AI startup, recently raised $2bn funding on a valuation of $10bn.
The company has zero products, zero customers and zero revenues. The only thing it made public to its investors was the resume of its founder, Mira Murati, formerly chief technology officer at OpenAI. If that’s not hubris meeting market exuberance, what is?
But narrative is crucial here because it’s what’s driving all this insane investment in the future of AI or so-called artificial general intelligence (AGI), and it’s important to examine which narrative you believe in if you are to protect yourself for what’s to come.
If I were to pick between the views of a politician such as UK prime minister Keir Starmer, and a writer such as Cory Doctorow, I’d put my bet on Doctorow. Contrast these two statements and see which you feel more comfortable with…
Doctorow suggests the AI bubble needs to be punctured as soon as possible to “halt this before it progresses any further and to head off the accumulation of social and economic debt”.
He suggests doing that by taking aim at the basis for the AI bubble – namely, “creating a growth story by claiming that AI can do your job”.
"AI is the asbestos we are shovelling into the walls of our society and our descendants will be digging it out for generations" – Cory Doctorow
Claims about jobs disappearing to AI have circulated since at least 2019, when Sam Altman, then president of the venture capital (VC) firm Y Combinator, predicted radiology jobs would vanish: “Human radiologists are already much worse than computer radiologists. If I had to pick a human or an AI to read my scan, I’d pick the AI.”
Fast forward six years to 2025 and look how that worked out. According to a recent report by Works in Progress, despite the fact that radiology combines digital images, clear benchmarks and repeatable tasks, demand for human radiologists is at an all-time high.
The report authors’ conclusions drive a coach and horses through the current AI/AGI narrative which, if left unchecked, will cause severe global economic pain: “In many jobs, tasks are diverse, stakes are high, and demand is elastic. When this is the case, we should expect software to initially lead to more human work, not less. The lesson from a decade of radiology models is neither optimism about increased output nor dread about replacement. Models can lift productivity, but their implementation depends on behaviour, institutions and incentives. For now, the paradox has held – the better the machines, the busier radiologists have become.”
Across other sectors too, the mythology around job losses is slowly being interrogated – for example, Yale University’s Budget Lab found no discernible disruption to labour markets in the 33 months since ChatGPT’s release.
The research goes on to state: “While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labour market as much, or more dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialise.”
Normal technology
In other words, AI is just, well, technology as we have always known it – or, as experts Arvind Narayanan and Sayash Kapoor call it, just “normal technology”.
Importantly, in their paper, AI as normal technology: An alternative to the vision of AI as a potential superintelligence, they identify key lessons from past technological revolutions – the slow and uncertain nature of technology adoption and diffusion; continuity between the past and future trajectory of AI in terms of social impact; and the role of institutions in shaping this trajectory. They also “strongly disagree with the characterisation of generative AI adoption as rapid, which reinforces our assumption about the similarity of AI diffusion to past technologies”.
A good example of AI as normal technology – without the hype, hyperbole and billion-dollar burn rates – is the City of Austin, Texas, where an on-premises AI system helped the local government process building permits in days instead of months.
According to David Stout, CEO of WebAI, this was done “with no spectacle. No headlines. Just efficiency gains that will outlast the market cycle.” He added: “That’s the point too often missed in the frenzy. Mega-models attract headlines, consume billions in capital, and struggle to demonstrate sustainable economics. Meanwhile, smaller, domain-specific systems are already delivering efficiency gains, cost savings and productivity improvements. The smart play isn’t to abandon AI, but to pivot towards models and deployments that will endure.”
Technology like we have always known it to be – not the insane fantasy of “superintelligence” that is powering this dangerous bubble.
The question to ask is this: given that widespread effects from new technologies typically take longer than 33 months to materialise, will the markets wait that long for their returns, however small?
Protracted crisis
A recent report on MarketWatch suggests the AI bubble is now “seventeen times the size of the dot com frenzy and four times the sub-prime bubble”. MarketWatch quotes financial analyst Julien Garran, who previously led UBS’s commodities strategy team: “AI now accounts for over four times the wealth trapped in the 2008 sub-prime mortgage bubble, which resulted in years of protracted crisis across the globe”.
Warnings from the Bank of England in its semi-annual Financial Policy Committee report are equally stark: “Uncertainty around the global risk environment increases the risk that markets have not fully priced in possible adverse outcomes, and a sudden correction could occur should any of these risks crystallise.”
The bank also warned of “the risk of a sharp market correction for global financial markets amid AI bubble risks and political pressure on the Federal Reserve”.
A sudden correction would mean the collapse of the AI investment bubble taking trillions of dollars of investment with it, impacting us all.
Even more worrying is the issue of debt financing among those competing in the AI race – that is, all the tech bros. It now appears, according to Axios, that these companies are turning to private debt markets and special purpose vehicles for cash, which means this kind of borrowing does not have to show on their balance sheets.
Meta, for example, recently sought $29bn from private capital firms for its AI datacentres. This off-book debt financing should ring more alarm bells that something is terribly wrong with the AI growth narrative.
After all, as pointed out by the Axios analysts, “If hugely profitable tech companies need to mask their borrowings to fund AI spending, it signals they’re not confident that they’ll soon get the returns needed to justify such investments. That suggests the very spending powering today’s earnings boom can’t last forever.”
Unit economics
To go back to Cory Doctorow’s argument, we are not in the early days of the web, or Amazon, or other dot com companies that lost money before becoming profitable: “Those were all propositions with excellent unit economics. They got cheaper with every successive technological generation and the more customers they added, the more profitable they became”.
AI companies do not have excellent unit economics – in fact they have the opposite, according to Doctorow: “Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money”.
“[Only] about 5% of tasks will be able to be profitably performed by AI within 10 years” – Daron Acemoglu
And if that’s not sobering enough for the VC and private equity firms, then the circular investing going on between these tech firms should be a huge concern.
Microsoft, for example, is “investing” $10bn in OpenAI by giving it credits for free access to its servers. OpenAI reports this as an investment, then redeems those credits at Microsoft datacentres, which Microsoft books as $10bn in revenue.
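The circularity described above can be sketched as a toy ledger. This is purely illustrative arithmetic, not real accounting – the variable names and booking labels are assumptions made for the sketch:

```python
# Toy ledger illustrating the circular investment flow described above.
# Illustrative only: real accounting treatment is far more complex.
credits_bn = 10                          # $10bn of server credits granted

openai_investment_booked = credits_bn    # recorded as investment raised
microsoft_revenue_booked = credits_bn    # credits redeemed, booked as revenue

# Both ledgers grow by $10bn, yet the net cash that changed hands is zero.
net_cash_moved = openai_investment_booked - microsoft_revenue_booked
print(net_cash_moved)  # 0
```

The point of the sketch: headline “investment” and “revenue” figures can both inflate without any new money entering the system.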
Bain & Co says the only way to make today’s AI investments profitable “is for the sector to bring in $2tn by 2030,” which, according to the Wall Street Journal, is more than the revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta – combined.
A closer look at US economic growth gives even more cause for concern.
According to Harvard economist Jason Furman’s analysis, GDP growth in the first half of 2025 was driven almost entirely by investment in information processing equipment and software. This spending was largely tied to the rapid expansion of AI infrastructure and datacentres.
While these tech sectors only made up 4% of total GDP, they contributed a staggering 92% of growth. Absent this investment, Furman estimates US GDP growth would have hovered around 0.1% on an annualised basis – barely above zero.
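Furman’s figures can be sanity-checked with back-of-the-envelope arithmetic. The headline growth rate used below is an assumption for illustration (roughly 1.2% annualised, not stated above); the 92% share is the figure cited:

```python
# Back-of-the-envelope check of the GDP-contribution arithmetic above.
# Assumption: ~1.2% annualised GDP growth in H1 2025 (illustrative figure).
total_growth_pct = 1.2
ai_share_of_growth = 0.92  # 92% of growth from info-processing equipment/software

# Growth left over if the AI-related investment were stripped out
residual_growth_pct = total_growth_pct * (1 - ai_share_of_growth)
print(round(residual_growth_pct, 2))  # 0.1 - consistent with Furman's estimate
```

A sector supplying 4% of GDP but 92% of its growth means the other 96% of the economy is, on this arithmetic, barely growing at all.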
There is a lot riding on a technology that’s supposed to be godlike and all-powerful but which, according to MIT Institute Professor Daron Acemoglu, is far less likely to live up to the hyperbolic claims being made by the tech bros in an effort to win an unwinnable race.
Acemoglu estimates the 10-year effect of AI in the US will be that only “about 5% of tasks will be able to be profitably performed by AI within that timeframe,” with the GDP boost likely to be closer to 1% over that timespan. If that’s not a recipe for stock market collapse, what is?
Emperor’s new clothes
Going back to the AI booster narrative and how it’s driving things, Doctorow is again incisive: “The most important thing about AI isn’t its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become super intelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer”.
I’m not an economist, so I did what we are all supposed to do now for our enlightenment. I gave the machines built by the tech bros the same prompt: “What fable best encapsulates the current AI bubble?”
Gemini, Perplexity and ChatGPT were all in agreement with nearly the same explanation of why they all picked the same story: “The emperor’s new clothes remains the best classic fable to explain the AI bubble, as it encapsulates the collective willingness to believe in – and profit from – an imagined reality, until facts and external shocks eventually break the spell.”
Deveillance also claims the Spectre can find nearby microphones by detecting radio frequencies (RF), but critics say finding a microphone via RF emissions is not effective unless the sensor is immediately beside it.
“If you could detect and recognize components via RF the way Spectre claims to, it would literally be transformative to technology,” Jordan wrote in a text to WIRED after he built a device to test detecting RF signatures in microphones. “You’d be able to do radio astronomy in Manhattan.”
Deveillance is also looking at ways to integrate nonlinear junction detection (NLJD), a technique in which a very high-frequency radio signal is used by security professionals to find hidden mics and bugs. NLJD detectors are expensive and used primarily in professional contexts like military operations.
Even if a device could detect a microphone’s exact location, objects around a room can change how the frequencies spread and interact. The emitted frequencies could also be a problem. There haven’t been adequate studies to show what effects ultrasonic frequencies have on the human ear, but some people and many pets can hear them and find them obnoxious or even painful. Baradari acknowledges that her team needs to do more testing to see how pets are affected.
“They simply cannot do this,” engineer and YouTuber Dave Jones (who runs the channel EEVblog) wrote in an email to WIRED. “They are using the classic trick of using wording to imply that it will detect every type of microphone, when all they are probably doing is scanning for Bluetooth audio devices. It’s totally lame.” Baradari reiterates that the Spectre uses a combination of RF and Bluetooth low energy to detect microphones.
WIRED asked Baradari to share any evidence of the Spectre’s effectiveness at identifying and blocking microphones in a person’s vicinity. Baradari shared a few short video clips of people putting their phones to their ears and listening to audio clips—which were presumably jammed by the Spectre—but these videos do little to prove that the device works.
Future Imperfect
Baradari has taken the critiques in stride, acknowledging that the tech is still in development. “I actually appreciate those comments, because they’re making me think and see more things as well,” Baradari says. “I do believe that with the ideas that we’re having and integrating into one device, these concerns can be addressed.”
People were quick to poke fun at the Spectre I online, calling the technology the cone of silence from Dune. Now, the Deveillance website reads, “Our goal is to make the cone of silence become reality.”
John Scott-Railton, a cybersecurity researcher at Citizen Lab, who is critical of the Spectre I, lauded the device’s virality as an indication of the real hunger for these kinds of gadgets to win back our privacy.
“The silver lining of this blowing up is that it is a Ring-like moment that highlights how quickly and intensely consumer attitudes have shifted around pervasive recording devices,” says Scott-Railton. “We need to be building products that do all the cool things that people want but that don’t have the massive privacy- and consent-violation undertow. You need device-level controls, and you need regulations of the companies that are doing this.”
Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, echoed those sentiments, even if critics believe Deveillance’s efforts to be flawed.
“If this technology works, it could be a boon for many,” Quintin wrote in an email to WIRED. “It is nice to see a company creating something to protect privacy instead of working on new and creative ways to extract data from us.”
Portrait Light: You can change up the lighting in your portrait selfies after you take them by opening them up in Google Photos, tapping the Edit button, and heading to Actions > Portrait Light. This adds an artificial light you can place anywhere in the photo to brighten up your face and erase that 5 o’clock shadow. Use the slider at the bottom to tweak the strength of the light. It also works on older Portrait mode photos you may have captured. It works only on faces.
Health and Accessibility Features
Cough & Snore Detection (Tensor G2 and newer): On the Pixel 7 and newer, you can have your Pixel detect if you cough and snore when sleeping, provided you place your Pixel near your bed before you nod off. This will work only if you use Google’s Bedtime mode function, which you can turn on by heading to Settings > Digital Wellbeing & Parental Controls > Bedtime Mode.
Guided Frame (Tensor G2 and newer): For blind or low-vision people, the camera app can now help take a selfie with audio cues (it works with the front and rear cameras). You’ll need to enable TalkBack for this to work (Settings > Accessibility > TalkBack). Then open the camera app. It will automatically help you frame the shot.
Simple View: This mode makes the font size bigger, along with other elements on the screen, like widgets and quick-settings tiles. It also increases touch sensitivity, all of which hopefully makes it easier to see and use the screen. You can enable it by heading to Settings > Accessibility > Simple View.
Safety and Security Features
Theft Protection: This is a broader Android 15 feature, but essentially, Google’s algorithms can figure out if someone snatches your Pixel out of your hands. If they’re trying to get away, the device automatically locks. Additionally, with another device, you can use Remote Lock to lock your stolen Pixel with your phone number and a security answer. To toggle these features on, go to Settings > Security & privacy > Device unlock > Theft protection.
Identity Check: If your Pixel detects you’re in a new location, Identity Check will require your fingerprint or face authentication before you can make any changes to sensitive settings, offering extra peace of mind in case you lose your phone or if it’s stolen. You can enable this in Settings > Security & privacy > Device unlock > Theft protection > Identity Check.
Private Space: Another Android 15 addition, Pixel phones finally have a feature that lets you hide and lock select apps. You can use a separate Google account, set a lock, and install any app to hide away. To set it all up, head to Settings > Security & privacy > Private space.
Satellite eSOS (Pixel 9 and Pixel 10 series, excluding Pixel 9a): Like Apple’s SOS feature on iPhones, you can now reach emergency contacts or emergency services even when you don’t have cell service or Wi-Fi connectivity. It’s not just available in the continental US, but also in Hawaii, Alaska, Canada, and even Europe.