Why fears of a trillion-dollar AI bubble are growing
For almost as long as the artificial intelligence boom has been in full swing, there have been warnings of a speculative bubble to rival the dot-com craze of the late 1990s, which ended in a spectacular crash and a wave of bankruptcies.
Tech firms are spending hundreds of billions of dollars on advanced chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines.
The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised eyebrows on Wall Street.
Even some of AI’s biggest cheerleaders acknowledge the market is frothy, while still professing their belief in the technology’s long-term potential. AI, they say, is poised to reshape multiple industries, cure diseases and generally accelerate human progress.
Yet never before has so much money been spent so rapidly on a technology that remains somewhat unproven as a profit-making business model. Tech industry executives who privately doubt the most effusive assessments of AI’s revolutionary potential—or at least struggle to see how to monetize it—may feel they have little choice but to keep pace with their rivals’ investments or risk being out-scaled and sidelined in the future AI marketplace.
Sharp falls in global technology stocks in early November underscored investors’ growing unease over the sector’s sky-high valuations, with Wall Street chief executives warning of an overdue market correction.
What are the warning signs for AI?
When Sam Altman, the chief executive of ChatGPT maker OpenAI, announced a $500 billion AI infrastructure plan known as Stargate alongside other executives at the White House in January, the price tag triggered some disbelief. Since then, other tech rivals have ramped up spending, including Meta’s Mark Zuckerberg, who has pledged to invest hundreds of billions in data centers. Not to be outdone, Altman has since said he expects OpenAI to spend “trillions” on AI infrastructure.
To finance those projects, OpenAI is entering into new territory. In September, chipmaker Nvidia Corp. announced an agreement to invest up to $100 billion in OpenAI’s data center buildout, a deal that some analysts say raises questions about whether the chipmaker is trying to prop up its customers so that they keep spending on its own products.
The concerns have followed Nvidia, to varying degrees, for much of the boom. The dominant maker of AI accelerator chips has backed dozens of companies in recent years, including AI model makers and cloud computing providers. Some of them then use that capital to buy Nvidia’s expensive semiconductors. The OpenAI deal was far larger in scale.
OpenAI has also indicated it could pursue debt financing, rather than leaning on partners such as Microsoft Corp. and Oracle Corp. The difference is that those companies have rock-solid, established businesses that have been profitable for many years. OpenAI expects to burn through $115 billion of cash through 2029, The Information has reported.
Other large tech companies are also relying increasingly on debt to support their unprecedented spending. Meta, for example, turned to lenders to secure $26 billion in financing for a planned data center complex in Louisiana that it says will eventually approach the size of Manhattan. JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are also leading a loan of more than $22 billion to support Vantage Data Centers’ plan to build a massive data-center campus, Bloomberg News has reported.
So how about the payback?
By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand, Bain & Co. said in a report released in September. Yet their revenue is likely to fall $800 billion short of that mark, Bain predicted, implying the industry is on track to generate only about $1.2 trillion.
“The numbers that are being thrown around are so extreme that it’s really, really hard to understand them,” said David Einhorn, a prominent hedge fund manager and founder of Greenlight Capital. “I’m sure it’s not zero, but there’s a reasonable chance that a tremendous amount of capital destruction is going to come through this cycle.”
In a sign of the times, there’s also a growing number of less proven firms trying to capitalize on the data center gold rush. Nebius, an Amsterdam-based cloud provider that split off from Russian internet giant Yandex in 2024, recently inked an infrastructure deal with Microsoft worth up to $19.4 billion. And Nscale, a little-known British data center company, is working with Nvidia, OpenAI and Microsoft on build-outs in Europe. Like some other AI infrastructure providers, Nscale previously focused on another frothy sector: cryptocurrency mining.
Are there concerns about the technology itself?
The data center spending spree is overshadowed by persistent skepticism about the payoff from AI technology. In August, investors were rattled after researchers at the Massachusetts Institute of Technology found that 95% of organizations saw zero return on their investment in AI initiatives.
More recently, researchers at Harvard and Stanford offered a possible explanation for why. Employees are using AI to create “workslop,” which the researchers define as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
The promise of AI has long been that it would help streamline tasks and boost productivity, making it an invaluable asset for workers and one that corporations would pay top dollar for. Instead, the Harvard and Stanford researchers found the prevalence of workslop could cost larger organizations millions of dollars a year in lost productivity.
AI developers have also been confronting a different challenge. OpenAI, Claude chatbot developer Anthropic and others have for years bet on the so-called scaling laws—the idea that more computing power, data and larger models will inevitably pave the way for greater leaps in the power of AI.
Eventually, they say, these advances will lead to artificial general intelligence, a hypothetical form of the technology so sophisticated that it matches or exceeds humans in most tasks.
Over the past year, however, these developers have experienced diminishing returns from their costly efforts to build more advanced AI. Some have also struggled to match their own hype.
After months of touting GPT-5 as a significant leap, OpenAI’s release of its latest AI model in August was met with mixed reviews. In remarks around the launch, Altman conceded that “we’re still missing something quite important” to reach AGI.
Those concerns are compounded by growing competition from China, where companies are flooding the market with competitive, low-cost AI models. While U.S. firms are generally still viewed as ahead in the race, the Chinese alternatives risk undercutting Silicon Valley on price in certain markets, making it harder to recoup the significant investment in AI infrastructure.
There’s also the risk that the AI industry’s vast data center buildout, entailing a huge increase in electricity consumption, will be held back by the limitations of national power networks.
What does the AI industry say in response?
Sam Altman, the face of the current AI boom, has repeatedly acknowledged the risk of a bubble in recent months while maintaining his optimism for the technology. “Are we in a phase where investors as a whole are overexcited about AI? In my opinion, yes,” he said in August. “Is AI the most important thing to happen in a very long time? My opinion is also yes.”
Altman and other tech leaders continue to express confidence in the roadmap toward AGI, with some suggesting it could be closer than skeptics think.
“Developing superintelligence is now in sight,” Zuckerberg wrote in July, referencing an even more powerful form of AI that his company is aiming for. In the near term, some AI developers also say they need to drastically ramp up computing capacity to support the rapid adoption of their services.
Altman, in particular, has stressed repeatedly that OpenAI remains constrained in computing resources as hundreds of millions of people around the world use its services to converse with ChatGPT, write code and generate images and videos.
OpenAI and Anthropic have also released their own research and evaluations that indicate AI systems are having a meaningful impact on work tasks, in contrast to the more damning reports from outside academic institutions. An Anthropic report released in September found that roughly three quarters of companies are using Claude to automate work.
The same month, OpenAI released a new evaluation system called GDPval that measures the performance of AI models across dozens of occupations.
“We found that today’s best frontier models are already approaching the quality of work produced by industry experts,” OpenAI said in a blog post. “Especially on the subset of tasks where models are particularly strong, we expect that giving a task to a model before trying it with a human would save time and money.”
So how much will customers eventually be willing to pay for these services? The hope among developers is that, as AI models improve and field more complex tasks on users’ behalf, they will be able to convince businesses and individuals to spend far more to access the technology.
“I want the door open to everything,” OpenAI Chief Financial Officer Sarah Friar said in late 2024, when asked about a report that the company has discussed a $2,000 monthly subscription for its AI products. “If it’s helping me move about the world with literally a Ph.D.-level assistant for anything that I’m doing, there are certainly cases where that would make all the sense in the world.”
In September, Zuckerberg said an AI bubble is “quite possible,” but stressed that his bigger concern is not spending enough to meet the opportunity. “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” he said in a podcast interview. “But what I’d say is I actually think the risk is higher on the other side.”
What makes a market bubble?
Bubbles are economic cycles defined by a swift increase in market values to levels that aren’t supported by the underlying fundamentals. They’re usually followed by a sharp selloff—the so-called pop.
A bubble often begins when investors get swept up in a speculative frenzy—over a new technology or other market opportunity—and pile in for fear of missing out on further gains. American economist Hyman Minsky identified five stages of a market bubble: displacement, boom, euphoria, profit-taking and panic.
Bubbles are sometimes difficult to spot because market prices can become dislocated from real-world values for many reasons, and a sharp price drop isn’t always inevitable. And because the pop is only confirmed in hindsight, bubbles can be hard to pinpoint until after the fact.
Generally, bubbles pop when investors realize that the lofty expectations they had were too high. This usually follows a period of over-exuberance that tips into mania, when everyone is buying into the trend at the very top.
What comes next is usually a slow, prolonged selloff where company earnings start to suffer, or a singular event that changes the long-term view, sending investors dashing for the exits.
There was some fear that an AI bubble had already popped in late January, when China’s DeepSeek upended the market with the release of a competitive AI model purportedly built at a fraction of the amount that top U.S. developers spend. DeepSeek’s viral success triggered a trillion-dollar selloff of technology shares. Nvidia, a bellwether AI stock, slumped 17% in one day.
The DeepSeek episode underscored the risks of investing heavily in AI. But Silicon Valley remained largely undeterred. In the months that followed, tech companies redoubled their costly AI spending plans, and investors resumed cheering on these bets. Nvidia shares charged back from an April low to fresh records. It was worth more than $4 trillion by the end of September, making it the most valuable company in the world.
So is this 1999 all over again?
As with today’s AI boom, the companies at the center of the dot-com frenzy drew in vast amounts of investor capital, often using questionable metrics such as website traffic rather than their actual ability to turn a profit. There were many flawed business models and exaggerated revenue projections.
Telecommunication companies raced to build fiber-optic networks only to find the demand wasn’t there to pay for them. When it all crashed in 2001, many companies were liquidated, others absorbed by healthier rivals at knocked-down prices.
Echoes of the dot-com era can be found in AI’s massive infrastructure build-out, sky-high valuations and showy displays of wealth. Venture capital investors have been courting AI startups with private jets, box seats and big checks.
Many AI startups tout their recurring revenue as a key metric for growth, but there are doubts as to how sustainable or predictable those projections are, particularly for younger businesses. Some AI firms are completing multiple mammoth fundraisings in a single year. Not all will necessarily flourish.
“I think there’s a lot of parallels to the internet bubble,” said Bret Taylor, OpenAI’s chairman and the CEO of Sierra, an AI startup valued at $10 billion. Like the dot-com era, a number of high-flying companies will almost certainly go bust. But in Taylor’s telling, there will also be large businesses that emerge and thrive over the long term, just as happened with Amazon.com Inc. and Alphabet Inc.’s Google in the late 90s.
“It is both true that AI will transform the economy, and I think it will, like the internet, create huge amounts of economic value in the future,” Taylor said. “I think we’re also in a bubble, and a lot of people will lose a lot of money.”
Amazon Chairman Jeff Bezos said the spending on AI resembles an “industrial bubble” akin to the biotech bubble of the 1990s, but he still expects it to improve the productivity of “every company in the world.”
There are also some key differences from the dot-com boom that market watchers point out, the first being the broad health and stability of the biggest businesses at the forefront of the trend. Most of the “Magnificent Seven” group of U.S. tech companies are long-established giants that account for much of the earnings growth in the S&P 500 Index. These firms have huge revenue streams and are sitting on large stockpiles of cash.
Despite the skepticism, AI adoption has also proceeded at a rapid clip. OpenAI’s ChatGPT has about 700 million weekly users, making it one of the fastest-growing consumer products in history. Top AI developers, including OpenAI and Anthropic, have also seen remarkably strong sales growth. OpenAI previously forecast revenue would more than triple in 2025 to $12.7 billion.
While the company does not expect to be cash-flow positive until near the end of this decade, a recent deal to help employees sell shares gave it an implied valuation of $500 billion—making it the world’s most valuable company never to have turned a profit.
© 2025 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
Researchers propose a new model for legible, modular software
Coding with large language models (LLMs) holds huge promise, but it also exposes some long-standing flaws in software: code that’s messy, hard to change safely, and often opaque about what’s really happening under the hood. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are charting a more “modular” path ahead.
Their new approach breaks a system into “concepts,” separate pieces each designed to do one job well, and “synchronizations,” explicit rules that describe exactly how those pieces fit together. The result is software that’s more modular, transparent, and easier to understand.
A small domain-specific language (DSL) makes it possible to express synchronizations simply, in a form that LLMs can reliably generate. In a real-world case study, the team showed how this method can bring together features that would otherwise be scattered across multiple services. The paper is published in the Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.
The team, including Daniel Jackson, an MIT professor of electrical engineering and computer science (EECS) and CSAIL associate director, and Eagon Meng, an EECS Ph.D. student, CSAIL affiliate, and designer of the new synchronization DSL, explores this approach in their paper “What You See Is What It Does: A Structural Pattern for Legible Software,” which they presented at the SPLASH conference in Singapore in October.
The challenge, they explain, is that in most modern systems, a single feature is never fully self-contained. Adding a “share” button to a social platform like Instagram, for example, doesn’t live in just one service. Its functionality is split across code that handles posting, notifications, user authentication, and more. All these pieces, despite being scattered across the code, must be carefully aligned, and any change risks unintended side effects elsewhere.
Jackson calls this “feature fragmentation,” a central obstacle to software reliability. “The way we build software today, the functionality is not localized. You want to understand how ‘sharing’ works, but you have to hunt for it in three or four different places, and when you find it, the connections are buried in low-level code,” says Jackson.
Concepts and synchronizations are meant to tackle this problem. A concept bundles up a single, coherent piece of functionality, like sharing, liking, or following, along with its state and the actions it can take. Synchronizations, on the other hand, describe at a higher level how those concepts interact.
Rather than writing messy low-level integration code, developers can use a small domain-specific language to spell out these connections directly. In this DSL, the rules are simple and clear: one concept’s action can trigger another, so that a change in one piece of state can be kept in sync with another.
“Think of concepts as modules that are completely clean and independent. Synchronizations then act like contracts—they say exactly how concepts are supposed to interact. That’s powerful because it makes the system both easier for humans to understand and easier for tools like LLMs to generate correctly,” says Jackson.
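The article doesn’t reproduce the DSL itself, so the sketch below is a hypothetical TypeScript rendering of the idea: each concept is a self-contained module that knows nothing about the others, and a synchronization is one explicit, declarative rule, registered in a single place, saying that one concept’s action triggers another’s. The Runtime, ShareConcept, and NotificationConcept names are invented for illustration, not taken from the paper.

```typescript
// Minimal sketch of concepts-and-synchronizations, under assumed names.
type Action = { concept: string; name: string; args: Record<string, string> };

// Tiny runtime: concepts announce their actions here; synchronizations
// subscribe to them. Concepts never reference one another directly.
class Runtime {
  private rules: Array<{ on: string; run: (a: Action) => void }> = [];

  // Register a synchronization: "when concept X performs action a, do Y."
  sync(on: string, run: (a: Action) => void): void {
    this.rules.push({ on, run });
  }

  emit(a: Action): void {
    const key = `${a.concept}.${a.name}`;
    for (const rule of this.rules) if (rule.on === key) rule.run(a);
  }
}

// "Share" concept: records shares and announces the action, nothing else.
class ShareConcept {
  constructor(private rt: Runtime) {}
  share(postId: string, byUser: string): void {
    // ...update this concept's own state here...
    this.rt.emit({ concept: "Share", name: "share", args: { postId, byUser } });
  }
}

// "Notification" concept: delivers notifications, nothing else.
class NotificationConcept {
  notify(user: string, message: string): void {
    console.log(`-> ${user}: ${message}`);
  }
}

// The synchronization is the one place the two features are wired together,
// so "sharing notifies the author" is stated once, legibly, not buried.
const rt = new Runtime();
const notifications = new NotificationConcept();
const shares = new ShareConcept(rt);

rt.sync("Share.share", (a) =>
  notifications.notify(`author-of-${a.args.postId}`, `shared by ${a.args.byUser}`)
);

shares.share("post-42", "alice"); // -> author-of-post-42: shared by alice
```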
“Why can’t we read code like a book? We believe that software should be legible and written in terms of our understanding: our hope is that concepts map to familiar phenomena, and synchronizations represent our intuition about what happens when they come together,” says Meng.
The benefits extend beyond clarity. Because synchronizations are explicit and declarative, they can be analyzed, verified, and of course generated by an LLM. This opens the door to safer, more automated software development, where AI assistants can propose new features without introducing hidden side effects.
In their case study, the researchers assigned features like liking, commenting, and sharing each to a single concept—like a microservices architecture, but more modular. Without this pattern, these features were spread across many services, making them hard to locate and test. Using the concepts-and-synchronizations approach, each feature became centralized and legible, while the synchronizations spelled out exactly how the concepts interacted.
The study also showed how synchronizations can factor out common concerns like error handling, response formatting, or persistent storage. Instead of embedding these details in every service, synchronization can handle them once, ensuring consistency across the system.
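To make that concrete, here is an equally hypothetical extension of the same TypeScript sketch: error handling is applied once, at the point where synchronization rules are registered, rather than re-implemented inside every concept.

```typescript
// Assumed types matching the earlier sketch.
type Action = { concept: string; name: string; args: Record<string, string> };
type Reaction = (a: Action) => void;

// One wrapper gives every synchronization rule the same error handling,
// keeping the concepts themselves free of this cross-cutting boilerplate.
function withErrorHandling(run: Reaction): Reaction {
  return (a: Action) => {
    try {
      run(a);
    } catch (err) {
      console.error(`${a.concept}.${a.name} reaction failed:`, err);
    }
  };
}

// Usage with the earlier runtime:
// rt.sync("Share.share", withErrorHandling(reaction));
```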
More advanced directions are also possible. Synchronizations could coordinate distributed systems, keeping replicas on different servers in step, or allow shared databases to interact cleanly. Weakening synchronization semantics could enable eventual consistency while still preserving clarity at the architectural level.
Jackson sees potential for a broader cultural shift in software development. One idea is the creation of “concept catalogs,” shared libraries of well-tested, domain-specific concepts. Application development could then become less about stitching code together from scratch and more about selecting the right concepts and writing the synchronizations between them.
“Concepts could become a new kind of high-level programming language, with synchronizations as the programs written in that language. It’s a way of making the connections in software visible,” says Jackson. “Today, we hide those connections in code. But if you can see them explicitly, you can reason about the software at a much higher level. You still have to deal with the inherent complexity of features interacting. But now it’s out in the open, not scattered and obscured.”
“Building software for human use on abstractions from underlying computing machines has burdened the world with software that is all too often costly, frustrating, even dangerous, to understand and use,” says University of Virginia Associate Professor Kevin Sullivan, who wasn’t involved in the research.
“The impacts (such as in health care) have been devastating. Meng and Jackson flip the script and insist on building interactive software on abstractions from human understanding, which they call ‘concepts.’ They combine expressive mathematical logic and natural language to specify such purposeful abstractions, providing a basis for verifying their meanings, composing them into systems, and refining them into programs fit for human use. It’s a new and important direction in the theory and practice of software design that bears watching.”
“It’s been clear for many years that we need better ways to describe and specify what we want software to do,” adds Thomas Ball, Lancaster University honorary professor and University of Washington affiliate faculty, who also wasn’t involved in the research. “LLMs’ ability to generate code has only added fuel to the specification fire. Meng and Jackson’s work on concept design provides a promising way to describe what we want from software in a modular manner. Their concepts and specifications are well-suited to be paired with LLMs to achieve the designer’s intent.”
Looking ahead, the researchers hope their work can influence how both industry and academia think about software architecture in the age of AI. “If software is to become more trustworthy, we need ways of writing it that make its intentions transparent,” says Jackson. “Concepts and synchronizations are one step toward that goal.”
More information:
Eagon Meng et al, What You See Is What It Does: A Structural Pattern for Legible Software, Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (2025). DOI: 10.1145/3759429.3762628
‘Vibe coding’ named word of the year by Collins dictionary
“Vibe coding,” a word that essentially means using artificial intelligence (AI) to tell a machine what you want instead of coding it yourself, was on Thursday named the Collins Word of the Year 2025.
Coined by OpenAI co-founder Andrej Karpathy, the word refers to “an emerging software development practice that turns natural language into computer code using AI,” according to Collins Dictionary.
“It’s programming by vibes, not variables,” said Collins.
“While tech experts debate whether it’s revolutionary or reckless, the term has resonated far beyond Silicon Valley, speaking to a broader cultural shift toward AI-assisted everything in everyday life,” it added.
Lexicographers at Collins Dictionary monitor the 24 billion-word Collins Corpus, which draws from a range of media sources including social media, to create the annual list of new and notable words that reflect our ever-evolving language.
The 2025 shortlist highlights a range of words that have emerged in the past year to pithily reflect the changing world around us.
“Broligarchy” made the list in a year that saw tech billionaire Elon Musk briefly at the heart of US President Donald Trump’s administration and Amazon founder Jeff Bezos cozying up to the president.
The word is defined as a small clique of very wealthy men who exert political influence.
‘Coolcation’
New words linked to work and technology include “clanker,” a derogatory term for a computer, robot or source of artificial intelligence, and “HENRY,” an acronym for high earner, not rich yet.
Another is “taskmasking,” the act of giving a false impression that one is being productive in the workplace, while “micro-retirement” refers to a break taken between periods of employment to pursue personal interests.
In the health and behavioral sphere, “biohacking” also gets a spot, meaning the activity of altering the natural processes of one’s body in an attempt to improve one’s health and longevity.
Also listed is “aura farming,” the deliberate cultivation of a distinctive and charismatic persona, and the verb “to glaze,” meaning to praise or flatter someone excessively or undeservedly.
Although the list is dominated by words linked to technology and employment, one from the world of leisure bags a spot—”coolcation,” meaning a holiday in a place with a cool climate.
Last year’s word of the year was “Brat,” the name of UK singer Charli XCX’s hit sixth album, signifying a “confident, independent, and hedonistic attitude” rather than simply a term for a badly-behaved child.
© 2025 AFP
I’ve Tested a Lot of Bad, Cheap Laptops. These Ones Are Actually Good
Other Budget Laptops to Consider
The Acer Chromebook Plus Spin 714 for $750: The Acer Chromebook Plus Spin 714 (9/10, WIRED Recommends) checks a lot of boxes. It has a surprisingly premium feel for such an affordable machine, and the keyboard and trackpad are excellent for those of us who type all day long. It also has one of the best displays I’ve seen on a Chromebook, with fantastic colors that pop off the glossy touch display. It’s just a bit too expensive compared to something like the new Lenovo Chromebook Plus 14.
Acer Swift Go 14 for $730: The Acer Swift Go 14 (7/10, WIRED Recommends) has a chintzy build quality, a stiff touchpad, and lackluster keyboard backlighting, but it’s hard to beat the performance you get at this price. There’s also an array of ports that make it very versatile, including a microSD card slot. The Intel Core Ultra 7 155H chip with 16 GB of RAM packs a surprisingly powerful punch in productivity work, and our tester noted decent results in AI tasks as well. We averaged 11 hours in our battery test (with a full-brightness YouTube video on loop), which is respectable.
Asus Chromebook Plus CX34 for $260: If you want to stand out from the crowd a bit and don’t need Windows, the Asus Chromebook Plus CX34 (7/10, WIRED Recommends) is the best-looking Chromebook. When I got my hands on the CX34, I was impressed by its beautiful white design that stands out in a sea of gray slabs. It’s not left wanting for power, either, with the Core i5 CPU inside offering plenty of performance to easily handle multiple tabs and app juggling.
What Are Important Specs in a Cheap Laptop?
Read our How to Choose the Right Laptop guide if you want all the details on specs and what to look for. In short, your budget is the most important factor, as it determines what you can expect out of the device you’re purchasing. But you should consider display size, chassis thickness, CPU, memory, storage, and port selection. While appropriate specs can vary wildly when you’re considering laptops ranging from $200 to $800, there are a few hard lines I don’t recommend crossing.
For example, don’t buy a laptop if it doesn’t have a display resolution of at least 1920 x 1080. In 2025, there’s just no excuse for anything less than that. You should also never buy a laptop without at least 8 GB of RAM and 128 GB of storage. Even in Chromebooks, these specs are becoming the new standard. You’re selling yourself short by getting anything less. Another rule is to avoid a Windows laptop with an Intel Celeron processor—leave those for Chromebooks only.
Specs are only half the battle though. Based on our years of testing, laptop manufacturers tend to make compromises in display quality and touchpad quality. You can’t tell from the photos or listed specs online, but once you get the laptop in your hands, you may notice that the colors of the screen look a bit off or that the touchpad feels choppy to use. It’s nearly impossible to find laptops under $500 that don’t compromise in these areas, but this is where our reviewers and testers can help.
How Much RAM Do You Need in a Cheap Laptop?
The simple answer? You need at least 8 GB of RAM. These days, there are even some Windows laptops at around $700 or $800 that come with 16 GB of RAM standard, as part of the Copilot+ PC marketing push. That’s a great value, and ensures you’ll get the best performance out of your laptop, especially when running heavier applications or multitasking. Either way, it’s important to factor in the price of the RAM, because manufacturers will often charge $100 or even $200 to double the memory.
On Chromebooks, there are some rare occasions where 4 GB of RAM is acceptable, but only on the very cheapest models that are under $200. Even budget Chromebooks like the Asus Chromebook CX15 now start with 8 GB of RAM.
Are There Any Good Laptops Under $300?
Yes, but you need to be careful. Don’t just go buy a random laptop on Amazon under $300, as you’ll likely end up with an outdated, slow device that you’ll regret purchasing. You might be tempted by the rock-bottom listings you’ll see there, but trust me—there are better options, some of which you’ll find in this guide.
For starters, you shouldn’t buy a Windows laptop under $300. That price puts you solidly in cheap Chromebook territory. While these are still budget-level in terms of quality, they’re better in almost every way than their Windows counterparts of a similar price. A good example is the Asus Chromebook CX15.
If you want a Windows laptop that won’t give you instant buyer’s remorse, you’ll need to spend at least a few hundred more. Once you hit $500 or $600, there are some more solid Windows laptops available, such as the Acer Aspire Go 14, though even there, you’re making some significant compromises in performance and storage capacity. These days, Windows laptops really start to get better in the $600-plus range.
Should You Buy a Chromebook or a Cheap Windows Laptop?
The eternal question. If you’re looking for a laptop under $500, I highly recommend that you opt for a Chromebook. I know that won’t be a possibility for everyone, as some have certain applications that require a Windows laptop or MacBook. If you do aim to get a Chromebook, make sure all your connected accessories and other devices are compatible.
Chromebooks give you access to a full desktop Chrome browser, as well as Android apps. While that leaves some gaps for apps that some may need, you might be surprised by how much you can get done without the need to install any software. Most applications have web versions that are every bit as useful.
While Chromebooks are best known as junky student laptops, the recent “Chromebook Plus” designation has filled in the gap between dirt-cheap Chromebooks and $800 Windows laptops. You’ll find some great Chromebook Plus options in the $400 to $600 range that have better performance and displays, while also looking a bit more like a modern laptop. The Lenovo Flex 5i Chromebook Plus is a great example of this.