Nature’s resilience inspires an improved power grid
Natural ecosystems made up of plants, animals and microorganisms face constant challenges from natural hazards, like extreme weather or invasive species.
Despite these challenges, ecosystems have thrived for millions of years, showing a high level of resilience against hazards and disturbances. What if the mechanisms and patterns responsible for that resilience could be applied to the power grid?
Texas A&M University researchers have tested bio-inspired cyber-physical systems designed to strengthen the power grid against different types of cyber-attacks and to better understand their impacts.
Possible cyber threats to resource networks like the power grid include feeding false information to data systems and attempting to steal information, both of which can degrade a network’s performance.
“Ecosystems experience many of the same unexpected disturbances as human-made systems, like droughts and floods,” said Dr. Astrid Layton, an associate professor in the J. Mike Walker ’66 Department of Mechanical Engineering and head of the Bio-inspired SystemS Lab (BiSSL).
“While ecosystems may be damaged by these hazards, they have the unique ability to survive these disturbances without wasteful levels of redundancies, not only at the ecosystem level, but on a species level as well—which is why we’re interested in cyber-physical power systems from this ecological perspective.”
As their name suggests, cyber-physical power systems are made up of both cyber and physical elements, referred to as components. Cyber components—like firewalls and routers—handle digital information flows, while physical components—like buses and generators—handle tangible energy flows. Despite the prevalence of these systems, their complexity means that how disturbances move through and impact a cyber-physical power system is still not fully understood.
“It’s crucial for a system to not only survive the hard times, but to thrive during good times,” said Layton. “Using ecological models and the insight they give allows us to assess the cyber-physical interface, clarifying how the system can run more efficiently when there are no immediate threats while still understanding and minimizing damages when they do happen.”
The main goal of this project was to better understand the relationship between the cyber components and physical components that make up cyber-physical power systems. A stronger understanding of the system’s interface allows researchers to predict potential impacts of cyber-attacks on the physical components and physical attacks on the cyber components, informing policymakers and grid operators on how best to prepare for and operate during these threats.
Layton, an expert in bio-inspired systems design and analysis techniques, collaborated with Dr. Katherine Davis, an associate professor of electrical and computer engineering, who brings extensive power system knowledge. Layton and Davis have worked as collaborators since a 2018 Texas A&M Energy Institute seed grant.
Their combined knowledge of mechanical and electrical engineering makes them a great team for understanding and designing cyber-physical power systems for resilience.
Layton and Davis were also joined by their senior Ph.D. students Emily Payne and Shining Sun for the Sandia study. Payne, a mechanical engineering student, started working with Layton in the Bio-inspired SystemS Lab as an undergraduate architectural engineering student in 2022.
Sun, an electrical and computer engineering student, has worked with Davis since 2023. Both Payne and Sun have published several papers relating to this work and have presented their findings at conferences, each winning awards for their research.
“Part of the success of this project has been these engineering graduate students, Emily and Shining, who have excelled at the interdisciplinary aspects of the work in addition to the highly technical focus of the problem,” Layton said.
“My research in particular asks engineering students to read ecology papers, which are essentially a different language from engineering papers, and apply this to their research.”
This approach lets the lab view engineering problems from a fresh perspective.
The Sandia National Laboratories project ended in September 2025, but the researchers are continuing to collaborate on their bio-inspired power systems.
Layton and Davis are set to participate in a collaborative study focusing on modeling the impacts of weather disturbances on the power grid.
Citation:
Nature’s resilience inspires an improved power grid (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-nature-resilience-power-grid.html
‘Vibe coding’ named word of the year by Collins dictionary
“Vibe coding,” a word that essentially means using artificial intelligence (AI) to tell a machine what you want instead of coding it yourself, was on Thursday named the Collins Word of the Year 2025.
Coined by OpenAI co-founder Andrej Karpathy, the word refers to “an emerging software development practice that turns natural language into computer code using AI,” according to Collins Dictionary.
“It’s programming by vibes, not variables,” said Collins.
“While tech experts debate whether it’s revolutionary or reckless, the term has resonated far beyond Silicon Valley, speaking to a broader cultural shift toward AI-assisted everything in everyday life,” it added.
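To make the definition concrete, here is a hypothetical vibe-coding exchange. Both the plain-English request and the Python an assistant might return are invented for illustration; neither comes from Collins or Karpathy.

```python
# Hypothetical illustration of "vibe coding": the human states intent in
# natural language, and an AI coding assistant turns it into working code.

# What the user might type into a chat-style coding assistant:
prompt = "Write a Python function that returns the n most common words in a text."

# The kind of code the assistant might hand back:
from collections import Counter

def most_common_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent words in text as (word, count) pairs."""
    words = text.lower().split()
    return Counter(words).most_common(n)

print(most_common_words("to be or not to be", 2))  # [('to', 2), ('be', 2)]
```

The point of the term is that the person never writes the function directly; whether that habit is “revolutionary or reckless” is precisely the debate Collins alludes to.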
Lexicographers at Collins Dictionary monitor the 24 billion-word Collins Corpus, which draws from a range of media sources including social media, to create the annual list of new and notable words that reflect our ever-evolving language.
The 2025 shortlist highlights a range of words that have emerged in the past year to pithily reflect the changing world around us.
“Broligarchy” made the list in a year that saw tech billionaire Elon Musk briefly at the heart of US President Donald Trump’s administration and Amazon founder Jeff Bezos cozying up to the president.
The word is defined as a small clique of very wealthy men who exert political influence.
‘Coolcation’
New words linked to work and technology include “clanker,” a derogatory term for a computer, robot or source of artificial intelligence, and “HENRY,” an acronym for high earner, not rich yet.
Another is “taskmasking,” the act of giving a false impression that one is being productive in the workplace, while “micro-retirement” refers to a break taken between periods of employment to pursue personal interests.
In the health and behavioral sphere, “biohacking” also gets a spot, meaning the activity of altering the natural processes of one’s body in an attempt to improve one’s health and longevity.
Also listed is “aura farming,” the deliberate cultivation of a distinctive and charismatic persona and the verb “to glaze,” to praise or flatter someone excessively or undeservedly.
Although the list is dominated by words linked to technology and employment, one from the world of leisure bags a spot—”coolcation,” meaning a holiday in a place with a cool climate.
Last year’s word of the year was “Brat,” the name of UK singer Charli XCX’s hit sixth album, signifying a “confident, independent, and hedonistic attitude” rather than simply a term for a badly-behaved child.
© 2025 AFP
Citation:
‘Vibe coding’ named word of the year by Collins dictionary (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-vibe-coding-word-year-collins.html
I’ve Tested a Lot of Bad, Cheap Laptops. These Ones Are Actually Good
Other Budget Laptops to Consider
The Acer Chromebook Plus Spin 714 for $750: The Acer Chromebook Plus Spin 714 (9/10, WIRED Recommends) checks a lot of boxes. It has a surprisingly premium feel for such an affordable machine, and the keyboard and trackpad are excellent for those of us who type all day long. It also has one of the best displays I’ve seen on a Chromebook, with fantastic colors that pop off the glossy touch display. It’s just a bit too expensive compared to something like the new Lenovo Chromebook Plus 14.
Acer Swift Go 14 for $730: The Acer Swift Go 14 (7/10, WIRED Recommends) has a chintzy build quality, a stiff touchpad, and lackluster keyboard backlighting, but it’s hard to beat the performance you get at this price. There’s also an array of ports that make it very versatile, including a microSD card slot. The Intel Core Ultra 7 155H chip with 16 GB of RAM makes for a surprisingly powerful punch when it comes to productivity work, and our tester noted decent results in AI tasks as well. We averaged 11 hours in our battery test (with a full-brightness YouTube video on loop), which is respectable.
Asus Chromebook Plus CX34 for $260: If you want to stand out from the crowd a bit and don’t need Windows, the Asus Chromebook Plus CX34 (7/10, WIRED Recommends) is the best-looking Chromebook. When I got my hands on the CX34, I was impressed by its beautiful white design that stands out in a sea of gray slabs. It’s not left wanting for power, either, with the Core i5 CPU inside offering plenty of performance to easily handle multiple tabs and app juggling.
What Are Important Specs in a Cheap Laptop?
Read our How to Choose the Right Laptop guide if you want all the details on specs and what to look for. In short, your budget is the most important factor, as it determines what you can expect out of the device you’re purchasing. But you should consider display size, chassis thickness, CPU, memory, storage, and port selection. While appropriate specs can vary wildly when you’re considering laptops ranging from $200 to $800, there are a few hard lines I don’t recommend crossing.
For example, don’t buy a laptop if it doesn’t have a display resolution of at least 1920 x 1080. In 2025, there’s just no excuse for anything less than that. You should also never buy a laptop without at least 8 GB of RAM and 128 GB of storage. Even in Chromebooks, these specs are becoming the new standard. You’re selling yourself short by getting anything less. Another rule is to avoid a Windows laptop with an Intel Celeron processor—leave those for Chromebooks only.
Specs are only half the battle though. Based on our years of testing, laptop manufacturers tend to make compromises in display quality and touchpad quality. You can’t tell from the photos or listed specs online, but once you get the laptop in your hands, you may notice that the colors of the screen look a bit off or that the touchpad feels choppy to use. It’s nearly impossible to find laptops under $500 that don’t compromise in these areas, but this is where our reviewers and testers can help.
How Much RAM Do You Need in a Cheap Laptop?
The simple answer? You need at least 8 GB of RAM. These days, there are even some Windows laptops at around $700 or $800 that come with 16 GB of RAM standard, as part of the Copilot+ PC marketing push. That’s a great value, and ensures you’ll get the best performance out of your laptop, especially when running heavier applications or multitasking. Either way, it’s important to factor in the price of the RAM, because manufacturers will often charge $100 or even $200 to double the memory.
On Chromebooks, there are some rare occasions where 4 GB of RAM is acceptable, but only on the very cheapest models that are under $200. Even budget Chromebooks like the Asus Chromebook CX15 now start with 8 GB of RAM.
Are There Any Good Laptops Under $300?
Yes, but you need to be careful. Don’t just go buy a random laptop on Amazon under $300, as you’ll likely end up with an outdated, slow device that you’ll regret purchasing. You might be tempted by the rock-bottom no-name listings you’ll come across, but trust me—there are better options, some of which you’ll find in this guide.
For starters, you shouldn’t buy a Windows laptop under $300. That price puts you solidly in cheap Chromebook territory. While these are still budget-level in terms of quality, they’re better in almost every way than their Windows counterparts of a similar price. A good example is the Asus Chromebook CX15.
If you want a Windows laptop that won’t give you instant buyer’s remorse, you’ll need to spend at least a few hundred more. Once you hit $500 or $600, there are some more solid Windows laptops available, such as the Acer Aspire Go 14, though even there, you’re making some significant compromises in performance and storage capacity. These days, Windows laptops really start to get better in the $600-plus range.
Should You Buy a Chromebook or a Cheap Windows Laptop?
The eternal question. If you’re looking for a laptop under $500, I highly recommend that you opt for a Chromebook. I know that won’t be a possibility for everyone, as some have certain applications that require a Windows laptop or MacBook. If you do aim to get a Chromebook, make sure all your connected accessories and other devices are compatible.
Chromebooks give you access to a full desktop Chrome browser, as well as Android apps. While that leaves some gaps for apps that some may need, you might be surprised by how much you can get done without the need to install any software. Most applications have web versions that are every bit as useful.
While Chromebooks are best known as junky student laptops, the recent “Chromebook Plus” designation has filled the gap between dirt-cheap Chromebooks and $800 Windows laptops. You’ll find some great Chromebook Plus options in the $400 to $600 range that have better performance and displays, while also looking a bit more like a modern laptop. The Lenovo Flex 5i Chromebook Plus is a great example of this. You can read more about the differences between Windows laptops and Chromebooks in our separate guide.
Why fears of a trillion-dollar AI bubble are growing
For almost as long as the artificial intelligence boom has been in full swing, there have been warnings of a speculative bubble that could rival the dot-com craze of the late 1990s that ended in a spectacular crash and a wave of bankruptcies.
Tech firms are spending hundreds of billions of dollars on advanced chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines.
The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised eyebrows on Wall Street.
Even some of AI’s biggest cheerleaders acknowledge the market is frothy, while still professing their belief in the technology’s long-term potential. AI, they say, is poised to reshape multiple industries, cure diseases and generally accelerate human progress.
Yet never before has so much money been spent so rapidly on a technology that remains somewhat unproven as a profit-making business model. Tech industry executives who privately doubt the most effusive assessments of AI’s revolutionary potential—or at least struggle to see how to monetize it—may feel they have little choice but to keep pace with their rivals’ investments or risk being out-scaled and sidelined in the future AI marketplace.
Sharp falls in global technology stocks in early November underscored investors’ growing unease over the sector’s sky-high valuations, with Wall Street chief executives warning of an overdue market correction.
What are the warning signs for AI?
When Sam Altman, the chief executive of ChatGPT maker OpenAI, announced a $500 billion AI infrastructure plan known as Stargate alongside other executives at the White House in January, the price tag triggered some disbelief. Since then, other tech rivals have ramped up spending, including Meta’s Mark Zuckerberg, who has pledged to invest hundreds of billions in data centers. Not to be outdone, Altman has since said he expects OpenAI to spend “trillions” on AI infrastructure.
To finance those projects, OpenAI is entering into new territory. In September, chipmaker Nvidia Corp. announced an agreement to invest up to $100 billion in OpenAI’s data center buildout, a deal that some analysts say raises questions about whether the chipmaker is trying to prop up its customers so that they keep spending on its own products.
The concerns have followed Nvidia, to varying degrees, for much of the boom. The dominant maker of AI accelerator chips has backed dozens of companies in recent years, including AI model makers and cloud computing providers. Some of them then use that capital to buy Nvidia’s expensive semiconductors. The OpenAI deal was far larger in scale.
OpenAI has also indicated it could pursue debt financing, rather than leaning on partners such as Microsoft Corp. and Oracle Corp. The difference is that those companies have rock-solid, established businesses that have been profitable for many years. OpenAI expects to burn through $115 billion of cash through 2029, The Information has reported.
Other large tech companies are also relying increasingly on debt to support their unprecedented spending. Meta, for example, turned to lenders to secure $26 billion in financing for a planned data center complex in Louisiana that it says will eventually approach the size of Manhattan. JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are also leading a loan of more than $22 billion to support Vantage Data Centers’ plan to build a massive data-center campus, Bloomberg News has reported.
So how about the payback?
By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand, Bain & Co. said in a report released in September. Yet their revenue is likely to fall $800 billion short of that mark, Bain predicted.
“The numbers that are being thrown around are so extreme that it’s really, really hard to understand them,” said David Einhorn, a prominent hedge fund manager and founder of Greenlight Capital. “I’m sure it’s not zero, but there’s a reasonable chance that a tremendous amount of capital destruction is going to come through this cycle.”
In a sign of the times, there’s also a growing number of less proven firms trying to capitalize on the data center gold rush. Nebius, an Amsterdam-based cloud provider that split off from Russian internet giant Yandex in 2024, recently inked an infrastructure deal with Microsoft worth up to $19.4 billion. And Nscale, a little-known British data center company, is working with Nvidia, OpenAI and Microsoft on build-outs in Europe. Like some other AI infrastructure providers, Nscale previously focused on another frothy sector: cryptocurrency mining.
Are there concerns about the technology itself?
The data center spending spree is overshadowed by persistent skepticism about the payoff from AI technology. In August, investors were rattled after researchers at the Massachusetts Institute of Technology found that 95% of organizations saw zero return on their investment in AI initiatives.
More recently, researchers at Harvard and Stanford offered a possible explanation for why. Employees are using AI to create “workslop,” which the researchers define as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
The promise of AI has long been that it would help streamline tasks and boost productivity, making it an invaluable asset for workers and one that corporations would pay top dollar for. Instead, the Harvard and Stanford researchers found the prevalence of workslop could cost larger organizations millions of dollars a year in lost productivity.
AI developers have also been confronting a different challenge. OpenAI, Claude chatbot developer Anthropic and others have for years bet on the so-called scaling laws—the idea that more computing power, data and larger models will inevitably pave the way for greater leaps in the power of AI.
Eventually, they say, these advances will lead to artificial general intelligence, a hypothetical form of the technology so sophisticated that it matches or exceeds humans in most tasks.
Over the past year, however, these developers have experienced diminishing returns from their costly efforts to build more advanced AI. Some have also struggled to match their own hype.
After months of touting GPT-5 as a significant leap, OpenAI’s release of its latest AI model in August was met with mixed reviews. In remarks around the launch, Altman conceded that “we’re still missing something quite important” to reach AGI.
Those concerns are compounded by growing competition from China, where companies are flooding the market with competitive, low-cost AI models. While U.S. firms are generally still viewed as ahead in the race, the Chinese alternatives risk undercutting Silicon Valley on price in certain markets, making it harder to recoup the significant investment in AI infrastructure.
There’s also the risk that the AI industry’s vast data center buildout, entailing a huge increase in electricity consumption, will be held back by the limitations of national power networks.
What does the AI industry say in response?
Sam Altman, the face of the current AI boom, has repeatedly acknowledged the risk of a bubble in recent months while maintaining his optimism for the technology. “Are we in a phase where investors as a whole are overexcited about AI? In my opinion, yes,” he said in August. “Is AI the most important thing to happen in a very long time? My opinion is also yes.”
Altman and other tech leaders continue to express confidence in the roadmap toward AGI, with some suggesting it could be closer than skeptics think.
“Developing superintelligence is now in sight,” Zuckerberg wrote in July, referencing an even more powerful form of AI that his company is aiming for. In the near term, some AI developers also say they need to drastically ramp up computing capacity to support the rapid adoption of their services.
Altman, in particular, has stressed repeatedly that OpenAI remains constrained in computing resources as hundreds of millions of people around the world use its services to converse with ChatGPT, write code and generate images and videos.
OpenAI and Anthropic have also released their own research and evaluations that indicate AI systems are having a meaningful impact on work tasks, in contrast to the more damning reports from outside academic institutions. An Anthropic report released in September found that roughly three quarters of companies are using Claude to automate work.
The same month, OpenAI released a new evaluation system called GDPval that measures the performance of AI models across dozens of occupations.
“We found that today’s best frontier models are already approaching the quality of work produced by industry experts,” OpenAI said in a blog post. “Especially on the subset of tasks where models are particularly strong, we expect that giving a task to a model before trying it with a human would save time and money.”
So how much will customers eventually be willing to pay for these services? The hope among developers is that, as AI models improve and field more complex tasks on users’ behalf, they will be able to convince businesses and individuals to spend far more to access the technology.
“I want the door open to everything,” OpenAI Chief Financial Officer Sarah Friar said in late 2024, when asked about a report that the company has discussed a $2,000 monthly subscription for its AI products. “If it’s helping me move about the world with literally a Ph.D.-level assistant for anything that I’m doing, there are certainly cases where that would make all the sense in the world.”
In September, Zuckerberg said an AI bubble is “quite possible,” but stressed that his bigger concern is not spending enough to meet the opportunity. “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” he said in a podcast interview. “But what I’d say is I actually think the risk is higher on the other side.”
What makes a market bubble?
Bubbles are economic cycles defined by a swift increase in market values to levels that aren’t supported by the underlying fundamentals. They’re usually followed by a sharp selloff—the so-called pop.
A bubble often begins when investors get swept up in a speculative frenzy—over a new technology or other market opportunity—and pile in for fear of missing out on further gains. American economist Hyman Minsky identified five stages of a market bubble: displacement, boom, euphoria, profit-taking and panic.
Bubbles are sometimes difficult to spot because market prices can become dislocated from real-world values for many reasons, and a sharp price drop isn’t always inevitable. And, because a crash is part of a bubble cycle, they can be hard to pinpoint until after the fact.
Generally, bubbles pop when investors realize that the lofty expectations they had were too high. This usually follows a period of over-exuberance that tips into mania, when everyone is buying into the trend at the very top.
What comes next is usually a slow, prolonged selloff where company earnings start to suffer, or a singular event that changes the long-term view, sending investors dashing for the exits.
There was some fear that an AI bubble had already popped in late January, when China’s DeepSeek upended the market with the release of a competitive AI model purportedly built at a fraction of the amount that top U.S. developers spend. DeepSeek’s viral success triggered a trillion-dollar selloff of technology shares. Nvidia, a bellwether AI stock, slumped 17% in one day.
The DeepSeek episode underscored the risks of investing heavily in AI. But Silicon Valley remained largely undeterred. In the months that followed, tech companies redoubled their costly AI spending plans, and investors resumed cheering on these bets. Nvidia shares charged back from an April low to fresh records. It was worth more than $4 trillion by the end of September, making it the most valuable company in the world.
So is this 1999 all over again?
As with today’s AI boom, the companies at the center of the dot-com frenzy drew in vast amounts of investor capital, often using questionable metrics such as website traffic rather than their actual ability to turn a profit. There were many flawed business models and exaggerated revenue projections.
Telecommunication companies raced to build fiber-optic networks only to find the demand wasn’t there to pay for them. When it all crashed in 2001, many companies were liquidated, and others were absorbed by healthier rivals at knock-down prices.
Echoes of the dot-com era can be found in AI’s massive infrastructure build-out, sky-high valuations and showy displays of wealth. Venture capital investors have been courting AI startups with private jets, box seats and big checks.
Many AI startups tout their recurring revenue as a key metric for growth, but there are doubts as to how sustainable or predictable those projections are, particularly for younger businesses. Some AI firms are completing multiple mammoth fundraisings in a single year. Not all will necessarily flourish.
“I think there’s a lot of parallels to the internet bubble,” said Bret Taylor, OpenAI’s chairman and the CEO of Sierra, an AI startup valued at $10 billion. Like the dot-com era, a number of high-flying companies will almost certainly go bust. But in Taylor’s telling, there will also be large businesses that emerge and thrive over the long term, just as happened with Amazon.com Inc. and Alphabet Inc.’s Google in the late 90s.
“It is both true that AI will transform the economy, and I think it will, like the internet, create huge amounts of economic value in the future,” Taylor said. “I think we’re also in a bubble, and a lot of people will lose a lot of money.”
Amazon Chairman Jeff Bezos said the spending on AI resembles an “industrial bubble” akin to the biotech bubble of the 1990s, but he still expects it to improve the productivity of “every company in the world.”
There are also some key differences to the dot-com boom that market watchers point out, the first being the broad health and stability of the biggest businesses that are at the forefront of the trend. Most of the “Magnificent Seven” group of U.S. tech companies are long-established giants that make up much of the earnings growth in the S&P 500 Index. These firms have huge revenue streams and are sitting on large stockpiles of cash.
Despite the skepticism, AI adoption has also proceeded at a rapid clip. OpenAI’s ChatGPT has about 700 million weekly users, making it one of the fastest growing consumer products in history. Top AI developers, including OpenAI and Anthropic, have also seen remarkably strong sales growth. OpenAI previously forecast revenue would more than triple in 2025 to $12.7 billion.
While the company does not expect to be cash-flow positive until near the end of this decade, a recent deal to help employees sell shares gave it an implied valuation of $500 billion—making it the world’s most valuable company never to have turned a profit.
© 2025 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
Citation:
Why fears of a trillion-dollar AI bubble are growing (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-trillion-dollar-ai.html