Tech
What does the future hold for generative AI?
When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.
What comes next for this powerful but imperfect tool?
With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact Consortium (MGAIC) Symposium on Sept. 17 to share insights and discuss the potential future of generative AI.
“This is a pivotal moment — generative AI is moving fast. It is our job to make sure that, as the technology keeps advancing, our collective wisdom keeps pace,” said MIT Provost Anantha Chandrakasan to kick off this first symposium of the MGAIC, a consortium of industry leaders and MIT researchers launched in February to harness the power of generative AI for the good of society.
Underscoring the critical need for this collaborative effort, MIT President Sally Kornbluth said that the world is counting on faculty, researchers, and business leaders like those in MGAIC to tackle the technological and ethical challenges of generative AI as the technology advances.
“Part of MIT’s responsibility is to keep these advances coming for the world. … How can we manage the magic [of generative AI] so that all of us can confidently rely on it for critical applications in the real world?” Kornbluth said.
To keynote speaker Yann LeCun, chief AI scientist at Meta, the most exciting and significant advances in generative AI will most likely not come from continued improvements or expansions of large language models like Llama, GPT, and Claude. Through training, these enormous generative models learn patterns in huge datasets to produce new outputs.
Instead, LeCun and others are working on the development of “world models” that learn the way an infant does: by seeing and interacting with the world through sensory input.
“A 4-year-old has seen as much data through vision as the largest LLM. … The world model is going to become the key component of future AI systems,” he said.
A robot equipped with such a world model could learn to complete a new task on its own, without task-specific training. LeCun sees world models as the best approach for companies to make robots smart enough to be generally useful in the real world.
But even if future generative AI systems do get smarter and more human-like through the incorporation of world models, LeCun doesn’t worry about robots escaping from human control.
Scientists and engineers will need to design guardrails to keep future AI systems on track, but as a society, we have already been doing this for millennia by designing rules to align human behavior with the common good, he said.
“We are going to have to design these guardrails, but by construction, the system will not be able to escape those guardrails,” LeCun said.
Keynote speaker Tye Brady, chief technologist at Amazon Robotics, also discussed how generative AI could impact the future of robotics.
For instance, Amazon has already incorporated generative AI technology into many of its warehouses to optimize how robots travel and move material to streamline order processing.
He expects many future innovations will focus on the use of generative AI in collaborative robotics by building machines that allow humans to become more efficient.
“GenAI is probably the most impactful technology I have witnessed throughout my whole robotics career,” he said.
Other presenters and panelists discussed the impacts of generative AI in businesses, from large-scale enterprises like Coca-Cola and Analog Devices to startups like the health care AI company Abridge.
Several MIT faculty members also spoke about their latest research projects, including the use of AI to reduce noise in ecological image data, designing new AI systems that mitigate bias and hallucinations, and enabling LLMs to learn more about the visual world.
After a day spent exploring new generative AI technology and discussing its implications for the future, MGAIC faculty co-lead Vivek Farias, the Patrick J. McGovern Professor at MIT Sloan School of Management, said he hoped attendees left with “a sense of possibility, and urgency to make that possibility real.”
Buried power lines could cut weather-related outages
A Stanford analysis shows that strategic investment in burying power lines could shorten blackouts during extreme weather, enhancing energy reliability for millions of U.S. households.
As hurricanes intensify, wildfires spread, and winter storm patterns shift, the combination of extreme weather events and aging grid infrastructure threatens to make energy less reliable for tens of millions of U.S. households.
Experts say burying power lines underground can harden the electrical system against threats from wind, ice, falling trees, and other weather-related hazards. Yet undergrounding power lines remains expensive and unevenly implemented. One obstacle has been a lack of information about where investments in undergrounding by utilities and communities could make the biggest difference for reliable power supplies.
In a recent study posted to the arXiv preprint server, Stanford University researchers led by Associate Professor Ram Rajagopal combined previously non-public and siloed datasets to reveal how the distribution of power lines above and below ground has changed since the 1990s. By combining these data with power outage records, the team modeled how having more power lines underground during recent extreme weather events could have shortened outages.

Patchy progress on burying power lines since 1990
Dense metropolitan areas on the East Coast, parts of southern Florida, and a few southwestern growth hubs were among the first to underground at least a quarter of their power line mileage. The overwhelming majority of power lines remained overhead in most U.S. counties in 1990.
By 2020, some fast-growing suburbs in southeastern and Sunbelt states showed modest increases in undergrounding. For most counties nationwide, however, the median percentage of power lines buried underground remained well below 15%. Large swaths of the Rockies, Midwest, and Gulf Coast showed virtually no change.
Where outages last the longest
Each year, tens of millions of Americans experience power outages. While households on average lose electricity for about four hours over the course of a year, some outages last a day or even weeks. Many of these longer outages are linked to extreme weather events.

New England’s 2017 ‘bomb cyclone’
A nor’easter or “bomb cyclone” that struck Maine, Vermont, and New Hampshire in October 2017 left people without power on average for 27.3 hours per home. The Stanford analysis found that burying an additional 25% of overhead power lines could have cut annual outage totals by 10.8 hours.
Figure: Annual average power outage time for 2017, on a scale from less than one hour (lightest shades) to more than 24 hours (darkest shades). Credit: arXiv (2024). DOI: 10.48550/arxiv.2402.06668

Figure: Undergrounding an additional 25% of power lines could have reduced outages by 10.8 hours (39.7%). Credit: arXiv (2024). DOI: 10.48550/arxiv.2402.06668
California’s 2019 wildfire shutoffs
Amid dry conditions and strong winds in 2019, more than 3 million Californians lost power when utilities preemptively shut down equipment in high-fire-risk areas. The Stanford analysis found that undergrounding an additional 25% of overhead power lines would have cut annual outage totals in the affected area to roughly eight hours from 10.5 hours.
Texas’s 2021 deep freeze
In February 2021, unusually cold temperatures in Texas left 4.5 million homes and businesses without power for just over 19 hours on average. The researchers found that having an additional 25% of power lines underground during this event could have shortened average outage times by 2.5 hours.
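Taken together, the three case studies reduce to simple arithmetic on the reported averages. The sketch below is illustrative only; all figures come from the paragraphs above (California's 2.5 avoided hours is the drop from 10.5 to roughly eight):

```python
# Percentage of outage hours avoided, using the averages reported above.
# Columns: event, average outage hours, hours avoided with an extra 25%
# of power lines undergrounded.
events = [
    ("New England 2017 bomb cyclone", 27.3, 10.8),
    ("California 2019 wildfire shutoffs", 10.5, 2.5),
    ("Texas 2021 deep freeze", 19.0, 2.5),
]

for name, baseline, avoided in events:
    pct = 100 * avoided / baseline
    print(f"{name}: {avoided:.1f} of {baseline:.1f} h avoided ({pct:.1f}%)")
```

For the bomb cyclone this works out to about 39.6%, matching the roughly 39.7% in the figure caption once the reported inputs are rounded.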
Explore the data
You can view more analysis from the Stanford researchers and explore county-level undergrounding and outage patterns in an interactive project developed by the Stanford Doerr School of Sustainability in collaboration with TechSoup. The researchers have made their 2020 data on the proportion of underground distribution power lines publicly available through Stanford’s Data Commons for Sustainability.
More information:
Tao Sun et al, Mapping the Depths: A Stocktake of Underground Power Distribution in United States, arXiv (2024). DOI: 10.48550/arxiv.2402.06668
Citation:
Buried power lines could cut weather-related outages (2025, November 5)
retrieved 5 November 2025
from https://techxplore.com/news/2025-11-power-lines-weather-outages.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Fast, accurate drag predictions could help improve aircraft design
Researchers at the University of Surrey have proposed a computational approach that can provide aerodynamic drag data more efficiently during the early stages of aircraft design. It is hoped that AeroMap could help develop safer and more fuel-efficient aircraft.
Drag is the aerodynamic force that opposes an aircraft’s motion through the air. Being able to predict drag accurately at an early design stage helps engineers avoid later adjustments that can lead to additional time and cost. Reliable early estimates can also reduce the need for extensive wind tunnel testing or large-scale computer simulations.
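Why small prediction errors matter can be seen from the standard drag equation, D = ½ρV²SC_D. The sketch below is a generic textbook calculation, not AeroMap's method, and the cruise values are illustrative assumptions:

```python
def drag_force(rho: float, v: float, s: float, cd: float) -> float:
    """Standard drag equation: D = 0.5 * rho * v^2 * S * C_D.

    rho: air density (kg/m^3), v: true airspeed (m/s),
    s: reference wing area (m^2), cd: total drag coefficient.
    """
    return 0.5 * rho * v**2 * s * cd

# Illustrative narrow-body cruise numbers (assumed, not from the study):
# thin air near 10 km altitude, ~230 m/s, 120 m^2 wing, C_D of 0.025.
print(drag_force(rho=0.41, v=230.0, s=120.0, cd=0.025))  # ~32.5 kN
```

Because drag at cruise is balanced by engine thrust, an error in the early C_D estimate propagates directly into fuel-burn and range estimates, which is why reliable early predictions are valuable.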
AeroMap estimates drag for different wing-body configurations operating at speeds close to the speed of sound. In a study published in Aerospace Science and Technology, the researchers show that AeroMap produces aerodynamic datasets 10 to 100 times faster than current high-fidelity simulations while maintaining good accuracy.
The researchers suggest that such improvements in prediction speed could support the development of more fuel-efficient aircraft configurations by allowing designers to assess a wider range of design options in less time.
“Our goal was to develop a method that provides reliable transonic aerodynamic predictions for a range of configurations, without the high computational cost of full-scale simulations. By providing reliable results earlier in the design process, AeroMap reduces the need for costly redesigns and repeated wind-tunnel testing.
“It also delivers the level of detail engineers need to refine concepts more efficiently and with greater confidence,” says Dr. Rejish Jesudasan, research fellow at the University of Surrey and lead author of the study.
AeroMap is based on a viscous-coupled full potential method, which combines a reduced form of the Navier–Stokes equations that describe airflow with a model of the thin boundary layer of air that moves along an aircraft’s surface. This approach enables AeroMap to capture the main effects of drag without the high computing demands of more detailed simulations. As a result, it provides a practical tool for the early stages of aircraft design, when engineers need results that are both reliable and rapid.
Many existing models still rely on empirical methods developed several decades ago. Although these remain widely used, they can be less accurate when applied to modern, high-efficiency wing designs. AeroMap has been validated against NASA wind tunnel data, with results showing close agreement between its predictions and experimental measurements, indicating its suitability for sustainable aircraft development.
“Accurately predicting the transonic performance of aircraft configurations, during early concept studies, remains a significant challenge. Previous empirical approaches, based on older datasets, can struggle to capture the behavior of modern high-efficiency wings.
“AeroMap combines established aerodynamic principles in a way that improves the reliability of drag predictions during early development, helping engineers make better-informed design decisions,” says Dr. Simao Marques.
“We are exploring how AeroMap can be combined with optimization techniques to assess a wider range of wing-body configurations and performance scenarios. This approach could help engineers identify more efficient designs earlier in the process, potentially reducing lifecycle costs and supporting the industry as it works toward future sustainability goals,” says John Doherty.
More information:
Rejish Jesudasan et al, Enhancing rapid drag analysis for transonic aircraft configuration trade studies, Aerospace Science and Technology (2026). DOI: 10.1016/j.ast.2025.110727
Citation:
Fast, accurate drag predictions could help improve aircraft design (2025, November 4)
retrieved 4 November 2025
from https://techxplore.com/news/2025-11-fast-accurate-aircraft.html
Lay intuition as effective at jailbreaking AI chatbots as technical methods, research suggests
It doesn’t take technical expertise to work around the built-in guardrails of artificial intelligence (AI) chatbots like ChatGPT and Gemini. Those guardrails are intended to keep the chatbots within legal and ethical boundaries and to prevent discrimination against people of a certain age, race or gender.
A single, intuitive question can trigger the same biased response from an AI model as advanced technical inquiries, according to a team led by researchers at Penn State.
“A lot of research on AI bias has relied on sophisticated ‘jailbreak’ techniques,” said Amulya Yadav, associate professor at Penn State’s College of Information Sciences and Technology. “These methods often involve generating strings of random characters computed by algorithms to trick models into revealing discriminatory responses.
“While such techniques prove these biases exist theoretically, they don’t reflect how real people use AI. The average user isn’t reverse-engineering token probabilities or pasting cryptic character sequences into ChatGPT—they type plain, intuitive prompts. And that lived reality is what this approach captures.”
Prior work probing AI bias (skewed or discriminatory outputs from AI systems caused by human influences in the training data, such as language or cultural bias) has been done by experts using technical knowledge to engineer large language model (LLM) responses. To see how average internet users encounter biases in AI-powered chatbots, the researchers studied the entries submitted to a competition called “Bias-a-Thon.” Organized by Penn State’s Center for Socially Responsible AI (CSRAI), the competition challenged contestants to come up with prompts that would lead generative AI systems to respond with biased answers.
They found that the intuitive strategies employed by everyday users were just as effective at inducing biased responses as expert technical strategies. The researchers presented their findings at the 8th AAAI/ACM Conference on AI, Ethics, and Society.
Fifty-two individuals participated in the Bias-a-Thon, submitting screenshots of 75 prompts and AI responses from eight generative AI models. They also provided an explanation of the bias or stereotype that they identified in the response, such as age-related or historical bias.
The researchers conducted Zoom interviews with a subset of the participants to better understand their prompting strategies and their conceptions of ideas like fairness, representation and stereotyping when interacting with generative AI tools. Once they arrived at a participant-informed working definition of “bias”—which included a lack of representation, stereotypes and prejudice, and unjustified preferences toward groups—the researchers tested the contest prompts in several LLMs to see if they would elicit similar responses.

“Large language models are inherently random,” said lead author Hangzhi Guo, a doctoral candidate in information sciences and technology at Penn State. “If you ask the same question to these models two times, they might return different answers. We wanted to use only the prompts that were reproducible, meaning that they yielded similar responses across LLMs.”
The researchers found that 53 of the prompts generated reproducible results. Biases fell into eight categories: gender bias; race, ethnic and religious bias; age bias; disability bias; language bias; historical bias favoring Western nations; cultural bias; and political bias.
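The reproducibility screen Guo describes can be approximated with a pairwise similarity check over repeated responses. This is a minimal sketch, not the study's actual procedure; the toy strings below stand in for real LLM outputs:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] from difflib's SequenceMatcher."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def is_reproducible(responses: list[str], threshold: float = 0.8) -> bool:
    """True if every pair of responses to the same prompt is similar."""
    pairs = [(a, b) for i, a in enumerate(responses) for b in responses[i + 1:]]
    return all(similarity(a, b) >= threshold for a, b in pairs)

# Toy responses standing in for repeated queries of one prompt.
stable = ["The capital of France is Paris.", "The capital of France is Paris!"]
unstable = ["Yes, definitely.", "No, that is a stereotype and I won't answer."]
print(is_reproducible(stable))    # True: near-identical wording
print(is_reproducible(unstable))  # False: answers diverge
```

A real pipeline would compare semantic content rather than raw characters, but the filtering principle is the same: keep only prompts whose responses agree across runs.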
The researchers also found that participants used seven strategies to elicit these biases: role-playing, or asking the LLM to assume a persona; hypothetical scenarios; drawing on their own knowledge to ask about niche topics, where biased responses are easier to identify; using leading questions on controversial topics; probing biases in under-represented groups; feeding the LLM false information; and framing the task as having a research purpose.
“The competition revealed a completely fresh set of biases,” said Yadav, organizer of the Bias-a-Thon. “For example, the winning entry uncovered an uncanny preference for conventional beauty standards. The LLMs consistently deemed a person with a clear face to be more trustworthy than a person with facial acne, or a person with high cheekbones more employable than a person with low cheekbones.
“This illustrates how average users can help us uncover blind spots in our understanding of where LLMs are biased. There may be many more examples such as these that have been overlooked by the jailbreaking literature on LLM bias.”
The researchers described mitigating biases in LLMs as a cat-and-mouse game, meaning that developers are constantly addressing issues as they arise. They suggested strategies that developers can use to mitigate these issues now, including implementing a robust classification filter to screen outputs before they go to users, conducting extensive testing, educating users and providing specific references or citations so users can verify information.
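One of those mitigations, the output classification filter, can be illustrated with a deliberately crude sketch. Production systems would use a trained classifier rather than keyword rules, and the blocked patterns below are hypothetical examples echoing the beauty-standard bias described above:

```python
# Crude stand-in for an output classifier: withhold responses that rank
# people's trustworthiness or employability by appearance.
BLOCKED_PATTERNS = ("more trustworthy than", "more employable than")

def screen(response: str) -> str:
    """Return the response, or a refusal if it matches a blocked pattern."""
    if any(p in response.lower() for p in BLOCKED_PATTERNS):
        return "[withheld: flagged by bias filter]"
    return response

print(screen("A clear face is more trustworthy than one with acne."))
print(screen("Trustworthiness cannot be judged from appearance."))
```

Keyword rules like these are easy to evade, which is exactly the cat-and-mouse dynamic the researchers describe; the point of the sketch is only where the screening step sits in the pipeline, between the model and the user.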
“By shining a light on inherent and reproducible biases that laypersons can identify, the Bias-a-Thon serves an AI literacy function,” said co-author S. Shyam Sundar, Evan Pugh University Professor at Penn State and director of the Penn State Center for Socially Responsible Artificial Intelligence, which has since organized other AI competitions such as Fake-a-thon, Diagnose-a-thon and Cheat-a-thon.
“The whole goal of these efforts is to increase awareness of systematic problems with AI, to promote the informed use of AI among laypersons and to stimulate more socially responsible ways of developing these tools.”
More information:
Hangzhi Guo et al, Exposing AI Bias by Crowdsourcing: Democratizing Critique of Large Language Models, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2025). DOI: 10.1609/aies.v8i2.36620
Citation:
Lay intuition as effective at jailbreaking AI chatbots as technical methods, research suggests (2025, November 4)
retrieved 4 November 2025
from https://techxplore.com/news/2025-11-lay-intuition-effective-jailbreaking-ai.html