
Media professor says AI’s superior ability to formulate thoughts for us weakens our ability to think critically




AI’s superior ability to formulate thoughts and statements for us weakens our judgment and ability to think critically, says media professor Petter Bae Brandtzæg.

No one had heard of ChatGPT just three years ago. Today, 800 million people use the technology. AI is rolling out at record speed and has become the new normal.

Many AI researchers, Brandtzæg among them, are skeptical: AI is a technology that interferes with our ability to think, read, and write. “We can largely avoid social media, but not AI. It is integrated into social media, Word, online newspapers, email programs, and the like. We all become partners with AI—whether we want to or not,” says Brandtzæg.

The professor of media innovations at the University of Oslo has examined how AI affects us in the recently completed project “An AI-Powered Society.”

The freedom of expression commission overlooked AI

The project has been conducted in collaboration with the research institute SINTEF. It is the first of its kind in Norway to research generative AI, that is, AI that creates content, and how it affects both users and the public.

The background was Brandtzæg’s reaction to the report from the Norwegian Commission for Freedom of Expression, presented in 2022, which did not sufficiently address the impact of AI on society—at least not generative AI.

“There are studies showing that AI can weaken critical thinking. It affects our language, how we think and understand the world, and our moral judgment,” says Brandtzæg.

A few months after the Commission for Freedom of Expression report, ChatGPT was launched, making his research even more relevant.

“We wanted to understand how such generative AI affects society, and especially how AI changes social structures and relationships.”

AI-Individualism

The social implications of generative AI are a relatively new field of study that still lacks theory and concepts, so the researchers have launched the concept of “AI-individualism.” It builds on “networked individualism,” a framework launched in the early 2000s.

Back then, the concept captured how smartphones, the Internet, and social media enabled people to create and tailor their social networks beyond family, friends, and neighbors.

Networked individualism showed how technology weakened the old limits of time and place, enabling flexible, personalized networks. With AI, something new happens: the line between people and systems also starts to blur, as AI begins to take on roles that used to belong to humans.

“AI can also meet personal, social, and emotional needs,” says Brandtzæg.

With a background in psychology, he has long studied human-AI relationships with chatbots like Replika. ChatGPT and similar social AIs can provide immediate, personal support for any number of things.

“It strengthens individualism by enabling more autonomous behavior and reducing our dependence on people around us. While it can enhance personal autonomy, it may also weaken community ties. A shift toward AI-individualism could therefore reshape core social structures.”

He argues that the concept of “AI-individualism” offers a new perspective for understanding and explaining how relationships change in society with AI. “We use it as a relational partner, a collaborative partner at work, to make decisions,” says Brandtzæg.

Students choose chatbot

The project is based on several investigations, including a questionnaire in which 166 students gave open-ended answers about how they use AI.

“They (ChatGPT and MyAI) go straight to the point with what we ask, so we don’t have to search endlessly in books or online,” said one high school student about the benefits of AI.

“ChatGPT helps me with problems. I can open up and talk about difficult things, and get comfort and good advice,” another student responded.

Another study, an online experiment with a blind test, found that many people preferred answers from a chatbot over those from a professional when they had questions about mental health: more than half preferred the chatbot’s answers, less than 20% the professional’s, and 30% answered both.

“This shows how powerful this technology is, and that we sometimes prefer AI-generated content over human-generated,” says Brandtzæg.

‘Model power’

“Model power” is another concept the researchers have launched. It builds on a theory of power relations developed by sociologist Stein Bråten 50 years ago.

Model power is the influence that comes from possessing a model of reality that carries weight, and which others must accept in the absence of equivalent models of their own, according to the article “Modellmakt og styring” (“Model Power and Governance,” in the online newspaper Panorama, in Norwegian).

In the 1970s, the theory described how media, science, and various groups with authority held model power and could influence people. Now it’s AI.

Brandtzæg’s point is that AI-generated content no longer operates in a vacuum. It spreads everywhere: into public reports, news media, research, and encyclopedias. When we perform a Google search, the first thing we get is an AI-generated summary.

“A kind of AI layer is covering everything. We suggest that the model power of social AI can lead to model monopolies, significantly affecting human beliefs and behavior.”

Because AI models like ChatGPT are based on dialog, the researchers call them social AI. But how genuine is a dialog with a machine fed enormous amounts of text?

“Social AI can promote an illusion of real conversation and independence—a pseudo-autonomy through pseudo-dialog,” says Brandtzæg.

Critical but still following AI advice

According to an August 2025 survey from the Norwegian Communications Authority (Nkom), 91% of Norwegians are concerned about the spread of false information from AI services like Copilot, ChatGPT, and Gemini.

AI can hallucinate. In one well-known example, a report that the municipality of Tromsø used as the basis for a proposal to close eight schools turned out to rest on sources the AI had fabricated. AI may thus contribute to misinformation and undermine user trust in AI itself, in service providers, and in public institutions.

Brandtzæg asks how many other, smaller municipalities and public institutions have done the same, and he worries about this unintentional spread of misinformation.

He and his research colleagues have reviewed various studies indicating that although we like to say we are critical, we nevertheless follow AI’s advice, which highlights the model power of such AI systems.

“It’s perhaps not surprising that we follow the advice that we get. It’s the first time in history that we’re talking to a kind of almighty entity that has read so much. But it gives a model power that is scary. We believe we are in a dialog, that it’s cooperation, but it’s one-way communication.”

American monoculture

Another aspect of this model power is that the AI companies are based in the U.S. and their models are built on vast amounts of American data.

“We estimate that as little as 0.1% of what is in AI models like ChatGPT is Norwegian. This means that it is American information we relate to, which can affect our values, norms, and decisions.”

What does this mean for diversity? The principle is that the winner takes all; AI does not consider minority interests. Brandtzæg points out that the world has never before faced such an intrusive technology, which makes it necessary to regulate AI and balance it against real human needs and values.

“We must not forget that AI is not a public, democratic project. It’s commercial, and behind it are a few American companies and billionaires,” says Brandtzæg.

More information:
Marita Skjuve et al, Unge og helseinformasjon, Tidsskrift for velferdsforskning (2025). DOI: 10.18261/tfv.27.4.2

Petter Bae Brandtzaeg et al, AI Individualism, Oxford Intersections: AI in Society (2025). DOI: 10.1093/9780198945215.003.0099

Provided by
University of Oslo


Citation:
Media professor says AI’s superior ability to formulate thoughts for us weakens our ability to think critically (2025, November 16)
retrieved 16 November 2025
from https://techxplore.com/news/2025-11-media-professor-ai-superior-ability.html








Mark Zuckerberg Tries to Play It Safe in Social Media Addiction Trial Testimony



Zuckerberg repeatedly fell back on accusing Lanier of “mischaracterizing” his previous statements. When it came to emails, Zuckerberg typically objected based on how old the message was, or his lack of familiarity with the Meta employees involved. “I don’t think so, no,” he replied when directed to clarify if he knew Karina Newton, Instagram’s head of public policy in 2021. And Zuckerberg never failed to point out when he wasn’t actually on an email thread entered as evidence.

Perhaps anticipating these detached and repetitive talking points from Zuckerberg—who claimed over and over that any increased engagement from a user on Facebook or Instagram merely reflected the “value” of those apps—Lanier early on suggested that the CEO has been coached to address these issues. “You have extensive media training,” he said. “I think I’m sort of well-known to be pretty bad at this,” Zuckerberg protested, getting a rare laugh from the courtroom. Lanier went on to present Meta documents outlining communication strategies for Zuckerberg, describing his team as “telling you what kind of answers to give,” including in a context such as testifying under oath. “I’m not sure what you’re trying to imply,” Zuckerberg said. In the afternoon, Meta counsel Paul Schmidt returned to that line of questioning, asking if Zuckerberg had to speak to the media because of his role as head of a major business. “More than I would like,” Zuckerberg said, to more laughter.

In an even more, well, “meta” moment after the court had returned from lunch, Kuhl struck a stern tone by warning all in the room that anyone wearing “glasses that record”—such as the AI-equipped Oakley and Ray-Ban glasses sold by Meta for up to $499—had to remove them while attending the proceedings, where both video and audio recordings are prohibited.

K.G.M.’s suit and the others to follow are novel in their sidestepping of Section 230, a law that has protected tech companies from liability for content created by users on their platforms. As such, Zuckerberg stuck to a playbook that framed the lawsuit as a fundamental misunderstanding of how Meta works. When Lanier presented evidence that Meta teams were working on increasing the minutes users spent on their platforms each day, Zuckerberg countered that the company had long ago moved on from those objectives, or that those numbers were not even “goals” per se, just metrics of competitiveness within the industry. When Lanier questioned if Meta was merely hiding behind an age limit policy that was “unenforced” and maybe “unenforceable,” per an email from Nick Clegg, Meta’s former president of global affairs, Zuckerberg calmly deflected with a narrative about people circumventing their safeguards despite continual improvements on that front.

Lanier, though, could always return to K.G.M., who he said had signed up for Instagram at the age of 9, some five years before the app started asking users for their birthday in 2019. While Zuckerberg could more or less brush off internal data on, say, the need to convert tweens into loyal teen users, or Meta’s apparent rejection of the alarming expert analysis it had commissioned on the risks of Instagram’s “beauty filters,” he didn’t have a prepackaged response to Lanier’s grand finale: a billboard-sized tarp printed with hundreds of posts from K.G.M.’s Instagram account, which took up half the width of the courtroom and required seven people to hold. As Zuckerberg blinked hard at the vast display, visible only to himself, Kuhl, and the jury, Lanier said it was a measure of the sheer amount of time K.G.M. had poured into the app. “In a sense, y’all own these pictures,” he added. “I’m not sure that’s accurate,” Zuckerberg replied.

When Lanier had finished and Schmidt was given the chance to set Zuckerberg up for an alternate vision of Meta as a utopia of connection and free expression, the founder quickly gained his stride again. “I wanted people to have a good experience with it,” he said of the company’s platforms. Then, a moment later: “People shift their time naturally according to what they find valuable.”






The Best Bose Noise-Canceling Headphones Are Discounted Right Now



Bose helped write the book on noise canceling when it began developing the technology back in the 1970s. Lately, the brand has been on a tear, with the goal of creating the ultimate in sonic solitude. The QuietComfort Ultra Gen 2 are Bose’s latest and greatest creation, offering some of the very best noise canceling we’ve ever tested.

Just as importantly, they’re currently on sale for $50 off. Now, this might not seem like a huge discount on a $450 pair of headphones, but this is the lowest price we’ve seen on these headphones outside of a major shopping holiday. So if you missed your chance during Black Friday but you have a spring break trip to Mexico or Hawaii on the calendar, this is your best bet.

The Best Noise-Canceling Headphones Are on Sale

I’ve wondered over the last few years whether the best noise cancelers even needed another potency upgrade. Previous efforts like Sony’s WH-1000XM5, Apple’s AirPods Max, and Bose’s own QuietComfort 45 offered enough silence that my own wife gives me a jump scare when she walks up behind me.

Then I had a kid.

Bose’s properly named QuietComfort Ultra do a fantastic job quelling the many squeaks, squawks, and adorable pre-nap protests my baby makes. And now that my wife and I have turned my solo office into a shared space, I can go about my business in near-total sonic freedom, even as she sits in on a loud Zoom call.

In testing, we found Sony’s latest WH-1000XM6 offered a slight bump in noise canceling performance over Bose’s latest, due in part to their zippy response time when attacking unwanted sounds. But both were within a hair of each other when tested across frequencies. I prefer Bose’s pair for travel, due to their more cushy design that lets me listen for a full cross-country flight in luxe comfort.

Upgrades in the latest generation, like the ability to put the headphones to sleep and quickly wake them, make them noticeably more intuitive to use daily. The new built-in USB-C audio interface lets you listen to lossless audio directly from supported devices, a nice touch now that Spotify has joined Apple Music and other services in supporting lossless audio.

Speaking of audio, the QC Ultra Gen 2’s performance is impressive, providing clear and crisp detail and dialog, with a lively touch that brings some added excitement to instruments like percussion or zippy guitar tones. It’s a lovely overall presentation. I’m not a huge fan of the new spatial audio mode (what Bose calls Cinema mode), but it’s always nice to have options.

These headphones often bounce between full price and this $50 discount, so if you’ve been waiting for the dip, now’s the time to buy. If you deal with daily distractions like I do, whether at home or in a busy office, you’ll appreciate the latest level of sound-smashing solitude from the best noise cancelers Bose has ever made.








This Defense Company Made AI Agents That Blow Things Up



Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI’s agents are designed to seek and destroy things in the physical world with exploding drones.

In a recent demonstration, held at an undisclosed military base in central California, Scout AI’s technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.

“We need to bring next-generation AI to the military,” Colby Adcock, Scout AI’s CEO, told me in a recent interview. (Adcock’s brother, Brett Adcock, is the CEO of Figure AI, a startup working on humanoid robots). “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agentic assistant to being a warfighter.”

Adcock’s company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that harnessing AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls.

“It’s good for defense tech startups to push the envelope with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That’s exactly what they should be doing if the US is going to lead in military adoption of AI.”

Horowitz also notes, though, that harnessing the latest AI advances can prove particularly difficult in practice.

Large language models are inherently unpredictable, and AI agents—like the ones that control the popular AI assistant OpenClaw—can misbehave when given even relatively benign tasks like ordering goods online. Horowitz says it may be especially hard to demonstrate that such systems are robust from a cybersecurity standpoint—something that would be required for widespread military use.

Scout AI’s recent demo involved several steps where AI had free rein over combat systems.

At the outset of the mission the following command was fed into a Scout AI system known as Fury Orchestrator:

Fury Orchestrator, send 1 ground vehicle to checkpoint ALPHA. Execute a 2 drone kinetic strike mission. Destroy the blue truck 500m East of the airfield and send confirmation.

A relatively large AI model with over 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped computer on-site, interprets the initial command. Scout AI uses an undisclosed open source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicles and drones involved in the exercise. The smaller models act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles’ movements.
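To make that chain of command concrete, here is a minimal, purely illustrative Python sketch of the delegation pattern described above: an orchestrator agent fans a mission out to per-vehicle agents, which would in turn drive low-level controls. Every class, method, and the hard-coded plan are hypothetical stand-ins; none of this reflects Scout AI’s actual code or APIs.

# Hypothetical sketch of hierarchical agent delegation. In the real system,
# large and small language models would interpret each command; here the
# decomposition is hard-coded for illustration.
from dataclasses import dataclass, field

@dataclass
class VehicleAgent:
    """Stands in for a small (~10B-parameter) on-board model."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, task: str) -> str:
        # A real on-board agent would translate the task into low-level
        # movement and sensor commands; this stub just records it.
        self.log.append(task)
        return f"{self.name}: acknowledged '{task}'"

@dataclass
class Orchestrator:
    """Stands in for the large (~100B-parameter) mission-level agent."""
    vehicles: dict

    def run_mission(self, command: str) -> list:
        # A real orchestrator would parse the natural-language command with
        # an LLM; this stub hard-codes the plan from the demo narrative.
        plan = [
            ("ugv-1", "drive to checkpoint ALPHA and hold"),
            ("drone-1", "search east of the airfield for the target"),
            ("drone-2", "strike the confirmed target and send confirmation"),
        ]
        return [self.vehicles[vid].execute(task) for vid, task in plan]

if __name__ == "__main__":
    fleet = {vid: VehicleAgent(vid) for vid in ("ugv-1", "drone-1", "drone-2")}
    mission = "Send 1 ground vehicle to checkpoint ALPHA; execute a 2-drone strike."
    for report in Orchestrator(fleet).run_mission(mission):
        print(report)

The point of the hierarchy is that the large model only plans; execution is delegated downward to progressively smaller, faster models running closer to the hardware.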

Seconds after receiving its marching orders, the ground vehicle zipped off along a dirt road that winds between brush and trees. A few minutes later, the vehicle came to a stop and dispatched the pair of drones, which flew into the area where the target was said to be waiting. After spotting the truck, an AI agent running on one of the drones issued an order to fly toward it and detonate an explosive charge just before impact.




