Tiny explosions and soft materials make onscreen braille more robust


From texting on a smart phone to ordering train tickets at a kiosk, touch screens are ubiquitous and, in most cases, relatively reliable. But for people who are blind or visually impaired and use electronic braille devices, the technology can be vulnerable to the elements, easily broken or clogged by dirt, and difficult to repair.

By combining the materials of soft robotics with microscale combustion, Cornell researchers have now created a high-resolution electronic tactile display that is more robust than other haptic braille systems and can operate in messy, unpredictable environments.

The technology also has potential applications in teleoperation and automation, and could bring more tactile experiences to virtual reality.

The research is published in Science Robotics. The paper’s co-first authors are Ronald Heisser, Ph.D. ’23 and postdoctoral researcher Khoi Ly.

“The central premise of this work is two-fold: using energy stored in fluid to reduce the complexity of mass transport, and then thermal control of pressure to remove the requirements of complex valving,” said Rob Shepherd, the John F. Carr Professor of Mechanical Engineering in Cornell Engineering and the paper’s senior author.

“Very small amounts of combustible fuel allow us to create high-pressure actuation for tactile feedback wherever we like using small fluid channels, and cooling the gas during the reaction means this pressure stays localized and does not create pressure where we do not want it,” he said. “This chemical and thermal approach solves the long-standing ‘Holy Braille’ challenge.”

The majority of refreshable electronic tactile displays contain dozens of tiny, intricate components in a single braille cell, which has six raised dots. Considering that a page of braille can hold upwards of 6,000 dots, that adds up to a lot of moving parts, all at risk of being jostled or damaged. Also, most refreshable displays only have a single line of braille, with a maximum of roughly 40 characters, which can be extremely limiting for readers, according to Heisser.








“Now people want to have multi-line displays so you can show pictures, or if you want to edit a spreadsheet or write and read it back in braille,” he said.

Rather than relying on electromechanical systems—such as motors, hydraulics or tethered pumps—to power their tactile displays, Shepherd’s Organic Robotics Lab has taken a more explosive approach: micro combustion. In 2021, they unveiled a system in which liquid metal electrodes caused a spark to ignite a microscale volume of premixed methane and oxygen. The rapid combustion forced a haptic array of densely packed, 3-millimeter-wide actuators to cause molded silicone membrane dots—their form determined by a magnetic latching system—to pop up.

For the new iteration, the researchers created a 10-by-10-dot array of 2-millimeter-wide soft actuators, which are eversible—i.e., able to be turned inside out. When triggered by a mini combustion of oxygen and butane, the dots pop up in 0.24 milliseconds and remain fixed in place by virtue of their domed shape until a vacuum sucks them down. The untethered system maintains the elegance of soft robotics, Heisser said, resulting in something that is less bulky, less expensive and more resilient—“far beyond what typical braille displays are like.”
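To make the refresh logic concrete, here is a minimal Python sketch of how firmware for such a display might lay braille cells out on the dot grid and decide which dots to fire and which to retract. The 10-by-10 array size comes from the paper; the encoding table, cell spacing, and function names are illustrative assumptions, not details of the Cornell device.

```python
# Minimal sketch: laying out braille cells on a 10x10 eversible-dot array.
# The grid size comes from the paper; the encoding table, cell spacing, and
# function names here are illustrative assumptions only.

# Standard 6-dot braille patterns for a few letters (dot numbers 1-6).
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

GRID_ROWS, GRID_COLS = 10, 10
CELL_COLS = 2                        # a braille cell is 3 dots tall, 2 wide
CELL_PITCH_COLS = 3                  # leave one blank column between cells

def layout(text):
    """Return the set of (row, col) dots that should pop up for `text`."""
    raised = set()
    for i, ch in enumerate(text.lower()):
        col0 = i * CELL_PITCH_COLS
        if col0 + CELL_COLS > GRID_COLS:
            break                    # out of room on this line
        for dot in BRAILLE.get(ch, set()):
            # Dots 1-3 are the left column (top to bottom), 4-6 the right.
            r = (dot - 1) % 3
            c = 0 if dot <= 3 else 1
            raised.add((r, col0 + c))
    return raised

def refresh(current, target):
    """Dots to fire (pop up) and dots to vacuum down when changing the display."""
    return target - current, current - target

if __name__ == "__main__":
    up, down = refresh(current=set(), target=layout("abc"))
    print(f"fire {len(up)} dots, retract {len(down)} dots")
```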

“We opted to have this rubber format where we’re molding separate components together, but because we’re kind of molding it all in one go and adhering everything, you have sheets of rubber,” said Heisser, currently a postdoctoral researcher at the Massachusetts Institute of Technology. “So now, instead of having 1,000 moving parts, we just have a few parts, and these parts aren’t sliding against each other. They’re integrated in this way that makes it simpler from a manufacturing and use standpoint.”

The silicone sheets would be replaceable, extending the lifespan of the device, and could be scaled up to include a larger number of braille characters while still being relatively portable. The hermetically sealed design also keeps out dirt and troublesome liquids.

“From a maintenance standpoint, if you want to give someone the ability to read braille in a public setting, like a museum or restaurant or sports game, we think this sort of display would be much more appropriate, more reliable,” Heisser said. “So someone spills beer on the braille display, is it going to survive? We think, in our case, yes, you can just wipe it down.”

This type of technology has numerous potential applications in medicine and automation in which the sense of touch is important, from mimicking muscle to providing high-resolution feedback during surgery or from automated machines, in addition to increasing accessibility and literacy for people who are blind or visually impaired.

“As technologies become more and more digitized, as we rely more and more on computers, braille access becomes essential,” Heisser said. “Reading is equivalent to literacy. The workaround has been screen-reading technologies that allow you to interact with the computer, but don’t encourage your cognitive fluency.”

More information:
Ronald H. Heisser et al, Explosion-powered eversible tactile displays, Science Robotics (2025). DOI: 10.1126/scirobotics.adu2381

Provided by
Cornell University


Citation:
Tiny explosions and soft materials make onscreen braille more robust (2025, September 30)
retrieved 30 September 2025
from https://techxplore.com/news/2025-09-tiny-explosions-soft-materials-onscreen.html

A changing reporting landscape at the intersection of accounting and cryptocurrency


Cryptocurrency continues to reshape the financial landscape. As cryptocurrency moves from niche to mainstream, companies are grappling with how to account for these volatile digital assets. New research from Scheller College of Business accounting professor Robbie Moon, and his co-authors Chelsea M. Anderson, Vivian W. Fang, and Jonathan E. Shipman, sheds light on how U.S. public companies have navigated crypto holdings and accounting practices over the past decade.

ASU 2023-08, the Financial Accounting Standards Board’s (FASB) newly enacted rule, aims to bring clarity and consistency to crypto asset reporting with the mandate for fair value reporting. Moon’s research, which examined a comprehensive set of companies from 2013 to 2022, looks at the exponential rise in corporate crypto investments and the diverse, and often inconsistent, ways firms have reported them.

In “Accounting for Cryptocurrencies,” Moon and his co-authors work to better understand this pivotal point in financial reporting with research that dives into why firms hold crypto—whether for mining, payment acceptance, or investment—and how reporting practices have evolved to meet this current moment. The work is published in the Journal of Accounting Research.

Keep reading to learn more about Moon’s research and why it matters right now.

Why do companies hold cryptocurrencies, and how has this changed over time?

Companies hold cryptocurrency for three main reasons: they mine it, they accept it as payment, or they consider it an investment. Early on, most businesses kept crypto because customers used it to pay for goods and services. Around 2017, that trend declined, and more companies began mining crypto themselves. Today, mining accounts for about half of corporate crypto holdings, while payment acceptance and investment make up the rest.

What were the main challenges companies faced when trying to report cryptocurrency holdings in their financial statements?

Until the end of 2023, there were no official rules on how companies should report cryptocurrency on their financial statements. Back in 2018, the Big Four accounting firms (Deloitte, PwC, EY, and KPMG) stepped in with guidance, suggesting that crypto be treated like intangible assets, similar to things like patents or trademarks. This is known as the impairment model.

What is the difference between the ‘fair value model’ and the ‘impairment model’ for accounting crypto assets, and why does it matter?

The two accounting methods differ in how they handle changes in crypto value. The fair value model updates the value of a company’s crypto to match current market prices every reporting period. If the price goes up or down, the change shows up on the company’s income statement as a gain or loss.

The impairment model only lets companies record losses when the value drops below what they paid. If the price goes up, they can’t record the increase.

The difference in the two approaches can best be seen when crypto prices rise. Under the impairment model, companies’ balance sheets understate the true value of the crypto since the gains cannot be recorded. The fair value model allows companies to adjust the balance sheet value of crypto as market prices change.
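A small worked example makes the contrast concrete. The Python sketch below is purely illustrative: the purchase price, market prices, and function names are invented, and each model is reduced to its core arithmetic.

```python
# Illustrative comparison of the two reporting approaches described above.
# The purchase price and market prices are invented for the example.

def impairment_carrying_values(cost, period_prices):
    """Impairment model: carrying value can be written down, never back up."""
    carrying = cost
    values = []
    for price in period_prices:
        carrying = min(carrying, price)   # record losses only
        values.append(carrying)
    return values

def fair_value_carrying_values(period_prices):
    """Fair value model: carrying value tracks the market price each period."""
    return list(period_prices)

cost = 100.0                      # what the company paid for the crypto asset
prices = [80.0, 150.0, 120.0]     # market price at each reporting period end

print("impairment:", impairment_carrying_values(cost, prices))
# -> [80.0, 80.0, 80.0]   the later rebound is never recognised on the books
print("fair value:", fair_value_carrying_values(prices))
# -> [80.0, 150.0, 120.0] gains and losses both flow through each period's earnings
```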

What factors led ASU 2023–08 to favor fair value reporting?

When the FASB was trying to decide whether to add crypto accounting to its standard-setting agenda, it reached out to the public for feedback. The response was overwhelming, and most practitioners and firms called for the use of the fair value model.

How do big accounting firms, like Deloitte or PwC, influence how companies report their crypto holdings?

When there aren’t official rules for complex issues like crypto accounting, the Big Four firms often step in to guide companies. In 2018, they recommended using the impairment model, which they viewed as most appropriate based on existing standards. After that, most companies switched from fair value reporting to the impairment approach.

Their guidance in 2018 was based on what was allowed under the standards at that time. With the new rule in place, the firms will likely help clients manage the transition.

Does using fair value accounting for crypto make a company’s stock price more volatile or its earnings reports more useful to investors?

The primary downside of using a fair value model for a risky asset like crypto is how volatility affects earnings. Moon’s research suggests that stock price volatility increases for firms using the fair value model, and it doesn’t appear the model makes earnings more useful for investors. That said, the results should be viewed cautiously because the study’s sample largely consisted of smaller companies.

Why does this research matter right now?

This research matters because more companies are investing in cryptocurrency, and that trend is only expected to grow. The research looks at how businesses handled crypto before official rules came out in 2023, showing that many treated it like traditional investments. This provides a baseline against which future research can evaluate the new rule.

The research also warns that the fair value approach could make stock prices more volatile without necessarily making earnings reports more useful for investors.

More information:
Chelsea M. Anderson et al, Accounting for Cryptocurrencies, Journal of Accounting Research (2025). DOI: 10.1111/1475-679x.70018

Citation:
A changing reporting landscape at the intersection of accounting and cryptocurrency (2025, November 17)
retrieved 17 November 2025
from https://techxplore.com/news/2025-11-landscape-intersection-accounting-cryptocurrency.html

Getting started with agentic AI | Computer Weekly

A study by Boston Consulting Group (BCG) suggests that organisations that lead in technology development are gaining a first-mover advantage when it comes to artificial intelligence (AI) and using agentic AI to improve business processes.

What is striking about BCG’s findings, according to Jessica Apotheker, managing director and senior partner at Boston Consulting Group, is that the leading companies in AI are mostly the same ones that were leaders eight years ago.

“What this year’s report shows is that the value gap between these companies and others is widening quite a bit,” she says. In other words, BCG’s research shows that organisations that have invested disproportionately in technology achieve a higher return from that investment.

Numerous pieces of research show that a high proportion of AI initiatives are failing to deliver measurable business success. BCG’s Build for the future 2025 report shows that the companies it rates as the best users of AI generate 1.7 times more revenue growth than the 60% of companies in the categories it defines as stagnating or emerging.

For Ilan Twig, co-founder and chief technology officer (CTO) at Navan, AI projects that fail to deliver value are indicative of how businesses use AI technology. Too often, AI is dropped on top of old systems and outdated processes. 

Building on RPA

However, there is certainly a case to build on previous initiatives such as robotic process automation (RPA).

Speaking at the recent Forrester Technology and Innovation Summit in London, Bernhard Schaffrik, principal analyst at Forrester, discussed how agentic AI can be built on top of a deterministic RPA system to provide greater flexibility than what existing systems can be programmed to achieve.

The analyst firm uses the term “process orchestration” to describe the next level of automating business processes, using agentic AI in workflow to handle ambiguities far more easily than the programming scripts used in RPA.

“Classic process automation tools require you to know everything at the design stage – you need to anticipate all of the errors and all the exceptions,” says Schaffrik.

He points out that considering these things at design time is unrealistic when trying to orchestrate complex processes. But new tools are being developed for process orchestration that rely on AI agents.

A strong data foundation

Boston Consulting Group (BCG) says prerequisites for the successful roll-out of AI agents include strong data foundations, scaled AI capabilities and clear governance.

Standardisation of data is a key requirement for success, according to Twig. “A big part of the issue is data,” he says. “AI is only as strong as the information it runs on, and many companies don’t have the standardised, consistent datasets needed to train or deploy it reliably.”

Within the context of agentic AI, this is important to avoid miscommunications both at the technology infrastructure level and in people’s understanding of the information. But the entire data foundation does not have to be built all at once.

BCG’s Apotheker says companies can have an enterprise-wide goal to achieve clean data, and build this out one project at a time, providing a clean data foundation on which subsequent projects can be built. In doing so, organisations are able to gain a better understanding of the enterprise data these projects require while they ensure that the datasets are clean and good data management practices are followed.

A working agentic AI strategy relies on AI agents connected by a metadata layer, whereby people understand where and when to delegate certain decisions to the AI or pass work to external contractors. It’s a focus on defining the role of the AI and where people involved in the workflow need to contribute. 

This functionality can be considered a sort of platform. Scott Willson, head of product marketing at xtype, describes AI workflow platforms as orchestration engines, coordinating multiple AI agents, data sources and human touchpoints through sophisticated non-deterministic workflows. At the code level, these platforms may implement event-driven architectures using message queues to handle asynchronous processing and ensure fault tolerance.

Data lineage tracking should happen at the code level through metadata propagation systems that tag every data transformation, model inference and decision point with unique identifiers. Willson says this creates an immutable audit trail that regulatory frameworks increasingly demand. According to Willson, advanced implementations may use blockchain-like append-only logs to ensure governance data cannot be retroactively modified.
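As a rough illustration of the lineage tagging and append-only audit trail Willson describes, the sketch below stamps each transformation, inference and decision point with a unique identifier and chains records by hash so earlier entries cannot be silently rewritten. The class, field names and event structure are assumptions made for illustration, not any particular platform's API.

```python
# Sketch of metadata propagation for data lineage, roughly in the spirit of
# the audit-trail design described above. Names and structure are assumed.
import hashlib
import json
import uuid
from dataclasses import dataclass, field


@dataclass
class AuditLog:
    """Append-only log: each record is chained to the previous one by hash."""
    records: list = field(default_factory=list)

    def append(self, event: dict) -> str:
        prev_hash = self.records[-1]["hash"] if self.records else ""
        record_id = str(uuid.uuid4())           # unique id for this step
        body = {"id": record_id, "prev": prev_hash, "event": event}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)
        return record_id


log = AuditLog()
# Tag every transformation, model inference and decision point with an identifier.
step1 = log.append({"stage": "ingest", "source": "crm_export.csv"})
step2 = log.append({"stage": "model_inference", "parent": step1, "model": "classifier-v2"})
step3 = log.append({"stage": "human_review", "parent": step2, "approved": True})
print(len(log.records), "audit records, last hash:", log.records[-1]["hash"][:12])
```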

Adapting workflows and change management

Having built AI-native systems from the ground up and transformed the company’s own product development processes using AI, Alan LeFort, CEO and co-founder of StrongestLayer, notes that most organisations are asking completely the wrong questions when evaluating AI workflow platforms.

“The fundamental issue isn’t technological, it’s actually organisational,” he says.

Conway’s Law states that organisations design systems that mirror their communication structures. But, according to LeFort, most AI workflow evaluations assume organisations bolt AI onto existing processes designed around human limitations. This, he says, results in serial decision-making, risk-averse approval chains and domain-specific silos.

“AI doesn’t have those limitations. AI can parallelise activities that humans must do serially, doesn’t suffer from territorial knowledge hoarding and doesn’t need the elaborate safety nets we’ve built around human fallibility,” he adds. “When you try to integrate AI into human-designed processes, you get marginal improvements. When you redesign processes around AI capabilities, you get exponential gains.”

StrongestLayer recently transformed its front-end software development process using this principle. Traditional product development flows serially: a product manager talks to customers and extracts requirements, then hands over to the user experience team for design; the programme management team approves the design, and developers implement the software. It used to take 18-24 months to completely rebuild the application in this process, he says.

Instead of bolting AI onto this process, LeFort says StrongestLayer “fundamentally reimagined it”.

“We created a full-stack prototyper role, paired with a front-end engineer focused on architecture. The key was building an AI pipeline that captured the contextual knowledge of each role: design philosophy, tech stack preferences, non-functional requirements, testing standards and documentation needs.”

As a result of making these workflow changes, he says the company was able to achieve the same outcome from a product development perspective in a quarter of the time. This, he says, was not necessarily achieved by working faster, but by redesigning the workflow around AI’s ability to parallelise activities that humans perform sequentially.

LeFort expected to face pushback. “My response was to lead from the front. I paired directly with our chief product officer, Joshua Bass, to build the process, proving it worked before asking others to adopt it. We reframed success for our front-end engineer around velocity and pioneering new ways of working,” he says.

For LeFort, true speed to value comes from two fundamental sources: eliminating slack time between value activities and accelerating individual activity completion through AI automation. “This requires upfront investment in process redesign rather than quick technology deployment,” he says.

LeFort urges organisations to evaluate AI workflow platforms based on their ability to enable fundamental process transformation, rather than working to integrate existing inefficiencies.

Getting agentic AI decision-making right 

Research from BCG suggests that the best way to deploy agents is through a few high-value workflows with clear implementation plans and workforce training, rather than in a massive roll-out of agents everywhere at once.

One of the areas IT leaders need to consider is that their organisation will more than likely rely on a number of AI models to support agentic AI workflows. For instance, Ranil Boteju, chief data and analytics officer at Lloyds Banking Group, believes different models can be tasked with tackling each distinct part of a customer query.

“The way we think about this is that there are different models with different strengths, and what we want to do is to use the best model for each task,” says Boteju. This approach is how the bank sees agentic AI being deployed.

With agentic AI, problems can be broken down into smaller and smaller parts, with different agents responding to each part. Boteju believes in using AI agents to check the output of other agents, acting rather like a judge or a second-line colleague serving as an observer. This can help cut erroneous decision-making arising from AI hallucinations, where the AI model produces a spurious result.
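One way to picture this decompose-and-verify pattern is sketched below. It is a toy illustration only: call_model is a placeholder for whatever model API a team actually uses, and the model names, prompts and rejection check are invented, not Lloyds Banking Group's implementation.

```python
# Toy sketch of the decompose-then-verify pattern described above.
# call_model stands in for a real LLM API; model names, prompts and the
# rejection check are purely illustrative.

def call_model(model: str, prompt: str) -> str:
    """Placeholder: stand-in for a real model call."""
    return f"[{model} answer to: {prompt[:40]}...]"

def answer_query(query: str) -> str:
    # 1. Break the query into smaller sub-tasks (here, naively by sentence).
    subtasks = [s.strip() for s in query.split(".") if s.strip()]

    # 2. Route each sub-task to the model judged best for that kind of task.
    drafts = [call_model("task-specialist", s) for s in subtasks]

    # 3. A separate "judge" agent reviews the combined draft before it is
    #    returned, acting as a second-line check against hallucination.
    combined = " ".join(drafts)
    verdict = call_model("judge", f"Check this answer for errors: {combined}")

    return combined if "reject" not in verdict.lower() else "escalate to a human"

print(answer_query("What is my current balance. Can I increase my overdraft."))
```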

IT security in agentic AI

People in IT tend to appreciate the importance of adhering to cyber security best practices. But as Fraser Dear, head of AI and innovation at BCN, points out, most users do not think like a software developer who keeps governance in mind when creating their own agents. He urges organisations to impose policies that ensure the key security steps are not skipped in the rush to deploy agentic AI.

“Think about what these AI agents might access across SharePoint: multiple versions of documents, transcripts, HR files, salary data, and lots more. Without guardrails, AI agents can access all this indiscriminately. They won’t necessarily know which versions of these documents are draft and which are approved,” he warns.

The issue escalates when an agent created by one person is made available to a wider group of colleagues. It can inadvertently give them access to data that is beyond their permission level.

Dear believes data governance needs to include configuring data boundaries, restricting who can access what data according to job role and sensitivity level. The governance framework should also specify which data resources the AI agent can pull from.

In addition, he says AI agents should be built for a purpose, using principles of least privilege: “Just like any other business-critical application, it needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to whom, and how accurate the data is. Track and audit which agents are accessing which data and for what purpose, and implement real-time alerts to flag unusual access patterns.”
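Dear's guardrails amount to an explicit, deny-by-default policy that is checked and logged before an agent touches a data source. The sketch below is a generic illustration under that assumption; the roles, data sources and function names are invented, not taken from any specific product.

```python
# Minimal sketch of least-privilege data access for an AI agent, in the
# spirit of the guardrails described above. Roles, data sources and the
# logging approach are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)

# Which data sources each job role may expose through an agent.
POLICY = {
    "hr_partner":   {"hr_files", "salary_data"},
    "sales_rep":    {"crm_notes", "approved_documents"},
    "any_employee": {"approved_documents"},
}

def agent_can_read(role: str, source: str) -> bool:
    """Deny by default; log every access decision so it can be audited."""
    allowed = source in POLICY.get(role, set())
    logging.info("access %s: role=%s source=%s",
                 "granted" if allowed else "denied", role, source)
    return allowed

assert agent_can_read("hr_partner", "salary_data")
assert not agent_can_read("sales_rep", "salary_data")   # blocked: out of role
```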

A bumpy ride ahead

What these conversations with technology experts illustrate is that there is no straightforward path to achieving a measurable business benefit from agentic AI workflows – and what’s more, these systems need to be secure by design.

Organisations need to have the right data strategy in place, and they should already be well ahead on their path to full digitisation, where automation through RPA is being used to connect many disparate workflows. Agentic AI is the next stage of this automation, where an AI is tasked with making decisions in a way that would have previously been too clunky using RPA.

However, automation of workflows and business processes is just one piece of an overall jigsaw. There is a growing realisation that the conversation in the boardroom needs to move beyond people and processes.

BCG’s Apotheker believes business leaders should reassess what is important to their organisation and what they want to focus on going forward. This goes beyond the build versus buy debate: some processes and tasks should be owned by the business; some may be outsourced to a provider that may well use AI; and some will be automated through agentic AI workflows internally.

It is rather like business process engineering, where elements powered by AI sit alongside tasks outsourced to an external service provider. For Apotheker, this means businesses need to have a firm grasp of what part of the business process is strategically important and can be transformed internally.

Business leaders then need to figure out how to connect the strategically important part of the workflow to what the business actually outsources or potentially automates in-house.



Media professor says AI’s superior ability to formulate thoughts for us weakens our ability to think critically


AI’s superior ability to formulate thoughts and statements for us weakens our judgment and ability to think critically, says media professor Petter Bae Brandtzæg.

No one had heard of ChatGPT just three years ago. Today, 800 million people use the technology. AI is rolling out at record-breaking speed and has become the new normal.

Many AI researchers, like Brandtzæg, are skeptical. AI is a technology that interferes with our ability to think, read, and write. “We can largely avoid social media, but not AI. It is integrated into social media, Word, online newspapers, email programs, and the like. We all become partners with AI—whether we want to or not,” says Brandtzæg.

The professor of media innovations at the University of Oslo has examined how AI affects us in the recently completed project “An AI-Powered Society.”

The freedom of expression commission overlooked AI

The project has been conducted in collaboration with the research institute SINTEF. It is the first of its kind in Norway to research generative AI, that is, AI that creates content, and how it affects both users and the public.

The background was that Brandtzæg reacted to the fact that the report from the Norwegian Commission for Freedom of Expression, which was presented in 2022, did not sufficiently address the impact of AI on society—at least not generative AI.

“There are studies that show that AI can weaken critical thinking. It affects our language, how we think, understand the world, and our moral judgment,” says Brandtzæg.

A few months after the Commission for Freedom of Expression report, ChatGPT was launched, making his research even more relevant.

“We wanted to understand how such generative AI affects society, and especially how AI changes social structures and relationships.”

AI-Individualism

The social implications of generative AI are a relatively new field of study that still lacks theory and concepts, so the researchers have launched the concept of “AI-individualism.” It builds on “networked individualism,” a framework introduced in the early 2000s.

Back then, the need was to express how smartphones, the Internet, and social media enabled people to create and tailor their social networks beyond family, friends, and neighbors.

Networked individualism showed how technology weakened the old limits of time and place, enabling flexible, personalized networks. With AI, something new happens: the line between people and systems also starts to blur, as AI begins to take on roles that used to belong to humans.

“AI can also meet personal, social, and emotional needs,” says Brandtzæg.

With a background in psychology, he has for a long time studied human-AI relationships with chatbots like Replika. ChatGPT and similar social AIs can provide immediate, personal support for any number of things.

“It strengthens individualism by enabling more autonomous behavior and reducing our dependence on people around us. While it can enhance personal autonomy, it may also weaken community ties. A shift toward AI-individualism could therefore reshape core social structures.”

He argues that the concept of “AI-individualism” offers a new perspective for understanding and explaining how relationships change in society with AI. “We use it as a relational partner, a collaborative partner at work, to make decisions,” says Brandtzæg.

Students choose chatbot

The project is based on several investigations, including a questionnaire with open-ended answers from 166 students on how they use AI.

“They (ChatGPT and MyAI) go straight to the point regarding what we ask, so we don’t have to search endlessly in the books or online,” said one high school student about the benefits of AI.

“ChatGPT helps me with problems, I can open up and talk about difficult things, get comfort and good advice,” responded a student.

In another study, an online experiment with a blind test, it turned out that many preferred answers from a chatbot over those from a professional when they had questions about mental health. More than half preferred the chatbot’s answers, less than 20% preferred the professional’s, and about 30% said both.

“This shows how powerful this technology is, and that we sometimes prefer AI-generated content over human-generated,” says Brandtzæg.

‘Model power’

The theory of “model power” is another concept they’ve launched. It builds on a power relationship theory developed by sociologist Stein Bråten 50 years ago.

Model power is the influence one has by possessing a model of reality that carries weight, and which others must accept in the absence of equivalent models of their own, according to the article “Modellmakt og styring” (“Model power and governance,” in the Norwegian online newspaper Panorama).

In the 1970s, it was about how media, science, and various groups with authority could influence people, and had model power. Now it’s AI.

Brandtzæg’s point is that AI-generated content no longer operates in a vacuum. It spreads everywhere: in public reports, news media, research, and encyclopedias. When we perform Google searches, we first get an AI-generated summary.

“A kind of AI layer is covering everything. We suggest that the model power of social AI can lead to model monopolies, significantly affecting human beliefs and behavior.”

Because AI models, like ChatGPT, are based on dialog, they call them social AI. But how genuine is a dialog with a machine fed with enormous amounts of text?

“Social AI can promote an illusion of real conversation and independence—a pseudo-autonomy through pseudo-dialog,” says Brandtzæg.

Critical but still following AI advice

According to a survey from The Norwegian Communications Authority (Nkom) from August 2025, 91% of Norwegians are concerned about the spread of false information from AI services like Copilot, ChatGPT, and Gemini.

AI can hallucinate. A known example is a report that the municipality of Tromsø used as the basis for a proposal to close eight schools; it cited sources that AI had fabricated. AI may thus contribute to misinformation and undermine user trust in AI, service providers, and public institutions.

Brandtzæg asks how many other smaller municipalities and public institutions have done the same, and he is worried about this unintentional spread of misinformation.

He and his researcher colleagues have reviewed various studies indicating that although we like to say we are critical, we nevertheless follow AI’s advice, which highlights the model power in such AI systems.

“It’s perhaps not surprising that we follow the advice that we get. It’s the first time in history that we’re talking to a kind of almighty entity that has read so much. But it gives a model power that is scary. We believe we are in a dialog, that it’s cooperation, but it’s one-way communication.”

American monoculture

Another aspect of this model power is that the AI companies are based in the U.S. and built on vast amounts of American data.

“We estimate that as little as 0.1% is Norwegian in AI models like ChatGPT. This means that it is American information we relate to, which can affect our values, norms and decisions.”

What does this mean for diversity? The principle is that “the winner takes it all.” AI does not consider minority interests. Brandtzæg points out that the world has never before faced such an intrusive technology, which necessitates regulation and balancing against real human needs and values.

“We must not forget that AI is not a public, democratic project. It’s commercial, and behind it are a few American companies and billionaires,” says Brandtzæg.

More information:
Marita Skjuve et al, Unge og helseinformasjon, Tidsskrift for velferdsforskning (2025). DOI: 10.18261/tfv.27.4.2

Petter Bae Brandtzaeg et al, AI Individualism, Oxford Intersections: AI in Society (2025). DOI: 10.1093/9780198945215.003.0099

Provided by
University of Oslo


Citation:
Media professor says AI’s superior ability to formulate thoughts for us weakens our ability to think critically (2025, November 16)
retrieved 16 November 2025
from https://techxplore.com/news/2025-11-media-professor-ai-superior-ability.html
