
Fake or the real thing? How AI can make it harder to trust the pictures we see



Top row features genuine pictures of people, with AI-generated versions underneath. Credit: Swansea University

A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.

Using the AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel created highly realistic images of both fictional and famous faces, including celebrities.

They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.

Across four experiments, the researchers noted that adding comparison photos or the participants’ prior familiarity with the faces provided only limited help.

The research has just been published in the journal Cognitive Research: Principles and Implications. The team say their findings highlight a new level of “deepfake realism,” showing that AI can now produce convincing fake images of real people, which could erode trust in visual media.

Professor Jeremy Tree, from the School of Psychology, said, “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

“The fact that everyday AI tools can do this not only raises serious concerns about misinformation and trust in visual media but also makes reliable detection methods a matter of urgency.”

One of the experiments, which involved participants from the US, Canada, the UK, Australia and New Zealand, saw subjects shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos indicates just how plausible they were.

Another experiment saw participants asked if they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the study’s results showed just how difficult individuals can find it to spot the authentic version.

The researchers say AI’s ability to produce novel, synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence perceptions of both the identity and the brand or organization they are portrayed as supporting.

Professor Tree added, “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes. That is why we urgently need to find new ways to detect them.

“While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”

More information:
Robin S. S. Kramer et al, AI-generated images of familiar faces are indistinguishable from real photographs, Cognitive Research: Principles and Implications (2025). DOI: 10.1186/s41235-025-00683-w

Provided by
Swansea University


Citation:
Fake or the real thing? How AI can make it harder to trust the pictures we see (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-fake-real-ai-harder-pictures.html







Study uncovers oxygen trapping as cause of voltage loss in sodium cathodes



by Li Jingxin and Zhao Weiwei, Hefei Institutes of Physical Science, Chinese Academy of Sciences

Evolution of intermediate oxygen species during the activation cycles. Credit: Li Chao

A research team led by Prof. Li Chao from East China Normal University has uncovered the origin of voltage decay in P2-type layered oxide cathodes. Using electron paramagnetic resonance (EPR) spectroscopy at the Steady-State Strong Magnetic Field Facility (SHMFF) of the Hefei Institutes of Physical Science, Chinese Academy of Sciences, the team tracked the dynamic evolution of oxygen species and clarified their direct role in structural degradation.

The findings, published in Advanced Energy Materials, provide new guidance for designing more stable sodium-ion cathodes.

P2-type sodium layered oxides (NaxAyTM1-yO2) have long been considered stable toward anion redox reactions compared with their Li-rich O3-type counterparts, with suppressed voltage decay. However, the team observed significant voltage decay in the high-Na-content P2-type Na0.8Li0.26Mn0.74O2 during cycling—an anomaly unexplainable by existing theories.

The researchers identified a clear sequence of oxygen transformations upon charging, eventually leading to the formation of molecular O2. While early cycles showed that this oxygen could still be reduced during discharge, with continued cycling a growing fraction of O2 remained trapped in the discharged state. This irreversible accumulation was pinpointed as the primary driver of voltage decay and capacity loss.

In this study, EPR proved critical as it enabled noninvasive monitoring of oxygen redox behavior and revealed how reactive oxygen intermediates gradually evolve and accumulate during cycling.

EPR further exposed local structural changes: signals associated with spin interactions between manganese and oxidized oxygen became more pronounced with cycling, consistent with the development of Mn-rich and Li-rich domains. These segregation effects, exacerbated by unreduced O2, aggravated the performance degradation.

Evolution of intermediate oxygen species over cycling and the accompanying structural rearrangements. Credit: Li Chao

Importantly, the team also explained why high sodium-content cathodes behave differently from their low sodium-content counterparts. In high-Na materials, insufficient interlayer spacing allows migration and vacancy growth, making them vulnerable to oxygen trapping.

By contrast, low-Na cathodes with larger spacing remain stable and show no evidence of trapped oxygen.

This study highlights the unique value of EPR in monitoring oxygen redox behavior and suggests that bulk modification strategies are key to mitigating decay and developing high-performance cathodes for next-generation batteries, according to the team.

More information:
Chunjing Hu et al, Accumulation of Unreduced Molecular O2 Explains Abnormal Voltage Decay in P2-Type Layered Oxide Cathode, Advanced Energy Materials (2025). DOI: 10.1002/aenm.202503491

Provided by
Hefei Institutes of Physical Science, Chinese Academy of Sciences

Citation:
Study uncovers oxygen trapping as cause of voltage loss in sodium cathodes (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-uncovers-oxygen-voltage-loss-sodium.html







New testing scheme could work for chips and clinics



Illustration of the approach for SSClass. (a) In this case, Condition (2) is satisfied for class j, and testing can stop. (b) Here, Condition (2) is not satisfied; f(X) could be j or j + 1, so testing must continue. Credit: Operations Research (2025). DOI: 10.1287/opre.2023.0431

Diagnostic testing is big business. The global market for testing semiconductors for defects is estimated at $39 billion in 2025. For medical lab tests, the market is even bigger: $125 billion.

Both kinds of tests have something in common, says Rohan Ghuge, assistant professor of decision science in the information, risk, and operations management department at Texas McCombs. They involve complex systems with vast numbers of components, whether they’re evaluating computer chips or human bodies.

New research from Texas McCombs suggests a new approach to testing complex systems that might save time by eliminating some unnecessary and expensive steps. “Nonadaptive Stochastic Score Classification and Explainable Half-Space Evaluation” is published in Operations Research.

Currently, a common shortcut is to conduct sequences of tests. Instead of testing every component—which isn’t practical for complex systems—a clinician might test certain components first. Each round rules out some possible problems and sets up a new round of tests.

That approach has time-consuming drawbacks, Ghuge says. “First, you might check the vital signs. Then, you come back the next day and do an ECG [electrocardiogram], then we do blood work, step by step. That’s going to take a lot of time, which we don’t really want to waste for a patient.”

What if, he wondered, a single round of tests could provide the most critical information in a fraction of the time? What if the same protocol could prove useful for chips or in clinics?

“We want something that’s highly scalable, deployable, and uniform,” he says. “You need to have it in a way that can be deployed on thousands of kinds of chips, or a first step that you give to clinicians for every patient of that kind.”

Merging success and failure

The key, Ghuge theorized, was to choose a small number of tests that could quickly classify a system’s risk level: low, medium, or high. With Anupam Gupta of New York University and Viswanath Nagarajan of the University of Michigan, he set out to design such a protocol.

Their solution was to combine two sets of tests with opposite goals. One set diagnoses whether a system is working, while the other diagnoses whether it’s failing. Together, they can provide a snapshot of risk.

“You create two lists, say, a success list and a failure list,” Ghuge says. “You combine a fraction of the first list and a fraction of the second list. You want to come up with a single batch of tests that tell you at the same time whether the system is working or failing.”
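As a rough illustration of this idea, the sketch below (in Python, with invented names, probabilities, and costs; a simplified toy, not the algorithm from the paper) fixes one interleaved test order in advance and stops as soon as the risk band, defined here by the count of passed tests, can no longer change.

```python
import random

# Toy model: classify a system as low/medium/high risk from the number of
# passed tests, using one test order fixed in advance (nonadaptive) with an
# early exit. All numbers and names below are illustrative assumptions.

def band_if_known(passes, fails, n, bands):
    """Return the risk band once no remaining test outcome can change it."""
    remaining = n - passes - fails
    lo, hi = passes, passes + remaining      # range of possible final pass counts
    for name, (b_lo, b_hi) in bands.items():
        if b_lo <= lo and hi <= b_hi:
            return name
    return None

def run_fixed_order(order, p, c, bands):
    """Run tests in a fixed order; test i passes with probability p[i]."""
    n, passes, fails, cost = len(p), 0, 0, 0.0
    for i in order:
        known = band_if_known(passes, fails, n, bands)
        if known:
            return known, cost               # class already determined: stop
        cost += c[i]
        if random.random() < p[i]:
            passes += 1
        else:
            fails += 1
    return band_if_known(passes, fails, n, bands), cost

p = [0.9, 0.8, 0.5, 0.3, 0.1]                # pass probabilities (invented)
c = [1.0, 2.0, 1.5, 1.0, 2.5]                # test costs (invented)
# "Success list": cheapest expected cost per pass; "failure list": per failure.
success_list = sorted(range(5), key=lambda i: c[i] / p[i])
failure_list = sorted(range(5), key=lambda i: c[i] / (1 - p[i]))
# Interleave fractions of both lists (skipping repeats) into one batch order.
order = list(dict.fromkeys(t for pair in zip(success_list, failure_list)
                           for t in pair))
bands = {"high": (0, 1), "medium": (2, 3), "low": (4, 5)}  # by # of passes
print(run_fixed_order(order, p, c, bands))
```

The early-exit check is what keeps a fixed, nonadaptive order affordable: the batch is planned once, but tests whose outcomes can no longer affect the classification are never paid for.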

An existing medical example, he says, is the HEART Score. It rates five factors, such as age and ECG results, to quickly assess the risk that a patient with chest pain will have a major cardiac event within six weeks.

In simulations, Ghuge tested his algorithm against a sequential one on the same sets of data. His algorithm got results over 100 times as fast as the sequential one, at a cost that averaged 22% higher.

“The tests are a bit more costly,” he says. “The trade-off is that you can get them done a lot faster.”

But he also notes that a single batch of tests might reduce setup costs compared with the expense of setting up one test after another.

A next step, Ghuge hopes, is to try out his algorithm on real-life testing. A broadband internet network, such as Google Fiber or Spectrum, might use it for daily testing, to rapidly diagnose whether a system or subsystem is working.

“I come from a more theoretical background that focuses on the right model,” he says. “There’s a gap between that and applying it in practice. I’m excited to speak with people, to talk to practitioners and see if these can be applied.”

More information:
Rohan Ghuge et al, Nonadaptive Stochastic Score Classification and Explainable Half-Space Evaluation, Operations Research (2025). DOI: 10.1287/opre.2023.0431

Citation:
New testing scheme could work for chips and clinics (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-scheme-chips-clinics.html







Researchers propose a new model for legible, modular software



Credit: CC0 Public Domain

Coding with large language models (LLMs) holds huge promise, but it also exposes some long-standing flaws in software: code that’s messy, hard to change safely, and often opaque about what’s really happening under the hood. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are charting a more “modular” path ahead.

Their new approach breaks systems into “concepts,” separate pieces of a system, each designed to do one job well, and “synchronizations,” explicit rules that describe exactly how those pieces fit together. The result is software that’s more modular, transparent, and easier to understand.

A small domain-specific language (DSL) makes it possible to express synchronizations simply, in a form that LLMs can reliably generate. In a real-world case study, the team showed how this method can bring together features that would otherwise be scattered across multiple services. The paper is published in the Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.

The team, including Daniel Jackson, an MIT professor of electrical engineering and computer science (EECS) and CSAIL associate director, and Eagon Meng, an EECS Ph.D. student, CSAIL affiliate, and designer of the new synchronization DSL, explore this approach in their paper “What You See Is What It Does: A Structural Pattern for Legible Software,” which they presented at the SPLASH conference in Singapore in October.

The challenge, they explain, is that in most modern systems, a single feature is never fully self-contained. Adding a “share” button to a social platform like Instagram, for example, doesn’t live in just one service. Its functionality is split across code that handles posting, notifications, user authentication, and more. All these pieces, despite being scattered across the code, must be carefully aligned, and any change risks unintended side effects elsewhere.

Jackson calls this “feature fragmentation,” a central obstacle to software reliability. “The way we build software today, the functionality is not localized. You want to understand how ‘sharing’ works, but you have to hunt for it in three or four different places, and when you find it, the connections are buried in low-level code,” says Jackson.

Concepts and synchronizations are meant to tackle this problem. A concept bundles up a single, coherent piece of functionality, like sharing, liking, or following, along with its state and the actions it can take. Synchronizations, on the other hand, describe at a higher level how those concepts interact.

Rather than writing messy low-level integration code, developers can use a small domain-specific language to spell out these connections directly. In this DSL, the rules are simple and clear: one concept’s action can trigger another, so that a change in one piece of state can be kept in sync with another.
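The paper’s DSL itself isn’t reproduced in the article, but the shape of the pattern can be sketched in ordinary Python (all class and function names below are invented for illustration): two self-contained concepts, and a synchronization that wires them together from the outside.

```python
class Post:
    """Concept: sharing posts. Owns its state; knows nothing of notifications."""
    def __init__(self):
        self.posts = []
        self.on_share = []                  # hooks filled in only by synchronizations

    def share(self, author, text):
        self.posts.append((author, text))
        for rule in self.on_share:          # an action can trigger other concepts
            rule(author, text)

class Notification:
    """Concept: notifying users. Knows nothing of posts."""
    def __init__(self):
        self.outbox = []

    def notify(self, user, message):
        self.outbox.append((user, message))

def sync_share_notifies(post, notification, followers_of):
    """Synchronization: when Post.share fires, notify the author's followers.
    The rule lives in one legible place, outside both concepts."""
    def rule(author, text):
        for user in followers_of(author):
            notification.notify(user, f"{author} shared: {text}")
    post.on_share.append(rule)

posts, notes = Post(), Notification()
sync_share_notifies(posts, notes, followers_of=lambda a: ["bob", "carol"])
posts.share("alice", "hello")
print(notes.outbox)                         # both followers notified via the sync
```

Note that neither concept imports or mentions the other; deleting the synchronization leaves both concepts intact, which is exactly the modularity the authors are after.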

“Think of concepts as modules that are completely clean and independent. Synchronizations then act like contracts—they say exactly how concepts are supposed to interact. That’s powerful because it makes the system both easier for humans to understand and easier for tools like LLMs to generate correctly,” says Jackson.

“Why can’t we read code like a book? We believe that software should be legible and written in terms of our understanding: our hope is that concepts map to familiar phenomena, and synchronizations represent our intuition about what happens when they come together,” says Meng.

The benefits extend beyond clarity. Because synchronizations are explicit and declarative, they can be analyzed, verified, and of course generated by an LLM. This opens the door to safer, more automated software development, where AI assistants can propose new features without introducing hidden side effects.

In their case study, the researchers assigned features like liking, commenting, and sharing each to a single concept, similar to a microservices architecture but more modular. Without this pattern, these features were spread across many services, making them hard to locate and test. Using the concepts-and-synchronizations approach, each feature became centralized and legible, while the synchronizations spelled out exactly how the concepts interacted.

The study also showed how synchronizations can factor out common concerns like error handling, response formatting, or persistent storage. Instead of embedding these details in every service, a single synchronization can handle each of them once, ensuring consistency across the system. Continuing the illustrative sketch above (same invented Post concept and on_share hook), persistence, for example, could be expressed once as its own synchronization rather than repeated inside every concept:
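```python
import json

# Continuing the hypothetical sketch above (same Post concept and on_share hook).
def sync_persist(post, path):
    """Synchronization: when Post.share fires, persist the post list.
    Storage is handled once here, not embedded in every concept."""
    def rule(author, text):
        with open(path, "w") as f:
            json.dump(post.posts, f)        # one place to change storage policy
    post.on_share.append(rule)
```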

More advanced directions are also possible. Synchronizations could coordinate distributed systems, keeping replicas on different servers in step, or allow shared databases to interact cleanly. Weakening semantics could enable eventual consistency while still preserving clarity at the architectural level.

Jackson sees potential for a broader cultural shift in software development. One idea is the creation of “concept catalogs,” shared libraries of well-tested, domain-specific concepts. Application development could then become less about stitching code together from scratch and more about selecting the right concepts and writing the synchronizations between them.

“Concepts could become a new kind of high-level programming language, with synchronizations as the programs written in that language. It’s a way of making the connections in software visible,” says Jackson. “Today, we hide those connections in code. But if you can see them explicitly, you can reason about the software at a much higher level. You still have to deal with the inherent complexity of features interacting. But now it’s out in the open, not scattered and obscured.”

“Building software on abstractions from underlying computing machines has burdened the world with software that is all too often costly, frustrating, even dangerous to understand and use,” says University of Virginia Associate Professor Kevin Sullivan, who wasn’t involved in the research.

“The impacts (such as in health care) have been devastating. Meng and Jackson flip the script and insist on building interactive software on abstractions from human understanding, which they call ‘concepts.’ They combine expressive mathematical logic and natural language to specify such purposeful abstractions, providing a basis for verifying their meanings, composing them into systems, and refining them into programs fit for human use. It’s a new and important direction in the theory and practice of software design that bears watching.”

“It’s been clear for many years that we need better ways to describe and specify what we want software to do,” adds Thomas Ball, Lancaster University honorary professor and University of Washington affiliate faculty, who also wasn’t involved in the research. “LLMs’ ability to generate code has only added fuel to the specification fire. Meng and Jackson’s work on concept design provides a promising way to describe what we want from software in a modular manner. Their concepts and specifications are well-suited to be paired with LLMs to achieve the designer’s intent.”

Looking ahead, the researchers hope their work can influence how both industry and academia think about software architecture in the age of AI. “If software is to become more trustworthy, we need ways of writing it that make its intentions transparent,” says Jackson. “Concepts and synchronizations are one step toward that goal.”

More information:
Eagon Meng et al, What You See Is What It Does: A Structural Pattern for Legible Software, Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (2025). DOI: 10.1145/3759429.3762628

Citation:
Researchers propose a new model for legible, modular software (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-legible-modular-software.html





