Tech
How do ‘AI detection’ tools actually work? And are they effective?
As nearly half of all Australians say they have recently used artificial intelligence (AI) tools, knowing when and how these tools are being used is becoming more important.
Consultancy firm Deloitte recently partially refunded the Australian government after a report it published was found to contain AI-generated errors.
A lawyer also recently faced disciplinary action after false AI-generated citations were discovered in a formal court document. And many universities are concerned about how their students use AI.
Amid these examples, a range of “AI detection” tools have emerged to help people identify accurate, trustworthy and verified content.
But how do these tools actually work? And are they effective at spotting AI-generated material?
How do AI detectors work?
Several approaches exist, and their effectiveness can depend on which types of content are involved.
Detectors for text often try to infer AI involvement by looking for “signature” patterns in sentence structure, writing style, and the predictability of certain words or phrases being used. For example, the use of “delves” and “showcasing” has skyrocketed since AI writing tools became more available.
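As a toy illustration of the signature idea (not any vendor’s actual method), the sketch below simply counts how often a text leans on words that AI models are known to overuse. The word list and scoring are invented for illustration:

```python
# Toy "signature" detector: scores text by how often it uses words
# AI models are known to overuse. The word list and scoring here are
# illustrative assumptions, not any commercial tool's actual values.
import re

AI_FAVORED = {"delve", "delves", "showcase", "showcasing", "tapestry", "furthermore"}

def ai_signature_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in AI_FAVORED)
    return hits / len(words)  # fraction of "AI-favored" words

sample = "This essay delves into the topic, showcasing several key themes."
print(f"signature score: {ai_signature_score(sample):.3f}")
```

Real detectors go further, modeling how statistically predictable each word is, but the underlying idea of comparing text against expected AI patterns is the same.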
However, the difference between AI and human writing patterns is shrinking. This means signature-based tools can be highly unreliable.
Detectors for images sometimes work by analyzing embedded metadata, which some AI tools add to the image file.
For example, the Content Credentials inspect tool allows people to view how a user has edited a piece of content, provided it was created and edited with compatible software. Like text, images can also be compared against verified datasets of AI-generated content (such as deepfakes).
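To give a sense of how simple metadata checks can be, here is a minimal sketch using the Pillow imaging library. The generator markers it scans for are hypothetical examples, and it finds nothing if the metadata has been stripped (by a screenshot or re-save, for instance):

```python
# Minimal sketch of metadata-based checking using Pillow.
# GENERATOR_MARKERS is a hypothetical list; real provenance systems
# such as Content Credentials rely on signed C2PA manifests instead.
from PIL import Image

GENERATOR_MARKERS = ["dall-e", "midjourney", "stable diffusion", "imagen"]

def scan_image_metadata(path: str) -> list[str]:
    img = Image.open(path)
    fields = dict(img.info)  # PNG text chunks and similar metadata
    fields.update({str(tag): value for tag, value in img.getexif().items()})
    findings = []
    for key, value in fields.items():
        entry = f"{key}={value}".lower()
        findings += [f"{key}: mentions '{m}'" for m in GENERATOR_MARKERS if m in entry]
    return findings

print(scan_image_metadata("suspect.png") or "no generator metadata found")
```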
Finally, some AI developers have started adding watermarks to the outputs of their AI systems. These are hidden patterns in any kind of content which are imperceptible to humans but can be detected by the AI developer. None of the large developers have shared their detection tools with the public yet, though.
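SynthID and its peers are proprietary, but published research gives a flavor of how statistical text watermarks work: the generator is nudged toward a secret, context-dependent “green list” of words, and the detector checks whether a suspicious share of words landed on that list. A heavily simplified sketch, with every parameter invented for illustration:

```python
# Toy text watermark detector in the spirit of published "green list"
# schemes (e.g., Kirchenbauer et al., 2023). This is NOT SynthID or any
# vendor's real algorithm; the vocabulary handling is deliberately naive.
import hashlib
import random

def green_list(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a RNG from the previous word so generator and detector derive
    # the same pseudo-random half of the (fixed-order) vocabulary.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermark_z_score(text: str, vocab: list[str]) -> float:
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev, vocab))
    n = max(len(pairs), 1)
    # Unwatermarked text lands on the green list about half the time by
    # chance; a large positive z-score suggests deliberate steering.
    return (hits - 0.5 * n) / (0.25 * n) ** 0.5
```

A watermarking generator would favor green-listed words as it writes, so detection needs only the secret seeding scheme, not access to the original model.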
Each of these methods has its drawbacks and limitations.
How effective are AI detectors?
The effectiveness of AI detectors can depend on several factors. These include which tools were used to make the content and whether the content was edited or modified after generation.
The tools’ training data can also affect results.
For example, key datasets used to detect AI-generated pictures lack enough full-body pictures of people, as well as images of people from certain cultures. This means detection accuracy is already limited for many kinds of content.
Watermark-based detection can be quite good at detecting content made by AI tools from the same company. For example, if you use one of Google’s AI models such as Imagen, Google’s SynthID watermark tool claims to be able to spot the resulting outputs.
But SynthID is not publicly available yet. It also doesn’t work if, for example, you generate content using ChatGPT, which isn’t made by Google. Interoperability across AI developers is a major issue.
AI detectors can also be fooled when the output is edited. For example, if you use a voice cloning app and then add noise or reduce the quality (say, by compressing the file), this can trip up voice AI detectors. The same is true of AI image detectors.
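As a concrete, toy illustration of that kind of tampering (not a description of any specific attack), even a few lines of NumPy can shift the statistics an audio detector relies on:

```python
# Toy post-processing of a cloned voice: add white noise at a chosen
# signal-to-noise ratio, then crudely downsample. All parameters are
# illustrative; evasion and detection are a moving target.
import numpy as np

def degrade_audio(audio: np.ndarray, sample_rate: int,
                  snr_db: float = 20.0, target_rate: int = 8000):
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noisy = audio + np.random.randn(audio.size) * np.sqrt(noise_power)
    step = max(sample_rate // target_rate, 1)
    return noisy[::step], sample_rate // step  # naive decimation, no filtering

tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # 1-second test tone
degraded, new_rate = degrade_audio(tone, 48000)
```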
Explainability is another major issue. Many AI detectors will give the user a “confidence estimate” of how certain it is that something is AI-generated. But they usually don’t explain their reasoning or why they think something is AI-generated.
It is important to realize that it is still early days for AI detection, especially when it comes to automatic detection.
A good example of this can be seen in recent attempts to detect deepfakes. The winner of Meta’s Deepfake Detection Challenge identified four out of five deepfakes. However, the model was trained on the same data it was tested on—a bit like having seen the answers before it took the quiz.
When tested against new content, the model’s success rate dropped. It only correctly identified three out of five deepfakes in the new dataset.
All this means AI detectors can and do get things wrong. They can result in false positives (claiming something is AI generated when it’s not) and false negatives (claiming something is human-generated when it’s not).
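To make those two error types concrete, here is a tiny worked example with invented labels and verdicts:

```python
# False positive / false negative rates from a detector's verdicts.
# The labels and predictions below are invented purely for illustration.
truth = ["ai", "ai", "human", "human", "human"]    # actual origin
verdict = ["ai", "human", "ai", "human", "human"]  # detector's output

false_pos = sum(t == "human" and v == "ai" for t, v in zip(truth, verdict))
false_neg = sum(t == "ai" and v == "human" for t, v in zip(truth, verdict))

print(f"false positive rate: {false_pos / truth.count('human'):.0%}")  # 33%
print(f"false negative rate: {false_neg / truth.count('ai'):.0%}")     # 50%
```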
For the users involved, these mistakes can be devastating—such as a student whose essay is dismissed as AI-generated when they wrote it themselves, or someone who mistakenly believes an AI-written email came from a real human.
It’s an arms race as new technologies are developed or refined, and detectors are struggling to keep up.
Where to from here?
Relying on a single tool is problematic and risky. It’s generally safer and better to use a variety of methods to assess the authenticity of a piece of content.
You can do so by cross-referencing sources and double-checking facts in written content. Or for visual content, you might compare suspect images to other images purported to be taken during the same time or place. You might also ask for additional evidence or explanation if something looks or sounds dodgy.
But ultimately, trusted relationships with individuals and institutions will remain one of the most important factors when detection tools fall short or other options aren’t available.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Tech
Mark Zuckerberg Tries to Play It Safe in Social Media Addiction Trial Testimony
Zuckerberg repeatedly fell back on accusing Lanier of “mischaracterizing” his previous statements. When it came to emails, Zuckerberg typically objected based on how old the message was, or his lack of familiarity with the Meta employees involved. “I don’t think so, no,” he replied when asked whether he knew Karina Newton, Instagram’s head of public policy in 2021. And Zuckerberg never failed to point out when he wasn’t actually on an email thread entered as evidence.
Perhaps anticipating these detached and repetitive talking points from Zuckerberg—who claimed over and over that any increased engagement from a user on Facebook or Instagram merely reflected the “value” of those apps—Lanier early on suggested that the CEO had been coached to address these issues. “You have extensive media training,” he said. “I think I’m sort of well-known to be pretty bad at this,” Zuckerberg protested, getting a rare laugh from the courtroom. Lanier went on to present Meta documents outlining communication strategies for Zuckerberg, describing his team as “telling you what kind of answers to give,” including in a context such as testifying under oath. “I’m not sure what you’re trying to imply,” Zuckerberg said. In the afternoon, Meta counsel Paul Schmidt returned to that line of questioning, asking if Zuckerberg had to speak to the media because of his role as head of a major business. “More than I would like,” Zuckerberg said, to more laughter.
In an even more, well, “meta” moment after the court had returned from lunch, Judge Kuhl struck a stern tone by warning everyone in the room that anyone wearing “glasses that record”—such as the AI-equipped Oakley and Ray-Ban glasses sold by Meta for up to $499—had to remove them while attending the proceedings, where both video and audio recordings are prohibited.
K.G.M.’s suit and the others to follow are novel in their sidestepping of Section 230, a law that has protected tech companies from liability for content created by users on their platforms. As such, Zuckerberg stuck to a playbook that framed the lawsuit as a fundamental misunderstanding of how Meta works. When Lanier presented evidence that Meta teams were working on increasing the minutes users spent on their platforms each day, Zuckerberg countered that the company had long ago moved on from those objectives, or that those numbers were not even “goals” per se, just metrics of competitiveness within the industry. When Lanier questioned if Meta was merely hiding behind an age limit policy that was “unenforced” and maybe “unenforceable,” per an email from Nick Clegg, Meta’s former president of global affairs, Zuckerberg calmly deflected with a narrative about people circumventing their safeguards despite continual improvements on that front.
Lanier, though, could always return to K.G.M., who he said had signed up for Instagram at the age of 9, some five years before the app started asking users for their birthday in 2019. While Zuckerberg could more or less brush off internal data on, say, the need to convert tweens into loyal teen users, or Meta’s apparent rejection of the alarming expert analysis it had commissioned on the risks of Instagram’s “beauty filters,” he didn’t have a prepackaged response to Lanier’s grand finale: a billboard-sized tarp printed with hundreds of posts from K.G.M.’s Instagram account, which took up half the width of the courtroom and required seven people to hold. As Zuckerberg blinked hard at the vast display, visible only to himself, Kuhl, and the jury, Lanier said it was a measure of the sheer amount of time K.G.M. had poured into the app. “In a sense, y’all own these pictures,” he added. “I’m not sure that’s accurate,” Zuckerberg replied.
When Lanier had finished and Schmidt was given the chance to set Zuckerberg up for an alternate vision of Meta as a utopia of connection and free expression, the founder quickly gained his stride again. “I wanted people to have a good experience with it,” he said of the company’s platforms. Then, a moment later: “People shift their time naturally according to what they find valuable.”
Tech
The Best Bose Noise-Canceling Headphones Are Discounted Right Now
Bose helped write the book on noise canceling when it began developing the technology way back in the 1970s. Lately, the brand has been on a tear, with the goal of creating the ultimate in sonic solitude. The QuietComfort Ultra Gen 2 are Bose’s latest and greatest creation, offering among the very best noise canceling we’ve ever tested.
Just as importantly, they’re currently on sale for $50 off. Now, this might not seem like a huge discount on a $450 pair of headphones, but this is the lowest price we’ve seen on these headphones outside of a major shopping holiday. So if you missed your chance during Black Friday but you have a spring break trip to Mexico or Hawaii on the calendar, this is your best bet.
The Best Noise Canceling Headphones Are on Sale
I’ve wondered over the last few years whether the best noise cancelers even needed another potency upgrade. Previous efforts like Sony’s WH-1000XM5, Apple’s AirPods Max, and Bose’s own QuietComfort 45 already offered enough silence that my own wife gives me a jump scare when she walks up behind me.
Then I had a kid.
Bose’s aptly named QuietComfort Ultra do a fantastic job quelling the many squeaks, squawks, and adorable pre-nap protests my baby makes. Now that my wife and I have turned my solo office into a shared space, I can go about my business in near-total sonic freedom, even as she sits in on a loud Zoom call.
In testing, we found Sony’s latest WH-1000XM6 offered a slight bump in noise canceling performance over Bose’s latest, due in part to their zippy response time when attacking unwanted sounds. But both were within a hair of each other when tested across frequencies. I prefer Bose’s pair for travel, due to their more cushy design that lets me listen for a full cross-country flight in luxe comfort.
Upgrades to the latest generation, like the ability to put the headphones to sleep and quickly wake them, make them noticeably more intuitive to use daily. The new built-in USB-C audio interface lets you listen to lossless audio directly from supported devices, a nice touch now that Spotify has joined Apple Music and other services with lossless audio support.
Speaking of audio, the QC Ultra Gen 2’s performance is impressive, providing clear and crisp detail and dialog, with a lively touch that brings some added excitement to instruments like percussion or zippy guitar tones. It’s a lovely overall presentation. I’m not a huge fan of the new spatial audio mode (what Bose calls Cinema mode), but it’s always nice to have options.
These headphones often bounce between full price and this $50 discount, so if you’ve been waiting for the dip, now’s the time to buy. If you deal with daily distractions like I do, whether at home or in a busy office space, you’ll appreciate the latest level of sound-smashing solitude Bose’s best noise cancelers yet can provide.
Tech
This Defense Company Made AI Agents That Blow Things Up
Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI’s agents are designed to seek and destroy things in the physical world with exploding drones.
In a recent demonstration, held at an undisclosed military base in central California, Scout AI’s technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.
“We need to bring next-generation AI to the military,” Colby Adcock, Scout AI’s CEO, told me in a recent interview. (Adcock’s brother, Brett Adcock, is the CEO of Figure AI, a startup working on humanoid robots). “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agentic assistant to being a warfighter.”
Adcock’s company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that harnessing AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls.
“It’s good for defense tech startups to push the envelope with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That’s exactly what they should be doing if the US is going to lead in military adoption of AI.”
Horowitz also notes, though, that harnessing the latest AI advances can prove particularly difficult in practice.
Large language models are inherently unpredictable, and AI agents—like the ones that control the popular AI assistant OpenClaw—can misbehave when given even relatively benign tasks like ordering goods online. Horowitz says it may be especially hard to demonstrate that such systems are robust from a cybersecurity standpoint—something that would be required for widespread military use.
Scout AI’s recent demo involved several steps where AI had free rein over combat systems.
At the outset of the mission, a natural-language command was fed into a Scout AI system known as Fury Orchestrator.
A relatively large AI model with over 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped computer on-site, interprets the initial command. Scout AI uses an undisclosed open source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicle and the drones involved in the exercise. The smaller models also act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles’ movements.
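Scout AI has not published its stack, but the pattern described here, a large orchestrator model delegating to smaller on-board models, is a standard hierarchical agent design. The following sketch is hypothetical; every class and method name is invented:

```python
# Hypothetical sketch of the hierarchical agent pattern described above.
# All names are invented; Scout AI's actual architecture is not public.
from dataclasses import dataclass, field

@dataclass
class VehicleAgent:
    """Small on-board model: turns sub-tasks into low-level control calls."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, subtask: str) -> str:
        # In a real system, a ~10B-parameter model would plan here and
        # issue commands to lower-level navigation and actuation systems.
        self.log.append(subtask)
        return f"{self.name}: completed '{subtask}'"

@dataclass
class Orchestrator:
    """Large model: decomposes the mission and dispatches to vehicle agents."""
    agents: dict

    def run(self, mission: str) -> list:
        # A 100B+ model would decompose the mission text; hard-coded here.
        plan = [("ugv", f"search the area for: {mission}"),
                ("drone1", "overwatch sector A"),
                ("drone2", "overwatch sector B")]
        return [self.agents[who].execute(task) for who, task in plan]

fleet = {n: VehicleAgent(n) for n in ("ugv", "drone1", "drone2")}
for line in Orchestrator(fleet).run("hidden truck"):
    print(line)
```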
Seconds after receiving marching orders, the ground vehicle zipped off along a dirt road that winds between brush and trees. A few minutes later, the vehicle came to a stop and dispatched the pair of drones, which flew into the area where it had been instructed that the target was waiting. After spotting the truck, an AI agent running on one of the drones issued an order to fly toward it and detonate an explosive charge just before impact.