Scams and frauds: Here are tactics criminals use on you in the age of AI and cryptocurrencies
Scams are nothing new; fraud has existed for as long as human greed. What changes are the tools.
Scammers thrive on exploiting vulnerable, uninformed users, and they adapt to whatever technologies or trends dominate the moment. In 2025, that means AI, cryptocurrencies and stolen personal data are their weapons of choice.
And, as always, the duty, fear and hope of their targets provide openings. Today, duty often means following instructions from bosses or co-workers, whom scammers can impersonate. Fear is that a loved one, whom scammers can also impersonate, is in danger. And hope is often for an investment scheme or job opportunity to pay off.
AI-powered scams and deepfakes
Artificial intelligence is no longer niche: it’s cheap, accessible and effective. While businesses use AI for advertising and customer support, scammers exploit the same tools to mimic reality with disturbing precision.
Criminals are using AI-generated audio or video to impersonate CEOs, managers or even family members in distress. Employees have been tricked into transferring money or leaking sensitive data. Over 105,000 such deepfake attacks were recorded in the U.S. in 2024, and they cost victims more than US$200 million in the first quarter of 2025 alone. Victims often cannot distinguish synthetic voices or faces from real ones.
Fraudsters are also using emotional manipulation. The scammers make phone calls or send convincing AI-written texts posing as relatives or friends in distress. Elderly victims in particular fall prey when they believe a grandchild or other family member is in urgent trouble. The Federal Trade Commission has outlined how scammers use fake emergencies to pose as relatives.
Cryptocurrency scams
Crypto remains the Wild West of finance—fast, unregulated and ripe for exploitation.
Pump-and-dump scammers artificially inflate the price of a cryptocurrency through hype on social media to lure investors with promises of huge returns—the pump—and then sell off their holdings—the dump—leaving victims with worthless tokens.
Pig butchering is a hybrid of romance scams and crypto fraud. Scammers build trust over weeks or months before persuading victims to invest in fake crypto platforms. Once the scammers have extracted enough money from the victim, they vanish.
Scammers also use cryptocurrencies as a means of extracting money from people in impersonation scams and other forms of fraud. For example, scammers direct victims to bitcoin ATMs to deposit large sums of cash and convert it to cryptocurrency, which is difficult to trace and recover, as payment for fictitious fines.
Phishing, smishing, tech support and jobs
Old scams don’t die; they evolve.
Phishing and smishing have been around for years. Victims are tricked into clicking links in emails or text messages, leading to malware downloads, credential theft or ransomware attacks. AI has made these lures eerily realistic, mimicking corporate tone, grammar and even video content.
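These lures often succeed because a link’s visible text and its actual destination can differ, and even the destination can be dressed up to look legitimate. The short Python sketch below (the domain names are invented for illustration) shows how the registered domain sits at the right-hand end of the hostname, so a familiar brand name on the left proves nothing:

```python
from urllib.parse import urlsplit

def true_host(url: str) -> str:
    """Return the hostname a link actually points to."""
    return urlsplit(url).hostname or ""

# Hypothetical phishing link: the brand name appears on the left,
# but the registered domain is evil.example, not paypal.com.
link = "https://paypal.com.account-verify.evil.example/login"
host = true_host(link)
print(host)  # paypal.com.account-verify.evil.example
print(host == "paypal.com" or host.endswith(".paypal.com"))  # False
```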
Tech support scams often start with pop-ups on computer screens that warn of viruses or identity theft and urge users to call a number; sometimes they begin with a cold call to the victim. Once the victim is on the line with the fake tech support, the scammers persuade them to grant remote access to their supposedly compromised computer. Once inside, scammers install malware, steal data, demand payment or all three.
Fake websites and listings are another common scam. Fraudulent sites impersonating universities or ticket sellers trick victims into paying for fake admissions, concerts or goods.
In one case, a website for “Southeastern Michigan University” came online and began offering admissions details. No such university exists. Eastern Michigan University filed a complaint alleging that the site was copying its website and defrauding unsuspecting victims.
The rise of remote and gig work has opened new fraud avenues.
Victims are offered fake jobs with promises of high pay and flexible hours. In reality, scammers extract “placement fees” or harvest sensitive personal data such as Social Security numbers and bank details, which are later used for identity theft.
How you can protect yourself
Technology has changed, but the basic principles remain the same: Never click on suspicious links or download attachments from unknown senders, and enter personal information only if you are sure the website is legitimate. Avoid third-party apps and unsolicited links; legitimate businesses have official apps and websites of their own.
Enable two-factor authentication wherever possible; it protects your accounts even if a password is stolen. Keep software updated to patch security holes. Most software can update automatically or will warn you when a patch is available.
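To see why a stolen password alone is not enough against two-factor authentication, consider the widely used time-based one-time password (TOTP) standard: the six-digit code is derived from a shared secret and the current time, so it changes every 30 seconds and is useless to a thief minutes later. Here is a minimal sketch using only Python’s standard library (the secret below is a placeholder, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; real secrets are provisioned by the service, usually via QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```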
Remember that a legitimate business will never contact you out of the blue to demand personal information or a money transfer. Such requests are a red flag.
Relationships are a trickier matter. The state of California provides details on how people can avoid being victims of pig butchering.
Technology has supercharged age-old fraud. AI makes deception virtually indistinguishable from reality, crypto enables anonymous theft, and the remote-work era expands opportunities to trick people. The constant: Scammers prey on trust, urgency and ignorance. Awareness and skepticism remain your best defense.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation:
Scams and frauds: Here are tactics criminals use on you in the age of AI and cryptocurrencies (2025, September 18)
retrieved 18 September 2025
from https://techxplore.com/news/2025-09-scams-frauds-tactics-criminals-age.html
Study uncovers oxygen trapping as cause of voltage loss in sodium cathodes
by Li Jingxin and Zhao Weiwei, Hefei Institutes of Physical Science, Chinese Academy of Sciences
A research team led by Prof. Li Chao from East China Normal University has uncovered the origin of voltage decay in P2-type layered oxide cathodes. Using electron paramagnetic resonance (EPR) spectroscopy at the Steady-State Strong Magnetic Field Facility (SHMFF) of the Hefei Institutes of Physical Science, Chinese Academy of Sciences, the team tracked the dynamic evolution of oxygen species and clarified their direct role in structural degradation.
The findings, published in Advanced Energy Materials, provide new guidance for designing more stable sodium-ion cathodes.
P2-type sodium layered oxides (NaxAyTM1-yO2) have long been considered more resistant to voltage decay during anion redox reactions than their Li-rich O3-type counterparts. However, the team observed significant voltage decay in the high-Na-content P2-type Na0.8Li0.26Mn0.74O2 during cycling, an anomaly that existing theories could not explain.
The researchers identified a clear sequence of oxygen transformations upon charging, eventually leading to the formation of molecular O2. While early cycles showed that this oxygen could still be reduced during discharge, with continued cycling a growing fraction of O2 remained trapped in the discharged state. This irreversible accumulation was pinpointed as the primary driver of voltage decay and capacity loss.
In this study, EPR proved critical as it enabled noninvasive monitoring of oxygen redox behavior and revealed how reactive oxygen intermediates gradually evolve and accumulate during cycling.
EPR further exposed local structural changes: signals associated with spin interactions between manganese and oxidized oxygen became more pronounced with cycling, consistent with the development of Mn-rich and Li-rich domains. These segregation effects, exacerbated by unreduced O2, aggravated the performance degradation.

Importantly, the team also explained why high sodium-content cathodes behave differently from their low sodium-content counterparts. In high-Na materials, insufficient interlayer spacing allows migration and vacancy growth, making them vulnerable to oxygen trapping.
By contrast, low-Na cathodes with larger spacing remain stable and show no evidence of trapped oxygen.
This study highlights the unique value of EPR in battery research and suggests that bulk modification strategies are key to mitigating voltage decay and developing high-performance cathodes for next-generation batteries, according to the team.
More information:
Chunjing Hu et al, Accumulation of Unreduced Molecular O2 Explains Abnormal Voltage Decay in P2-Type Layered Oxide Cathode, Advanced Energy Materials (2025). DOI: 10.1002/aenm.202503491
Provided by
Hefei Institutes of Physical Science, Chinese Academy of Sciences
Citation:
Study uncovers oxygen trapping as cause of voltage loss in sodium cathodes (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-uncovers-oxygen-voltage-loss-sodium.html
New testing scheme could work for chips and clinics
Diagnostic testing is big business. The global market for testing semiconductors for defects is estimated at $39 billion in 2025. For medical lab tests, the market is even bigger: $125 billion.
Both kinds of tests have something in common, says Rohan Ghuge, assistant professor of decision science in the information, risk, and operations management department at Texas McCombs. They involve complex systems with vast numbers of components, whether they’re evaluating computer chips or human bodies.
New research from Texas McCombs suggests an approach to testing complex systems that might save time by eliminating unnecessary and expensive steps. “Nonadaptive Stochastic Score Classification and Explainable Half-Space Evaluation” is published in Operations Research.
Currently, a common shortcut is to conduct sequences of tests. Instead of testing every component—which isn’t practical for complex systems—a clinician might test certain components first. Each round rules out some possible problems and sets up a new round of tests.
That approach has time-consuming drawbacks, Ghuge says. “First, you might check the vital signs. Then, you come back the next day and do an ECG [electrocardiogram], then we do blood work, step by step. That’s going to take a lot of time, which we don’t really want to waste for a patient.”
What if, he wondered, a single round of tests could provide the most critical information in a fraction of the time? What if the same protocol could prove useful for chips or in clinics?
“We want something that’s highly scalable, deployable, and uniform,” he says. “You need to have it in a way that can be deployed on thousands of kinds of chips, or a first step that you give to clinicians for every patient of that kind.”
Merging success and failure
The key, Ghuge theorized, was to choose a small number of tests that could quickly classify a system’s risk level: low, medium, or high. With Anupam Gupta of New York University and Viswanath Nagarajan of the University of Michigan, he set out to design such a protocol.
Their solution was to combine two sets of tests with opposite goals. One set diagnoses whether a system is working, while the other diagnoses whether it’s failing. Together, they can provide a snapshot of risk.
“You create two lists, say, a success list and a failure list,” Ghuge says. “You combine a fraction of the first list and a fraction of the second list. You want to come up with a single batch of tests that tell you at the same time whether the system is working or failing.”
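The sketch below captures this interleaving idea in miniature (it is an illustration, not the paper’s exact algorithm; the test pass probabilities and risk thresholds are invented). Each test passes or fails, the number of passing tests is the system’s score, and the score is classified against fixed thresholds. Because the test order is fixed in advance, the procedure is nonadaptive, yet it can stop as soon as the score’s remaining range falls inside a single risk class:

```python
import random

def classify(score: int, thresholds: list[int]) -> int:
    """Map a score to a risk class: 0 = low, 1 = medium, 2 = high."""
    return sum(score >= t for t in thresholds)

def interleaved_order(p: list[float]) -> list[int]:
    """Merge a 'success list' (likeliest passes first) with a
    'failure list' (likeliest failures first), alternating between them."""
    success = sorted(range(len(p)), key=lambda i: -p[i])
    failure = sorted(range(len(p)), key=lambda i: p[i])
    order, seen = [], set()
    for pair in zip(success, failure):
        for i in pair:
            if i not in seen:
                seen.add(i)
                order.append(i)
    return order

def run(p: list[float], thresholds: list[int], rng: random.Random):
    outcomes = [rng.random() < pi for pi in p]
    lo, hi = 0, len(p)  # bounds on the final score
    for n_run, i in enumerate(interleaved_order(p), start=1):
        if outcomes[i]:
            lo += 1     # one more confirmed pass
        else:
            hi -= 1     # one fewer possible pass
        if classify(lo, thresholds) == classify(hi, thresholds):
            return classify(lo, thresholds), n_run  # class pinned down early
    return classify(lo, thresholds), len(p)

# Invented example: seven tests, risk classes split at scores 3 and 6.
risk, used = run([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3], [3, 6], random.Random(0))
print(f"risk class {risk} determined after {used} of 7 tests")
```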
An existing medical example, he says, is the HEART Score. It rates five factors, such as age and ECG results, to quickly assess the risk that a patient with chest pain will have a major cardiac event within six weeks.
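In the same framing, the HEART Score is a fixed batch of five “tests”: each factor is rated 0 to 2, and the total is classified against the published bands (0-3 low, 4-6 moderate, 7-10 high risk). The patient values below are invented for illustration:

```python
# Hypothetical patient; each HEART factor is scored 0, 1 or 2 by a clinician.
heart = {"History": 1, "ECG": 0, "Age": 2, "Risk factors": 1, "Troponin": 0}
score = sum(heart.values())
band = "low" if score <= 3 else "moderate" if score <= 6 else "high"
print(f"HEART score {score}: {band} risk")  # HEART score 4: moderate risk
```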
In simulations, Ghuge tested his algorithm against a sequential one on the same data sets. His algorithm delivered results more than 100 times as fast as the sequential one, at a cost that averaged 22% higher.
“The tests are a bit more costly,” he says. “The trade-off is that you can get them done a lot faster.”
He also notes that a single batch of tests might reduce setup costs compared with the expense of setting up one test after another.
A next step, Ghuge hopes, is to try out his algorithm on real-life testing. A broadband internet network, such as Google Fiber or Spectrum, might use it for daily testing, to rapidly diagnose whether a system or subsystem is working.
“I come from a more theoretical background that focuses on the right model,” he says. “There’s a gap between that and applying it in practice. I’m excited to speak with people, to talk to practitioners and see if these can be applied.”
More information:
Rohan Ghuge et al, Nonadaptive Stochastic Score Classification and Explainable Half-Space Evaluation, Operations Research (2025). DOI: 10.1287/opre.2023.0431
Citation:
New testing scheme could work for chips and clinics (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-scheme-chips-clinics.html
Fake or the real thing? How AI can make it harder to trust the pictures we see
A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.
Using the AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel created highly realistic images of both fictional and famous faces, including celebrities.
They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.
Across four separate experiments, the researchers noted that adding comparison photos or the participants’ prior familiarity with the faces provided only limited help.
The research has been published in the journal Cognitive Research: Principles and Implications. The team say their findings highlight a new level of “deepfake realism”: AI can now produce convincing fake images of real people, which could erode trust in visual media.
Professor Jeremy Tree, from the School of Psychology, said, “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.
“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency.”
In one experiment, involving participants from the US, Canada, the UK, Australia and New Zealand, subjects were shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos shows just how plausible they were.
Another experiment asked participants to tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the results showed just how hard it can be to spot the authentic version.
The researchers say AI’s ability to produce novel/synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence public opinion of both the identity and the brand/organization they are portrayed as supporting.
Professor Tree added, “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes. That is why we urgently need to find new ways to detect them.
“While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”
More information:
Robin S. S. Kramer et al, AI-generated images of familiar faces are indistinguishable from real photographs, Cognitive Research: Principles and Implications (2025). DOI: 10.1186/s41235-025-00683-w
Citation:
Fake or the real thing? How AI can make it harder to trust the pictures we see (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-fake-real-ai-harder-pictures.html