Tech
Scammers in China Are Using AI-Generated Images to Get Refunds
I don’t want to admit it, but I did spend a lot of money online this holiday shopping season. And unsurprisingly, some of those purchases didn’t meet my expectations. A photobook I bought was damaged in transit, so I snapped a few pictures, emailed them to the merchant, and got a refund. Online shopping platforms have long depended on photos submitted by customers to confirm that refund requests are legitimate. But generative AI is now starting to break that system.
A Pinch Too Suspicious
On the Chinese social media app RedNote, WIRED found at least a dozen posts from ecommerce sellers and customer service representatives complaining about allegedly AI-generated refund claims they’ve received. In one case, a customer complained that the bed sheet they purchased was torn to pieces, but the Chinese characters on the shipping label looked like gibberish. In another, the buyer sent a picture of a coffee mug with cracks that looked like paper tears. “This is a ceramic cup, not a cardboard cup. Who could tear apart a ceramic cup into layers like this?” the seller wrote.
The merchants reported that there are a few product categories where AI-generated damage photos are being abused the most: fresh groceries, low-cost beauty products, and fragile items like ceramic cups. Sellers often don’t ask customers to return these goods before issuing a refund, making them more prone to return scams.
In November, a merchant who sells live crabs on Douyin, the Chinese version of TikTok, received a photo from a customer that made it look like most of the crabs she bought arrived already dead, while two others had escaped. The buyer even sent videos showing the dead crabs being poked by a human finger. But something was off.
“My family has farmed crabs for over 30 years. We’ve never seen a dead crab whose legs are pointing up,” Gao Jing, the seller, said in a video she later posted on Douyin. But what ultimately gave away the con was the sexes of the crabs. There were two males and four females in the first video, while the second clip had three males and three females. One of them also had nine instead of eight legs.
Gao later reported the fraud to the police, who determined the videos were indeed fabricated and detained the buyer for eight days, according to a police notice Gao shared online. The case drew widespread attention on Chinese social media, in part because it was the first known AI refund scam of its kind to trigger a regulatory response.
Lowering Barriers
This problem isn’t unique to China. Forter, a New York-based fraud detection company, estimates that AI-doctored images used in refund claims have increased by more than 15 percent since the start of the year, and are continuing to rise globally.
“This trend started in mid-2024, but has accelerated over the past year as image-generation tools have become widely accessible and incredibly easy to use,” says Michael Reitblat, CEO and cofounder of Forter. He adds that the AI doesn’t have to get everything right, as frontline retail workers and refund review teams may not have the time to closely scrutinize each picture.
What’s an E-Bike? California Wants You to Know
A few months ago, a family came into Pasadena Cyclery in Pasadena, California, for a repair on what they thought was their teenager’s e-bike. “I can’t fix that here,” Daniel Purnell, a store manager and technician, remembers telling them. “That’s a motorcycle.” The mother got upset. She didn’t realize that what she thought was an e-bike could go much faster, perhaps up to 55 miles per hour.
“There’s definitely an education problem,” Purnell says. In California, bike advocates are pushing a new bill designed to clear up that confusion around what counts as an electric bicycle—and what doesn’t.
It’s a tricky balance. On one hand, backers want to allow riders access to new, faster, and more affordable non-car transportation options, ones that don’t require licenses and are emission-free. On the other hand, people, and especially kids, seem to be getting hurt. E-bike-related injuries jumped more than 1,020 percent nationwide between 2020 and 2024, according to hospital data, though it’s not clear if the stats-keepers can routinely distinguish between e-bikes and their faster, “e-moto” cousins. (Moped and powered-assisted cycle injuries jumped 67 percent in that same period.)
“We’re overdue to have better e-bike regulation,” says California state senator Catherine Blakespear, a Democrat who sponsored the bill and represents parts of North County in San Diego. “This has been an ongoing and growing issue for years.”
Senate Bill 1167 would make it illegal for retailers to label higher-powered electric vehicles as e-bikes. It would clarify that e-bikes have fully operative pedals and electric motors that don’t exceed 750 watts, enough to hit top speeds between 20 and 28 mph.
“We’re not against these devices,” says Kendra Ramsey, the executive director of the California Bicycle Coalition, which represents riders and is promoting the legislation. “People think they’re e-bikes and they’re not really e-bikes.”
Bill backers say they hope the fix, if it passes, makes a difference, especially for teenagers, who love the freedom that electric motors give them but can get into trouble if something goes wrong at higher speeds. Kids 17 and younger accounted for 20 percent of US e-bike injuries from 2020 to 2024, roughly in line with their share of the total population. But headlines—and the laws that follow them—have focused on teen injuries and even deaths.
There are no national laws governing e-bike riding. But bike backers spent years moving from state to state to pass laws that sort e-bikes into three classes: Class 1, which offers pedal assist only while the rider is actually pedaling and tops out at 20 mph; Class 2, which has a throttle that works without pedaling but still reaches only 20 mph; and Class 3, which uses pedal assist to move up to 28 mph. Plenty of states and cities restrict the most powerful Class 3 bikes to people older than 16. (In a complicated twist, some e-bikes have different “modes,” allowing riders to toggle between Class 2 and Class 3.)
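The three-class system, plus the SB 1167-style definition (fully operative pedals, motor no more than 750 watts), can be sketched as a simple decision rule. This is a hypothetical illustration of the rules as the article describes them; the function, its field names, and the “not an e-bike” fallback are invented here, not any regulator’s actual tooling.

```python
# Illustrative classifier for the three-class e-bike system described above.
# Thresholds follow the article: <= 750 W motor and working pedals to count as
# an e-bike at all; 20 mph caps for Class 1 (pedal assist) and Class 2
# (throttle); 28 mph pedal-assist cap for Class 3.
from dataclasses import dataclass

@dataclass
class TwoWheeler:
    has_pedals: bool
    motor_watts: int
    throttle_top_mph: float  # top speed without pedaling (0 = no throttle)
    assist_top_mph: float    # top speed with pedal assist engaged

def classify(v: TwoWheeler) -> str:
    if not v.has_pedals or v.motor_watts > 750:
        return "not an e-bike"  # e-moto / motorcycle territory
    if v.throttle_top_mph == 0 and v.assist_top_mph <= 20:
        return "Class 1"        # pedal assist only, up to 20 mph
    if v.throttle_top_mph <= 20 and v.assist_top_mph <= 20:
        return "Class 2"        # throttle works without pedaling, capped at 20 mph
    if v.throttle_top_mph == 0 and v.assist_top_mph <= 28:
        return "Class 3"        # pedal assist up to 28 mph
    return "not an e-bike"      # outside the three-class system

print(classify(TwoWheeler(True, 500, 0, 20)))    # Class 1
print(classify(TwoWheeler(True, 3000, 55, 55)))  # not an e-bike
```

A 55-mph machine like the one brought into Pasadena Cyclery fails both the wattage and speed checks, which is exactly the distinction the bill is trying to make visible to buyers.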
Last year, researchers visited 19 San Francisco Bay Area middle and high schools and found that 88 percent of the electric two-wheeled devices parked there were so high-powered and high-speed that they didn’t comply with the three-class system at all.
E-bikes have clearly struck a chord with state policymakers: At least 10 bills introduced this year deal with e-bikes, according to Ramsey.
Some bike advocates believe injuries have less to do with e-bikes than with “e-motos,” a category that’s less likely to appear in retail stores or the sort of social media ads attracting teens to the tech. These have more powerful motors and can travel in excess of 30 mph. Vehicles like the Surron Ultra Bee, which can hit top speeds of 55 mph, or the Tuttio ICT, which can hit 50, are often marketed by retailers as “electric bikes.” Because so many sales happen online, it can be hard for people, and especially parents, to know what they’re getting into.
OpenAI Fires an Employee for Prediction Market Insider Trading
OpenAI has fired an employee following an investigation into their activity on prediction market platforms including Polymarket, WIRED has learned.
Fidji Simo, OpenAI’s CEO of Applications, disclosed the termination in an internal message to employees earlier this year. The employee, she said, “used confidential OpenAI information in connection with external prediction markets (e.g. Polymarket).”
“Our policies prohibit employees from using confidential OpenAI information for personal gain, including in prediction markets,” says spokesperson Kayla Wood. OpenAI has not revealed the name of the employee or the specifics of their trades.
Evidence suggests that this was not an isolated event. Polymarket runs on the Polygon blockchain network, so its trading ledger is pseudonymous but traceable. According to an analysis by the financial data platform Unusual Whales, there have been clusters of activity around OpenAI-themed events since March 2023 that the service flagged as suspicious.
Unusual Whales flagged 77 positions in 60 wallet addresses as suspected insider trades, looking at the age of the account, trading history, and significance of investment, among other factors. Suspicious trades hinged on the release dates of products like Sora, GPT-5, and the ChatGPT Browser, as well as CEO Sam Altman’s employment status. In November 2023, two days after Altman was dramatically ousted from the company, a new wallet placed a significant bet that he would return, netting over $16,000 in profits. The account never placed another bet.
The behavior fits into patterns typical of insider trades. “The tell is the clustering. In the 40 hours before OpenAI launched its browser, 13 brand-new wallets with zero trading history appeared on the site for the first time to collectively bet $309,486 on the right outcome,” says Unusual Whales CEO Matt Saincome. “When you see that many fresh wallets making the same bet at the same time, it raises a real question about whether the secret is getting out.”
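The signals Unusual Whales describes — account age, trading history, bet size, and clusters of fresh wallets converging on one outcome — can be sketched as a toy scoring heuristic. Everything below (field names, thresholds, weights, the cluster cutoff) is invented for illustration and is not Unusual Whales’ actual methodology.

```python
# Toy heuristic for flagging suspicious prediction-market wallets, using the
# factors named in the article: wallet age, prior trading history, and the
# size of the position. A market is flagged when many fresh, high-scoring
# wallets pile onto the same side. All thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Wallet:
    age_hours: float    # time since the wallet's first on-chain activity
    prior_trades: int   # trades placed before this position
    stake_usd: float    # size of the bet on this market

def suspicion_score(w: Wallet) -> int:
    score = 0
    if w.age_hours < 48:      # brand-new wallet
        score += 2
    if w.prior_trades == 0:   # zero trading history
        score += 2
    if w.stake_usd > 10_000:  # outsized position
        score += 1
    return score

def flag_cluster(wallets: list[Wallet], min_new: int = 5) -> bool:
    """Flag a market if several fresh, high-scoring wallets bet the same way."""
    fresh = [w for w in wallets if suspicion_score(w) >= 4]
    return len(fresh) >= min_new
```

In the browser-launch case the article cites — 13 zero-history wallets appearing within 40 hours to bet $309,486 on the same outcome — it is the clustering, not any single wallet, that a check like this would trip on.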
Prediction markets have exploded in popularity in recent years. These platforms allow customers to buy “event contracts” on the outcomes of future events ranging from the winner of the Super Bowl to the daily price of Bitcoin to whether the United States will go to war with Iran. There is a wide array of markets tied to events in the technology sector; you can trade on what Nvidia’s quarterly earnings will be, or when Tesla will launch a new car, or which AI companies will IPO in 2026.
As the platforms have grown, so have concerns that they allow traders to profit from insider knowledge. “This prediction market world makes the Wild West look tame in comparison,” says Jeff Edelstein, a senior analyst at the betting news site InGame. “If there’s a market that exists where the answer is known, somebody’s going to trade on it.”
Earlier this week, Kalshi announced that it had reported several suspicious insider trading cases to the Commodity Futures Trading Commission, the government agency overseeing these markets. In one instance, an employee of the popular YouTuber Mr. Beast was suspended for two years and fined $20,000 for making trades related to the streamer’s activities; in another, the far-right political candidate Kyle Langford was banned from the platform for making a trade on his own campaign. The company also announced a number of initiatives to prevent insider trading and market manipulation.
While Kalshi has heavily promoted its crackdown on insider trading, Polymarket has stayed silent on the matter. The company did not return requests for comment.
In the past, major trades on technology-themed markets have sparked speculation that there are Big Tech employees profiting by using their insider knowledge to gain an edge. One notorious example is the so-called “Google whale,” a pseudonymous account on Polymarket that made over $1 million trading on Google-related events, including a market on who the most-searched person of the year would be in 2025. (It was the singer D4vd, who is best known for his connection to an ongoing murder investigation after a young fan’s remains were found in a vehicle registered to him.)
Wall Street Has AI Psychosis
Before last week the name Alap Shah didn’t ring a bell for many people. The 45-year-old financial analyst and tech entrepreneur had spent the past two decades working in relative obscurity. Then last weekend he coauthored a blog post with the research firm Citrini titled “The 2028 Global Intelligence Crisis.” It was a “thought exercise” about the impacts of artificial intelligence, and it predicted that in June of that year, AI would jack up unemployment past 10 percent and force the Dow down, down, down. Writing in a confident, Nostradamic tone—as if auditioning for starring roles in the next Michael Lewis book—the authors painted a picture of a flywheel in reverse: AI agents take jobs from workers, people spend less, and struggling corporations conduct layoffs on top of layoffs.
There wasn’t much in it that hadn’t been previously heard, or speculated about. Tech leaders like Anthropic CEO Dario Amodei have already estimated that half of entry-level white-collar jobs will soon be gone, and earlier this year, Anthropic’s release of new agentic tools spurred a Wall Street selloff. Nonetheless, the report hit with the force of the blizzard blowing through lower Manhattan. When the closing chimes sounded on the New York Stock Exchange, the Dow was down 800 points. The name Alap Shah was now ringing bells.
The achievement is less impressive than it seems. Wall Street, like the rest of us, is in a persistent state of anxiety about AI, and it doesn’t take much to trigger a mini-panic. Financial markets don’t necessarily map to reality, but the jitters reflect a wider disquiet. The AI future is in a William Gibson zone—it’s here, but unevenly distributed—and the news from those already living in the agent-packed, AI code-writing universe is both exciting and unsettling. Emphasis on unsettling.
No one—no one!—knows exactly how AI will impact the economy, but clearly it will be significant. Right now stocks are soaring, so it seems to make sense to keep the party going. But then along comes the latest doom manifesto, or a paper indicating that a traditional business sector might be threatened by AI, and suddenly money managers are reminded that the biggest issue of our time is totally unresolved. Case in point: earlier this month, a tiny company (valuation under $6 million) that had previously sold karaoke machines pivoted to AI-powered shipping logistics and put out a report saying that it had discovered some efficiencies in loading semi-trucks. That was enough to erase billions of dollars from the share prices of several major logistics companies, none of which had karaoke experience.
After it did its job on Wall Street, the Citrini report came under considerable fire. Critics climbed over each other to proclaim its flimsiness. For one thing, they pointed out, AI has had very little discernible impact on the economy so far. Others cited the long history of resilience after technological upheavals. A mocking response by the respected trading firm Citadel Securities read, “For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute.”
The most withering critiques disputed the report’s contention that much of the economy involves non-productive “rent-seeking” by middlemen and market makers, taking advantage of the laziness of the general population. When everyone has a few dozen AI agents working on their behalf, writes Shah, consumers will be able to effortlessly find the best goods for the best prices. Apps will be rendered unnecessary—just type what you want into the LLM and an army of agents will do everything for you. The “poster child” for this phenomenon, Shah says, is DoorDash. Instead of being limited to the restaurants on the app, consumers will send out AI agents to find their ideal meal options, contracting directly with restaurants and delivery people—no apps needed. Zero friction! The DoorDashes of the world are avocado toast!