AI Chatbots can be exploited to extract more personal information, study indicates
AI chatbots that provide human-like interactions are used by millions of people every day, but new research has revealed that they can be easily manipulated to encourage users to reveal even more personal information.
Intentionally malicious AI chatbots can influence users to reveal up to 12.5 times more of their personal information, a new study by King’s College London has found.
The research shows for the first time how conversational AIs (CAIs) programmed to deliberately extract data can use known prompting techniques and psychological tools to encourage users to reveal private information. The study was presented at the 34th USENIX Security Symposium in Seattle.
The study tested three types of malicious AIs that used different strategies (direct, user-benefit and reciprocal) to encourage disclosure of personal information from users. These were built using “off the shelf” large language models, including Mistral and two different versions of Llama.
The researchers then asked 502 people to test the models, only telling them the goal of the study afterward.
They found that the CAIs using the reciprocal strategy were the most effective at extracting information, with users showing minimal awareness of the privacy risks. This strategy responds to users’ input by offering empathetic responses and emotional support, sharing relatable stories from others’ experiences, acknowledging and validating user feelings, and remaining non-judgmental while assuring confidentiality.
These findings show the serious risk of bad actors, like scammers, gathering large amounts of personal information from people—without them knowing how or where it might be used.
LLM-based CAIs are being used across a variety of sectors, from customer service to health care, to provide human-like interactions through text or voice.
However, previous research shows these types of models don’t keep information secure, a limitation rooted in their architecture and training methods. LLMs typically require extensive training data sets, which often leads to personally identifiable information being memorized by the models.
The researchers are keen to emphasize that manipulating these models is not a difficult process. Many companies allow access to the base models underpinning their CAIs, and people can easily adjust them without much programming knowledge or experience.
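As a rough illustration of how low that bar is, here is a minimal sketch of attaching a custom system prompt to an off-the-shelf model through a widely used OpenAI-compatible chat API; the endpoint, model name, and prompt are placeholders for illustration, not the configuration used in the study.

    from openai import OpenAI

    # Sketch only: an OpenAI-compatible client pointed at a locally hosted
    # open-weight model (for example, a Llama or Mistral server). The URL,
    # model name, and prompt are illustrative placeholders, not the setup
    # used in the King's College London study.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    SYSTEM_PROMPT = (
        "You are a warm, chatty assistant. Build rapport and keep the "
        "conversation going with friendly follow-up questions."
    )

    response = client.chat.completions.create(
        model="mistral-7b-instruct",  # any off-the-shelf instruction-tuned model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Hi, I could use some advice about a career change."},
        ],
    )
    print(response.choices[0].message.content)

Changing a system prompt like this is roughly the level of adjustment involved, which is why the researchers stress that it takes little programming knowledge or experience.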
Dr. Xiao Zhan, a postdoctoral researcher in the Department of Informatics at King’s College London, said, “AI chatbots are widespread in many different sectors as they can provide natural and engaging interactions.
“We already know these models aren’t good at protecting information. Our study shows that manipulated AI chatbots could pose an even bigger risk to people’s privacy—and, unfortunately, it’s surprisingly easy to take advantage of.”
Dr. William Seymour, a lecturer in cybersecurity at King’s College London, said, “These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction.
“Our study shows the huge gap between users’ awareness of the privacy risks and how they then share information. More needs to be done to help people spot the signs that there might be more to an online conversation than first seems. Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection.”
Citation: AI Chatbots can be exploited to extract more personal information, study indicates (2025, August 14), retrieved 14 August 2025 from https://techxplore.com/news/2025-08-ai-chatbots-exploited-personal.html
Asus Made a Split Keyboard for Gamers—and Spared No Expense
The wheel on the left side has options to adjust actuation distance, rapid-trigger sensitivity, and RGB brightness. You can also adjust volume and media playback, and turn it into a scroll wheel. The LED matrix below it is designed to display adjustments to actuation distance but feels a bit awkward: Each 0.1 mm of adjustment fills its own bar, and it only uses the bottom nine bars, so the screen will roll over four times when adjusting (the top three bars, with dots next to them, illuminate to show how many times the screen has rolled over during the adjustment). The saving grace of this is that, when adjusting the actuation distance, you can press down any switch to see a visualization of how far you’re pressing it, then tweak the actuation distance to match.
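To make the readout easier to picture, here is a rough sketch of the mapping described above; the 0.1 mm step and nine-bar column come from the review, while the overall adjustment range (and therefore the exact number of wraps) is an assumption for illustration, not an Asus specification.

    BARS = 9        # only the bottom nine bars show the distance readout
    STEP_MM = 0.1   # each 0.1 mm of adjustment fills one more bar

    def led_readout(actuation_mm: float) -> tuple[int, int]:
        """Return (rollovers, lit_bars) for a given actuation distance.

        After the ninth bar the column wraps around, and the dotted bars
        at the top light up to count how many times it has wrapped.
        """
        steps = max(1, round(actuation_mm / STEP_MM))
        rollovers = (steps - 1) // BARS    # completed passes through the nine bars
        lit_bars = (steps - 1) % BARS + 1  # bars lit on the current pass
        return rollovers, lit_bars

    print(led_readout(2.3))  # (2, 5): two wraps, plus five bars on the current pass

In other words, each dotted bar stands for nine 0.1 mm steps already dialed in, which goes some way toward explaining why the readout feels awkward in practice.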
Alongside all of this, the Falcata (and, by extension, the Falchion) now has an aftermarket switch option: TTC Gold magnetic switches. While that is still only two switch options in total, it’s an improvement over the single option offered by most Hall effect keyboards.
Split Apart
The internal assembly of this keyboard is straightforward yet interesting. Instead of a standard tray mount, where the PCB and plate bolt directly into the bottom half of the shell, the Falcata is closer to a bottom mount: the PCB screws into the plate from underneath, and the plate is screwed onto the bottom half of the case along its edges. While the difference between the two mounting methods is minimal, it does improve the typing experience by eliminating the “dead zones” caused by a post in the middle of the keyboard and by slightly isolating the keys from the case, which transmits fewer vibrations when typing.
The top and bottom halves can easily be split apart by removing the screws on the plate (no breakable plastic clips here!), but on the left half, four cables connect the top and bottom halves of the keyboard, all of which need to be disconnected before fully separating the two sections. Once this is done, the internal silicone sound dampening can easily be removed. The foam dampening, however, was adhered strongly enough that removing it left chunks of foam stuck to the PCB, making it impossible to readhere without new adhesive. This wasn’t a huge issue, since the foam could simply be placed back into the keyboard, but it’s still frustrating to see when most manufacturers have figured this out.
These Sub-$300 Hearing Aids From Lizn Have a Painful Fit
Don’t call them hearing aids. They’re Hearpieces, intended as a blurring of the lines between hearing aids and earbuds—or “earpieces” in the parlance of Lizn, a Danish operation.
The company was founded in 2015, and it haltingly developed its launch product through the 2010s, only to scrap it in 2020 when, according to Lizn’s history page, the hearing aid/earbud combo idea didn’t work out. But the company is seemingly nothing if not persistent, and four years later, a new Lizn was born. The revamped Hearpieces finally made it to US shores in the last couple of weeks.
Half Domes
Lizn Hearpieces are the company’s only product, and their inspiration from the pro audio world is instantly palpable. Out of the box, these look nothing like any other hearing aids on the market, with a bulbous design that, while self-contained within the ear, is far from unobtrusive—particularly if you opt for the graphite or ruby red color scheme. (I received the relatively innocuous sand-hued devices.)
At 4.58 grams per bud, they’re as heavy as they look; within the in-the-ear space, only a few models weigh more, among them the Kingwell Melodia and Apple AirPods Pro 3. The units come with four sets of ear tips in different sizes; the default mediums worked well for me.
The bigger issue isn’t how the tip of the device fits into your ear, though; it’s how the rest of the unit does. Lizn Hearpieces need to be delicately twisted into the ear canal so that one edge of the unit fits snugly behind the tragus, filling the concha. My ears may be tighter than most, but I found this no easy feat, as the device is so large that I really had to work at it to wedge it into place. As you might have guessed, over time, this became rather painful, especially because the unit has no hardware controls. All functions are performed by various combinations of taps on the outside of either Hearpiece, and the more I smacked the side of my head, the more uncomfortable things got.
Two Thinking Machines Lab Cofounders Are Leaving to Rejoin OpenAI
Thinking Machines cofounders Barret Zoph and Luke Metz are leaving the fledgling AI lab and rejoining OpenAI, the ChatGPT-maker announced on Thursday. OpenAI’s CEO of applications, Fidji Simo, shared the news in a memo to staff Thursday afternoon.
The news was first reported on X by technology reporter Kylie Robison, who wrote that Zoph was fired for “unethical conduct.”
A source close to Thinking Machines said that Zoph had shared confidential company information with competitors. WIRED was unable to verify this with Zoph, who did not immediately respond to a request for comment.
Zoph told Thinking Machines CEO Mira Murati on Monday that he was considering leaving, then was fired today, according to the memo from Simo. She goes on to write that OpenAI does not share Murati’s concerns about Zoph.
The personnel shake-up is a major win for OpenAI, which recently lost its VP of research, Jerry Tworek.
Another Thinking Machines Lab staffer, Sam Schoenholz, is also rejoining OpenAI, the source said.
Zoph and Metz left OpenAI in late 2024 to start Thinking Machines with Murati, who had been the ChatGPT-maker’s chief technology officer.
This is a developing story. Please check back for updates.