AI chatbots can be exploited to extract more personal information, study indicates
AI chatbots that provide human-like interactions are used by millions of people every day, but new research has revealed that they can easily be manipulated into encouraging users to reveal even more personal information.
Intentionally malicious AI chatbots can influence users to reveal up to 12.5 times more of their personal information, a new study by King’s College London has found.
For the first time, the research shows how conversational AIs (CAIs) programmed to deliberately extract data can successfully encourage users to reveal private information using known prompting techniques and psychological tools. The study was presented at the 34th USENIX Security Symposium in Seattle.
The study tested three types of malicious AIs that used different strategies (direct, user-benefit and reciprocal) to encourage disclosure of personal information from users. These were built using “off the shelf” large language models, including Mistral and two different versions of Llama.
The researchers then asked 502 people to test the models, only telling them the goal of the study afterward.
They found that the CAIs using reciprocal strategies to extract information were the most effective, with users showing minimal awareness of the privacy risks. This strategy mirrors users’ input, offering empathetic responses and emotional support, sharing relatable stories drawn from others’ experiences, acknowledging and validating users’ feelings, and remaining non-judgmental while assuring confidentiality.
These findings show the serious risk of bad actors, like scammers, gathering large amounts of personal information from people—without them knowing how or where it might be used.
LLM-based CAIs are being used across a variety of sectors, from customer service to health care, to provide human-like interactions through text or voice.
However, previous research shows these types of models don’t keep information secure, a limitation rooted in their architecture and training methods. LLMs typically require extensive training data sets, which often leads to personally identifiable information being memorized by the models.
The researchers are keen to emphasize that manipulating these models is not difficult. Many companies allow access to the base models underpinning their CAIs, and people can easily adjust them without much programming knowledge or experience.
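To illustrate how little adjustment that typically involves, the sketch below shows a plain-text system prompt being supplied to an off-the-shelf chat model. It is a minimal, hypothetical example rather than the researchers’ setup: it assumes an open-weight model (such as a Llama or Mistral variant) served locally behind an OpenAI-compatible endpoint, as tools like vLLM and Ollama provide, and the endpoint URL, model name, and prompt wording are placeholders; a deliberately benign persona stands in for any data-gathering instructions.

```python
# Minimal sketch (hypothetical): steering an off-the-shelf chat model with a
# plain-text system prompt. Assumes an open-weight model served locally behind
# an OpenAI-compatible API (e.g. via vLLM or Ollama); the URL, model name and
# prompt below are placeholders, not the study's actual configuration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# The chatbot's entire conversational "strategy" is set by this instruction;
# no fine-tuning or model modification is required.
SYSTEM_PROMPT = (
    "You are a warm, empathetic assistant. Mirror the user's tone, share "
    "relatable anecdotes, and keep the conversation personal and engaging."
)

def chat(user_message: str) -> str:
    """Send a single user turn to the model under the custom persona."""
    response = client.chat.completions.create(
        model="llama-3-8b-instruct",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("I had a rough day at work today."))
```

The same low barrier that makes this convenient for legitimate customer-service or health-care bots is what the study suggests malicious actors can exploit.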
Dr. Xiao Zhan, a postdoctoral researcher in the Department of Informatics at King’s College London, said, “AI chatbots are widespread in many different sectors as they can provide natural and engaging interactions.
“We already know these models aren’t good at protecting information. Our study shows that manipulated AI chatbots could pose an even bigger risk to people’s privacy—and, unfortunately, it’s surprisingly easy to take advantage of.”
Dr. William Seymour, a lecturer in cybersecurity at King’s College London, said, “These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction.
“Our study shows the huge gap between users’ awareness of the privacy risks and how they then share information. More needs to be done to help people spot the signs that there might be more to an online conversation than first seems. Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection.”
Citation: AI chatbots can be exploited to extract more personal information, study indicates (2025, August 14), retrieved 14 August 2025 from https://techxplore.com/news/2025-08-ai-chatbots-exploited-personal.html
Asus Made a Split Keyboard for Gamers—and Spared No Expense
The wheel on the left side has options to adjust actuation distance, rapid-trigger sensitivity, and RGB brightness. You can also adjust volume and media playback, and turn it into a scroll wheel. The LED matrix below it is designed to display adjustments to actuation distance but feels a bit awkward: Each 0.1 mm of adjustment fills its own bar, and it only uses the bottom nine bars, so the screen will roll over four times when adjusting (the top three bars, with dots next to them, illuminate to show how many times the screen has rolled over during the adjustment). The saving grace of this is that, when adjusting the actuation distance, you can press down any switch to see a visualization of how far you’re pressing it, then tweak the actuation distance to match.
Alongside all of this, the Falcata (and, by extension, the Falchion) now has an aftermarket switch option: TTC Gold magnetic switches. While that still amounts to just two switch options, it’s an improvement over the single switch offered by most Hall effect keyboards.
Split Apart
Photograph: Henri Robbins
The internal assembly of this keyboard is straightforward yet interesting. Instead of a standard tray mount, where the PCB and plate bolt directly into the bottom half of the shell, the Falcata is closer to a bottom-mount: the PCB screws into the plate from underneath, and the plate is screwed onto the bottom half of the case along its edges. While the difference between the two mounting methods is minimal, it does improve the typing experience by eliminating the “dead zones” caused by a post in the middle of the keyboard and by slightly isolating keystrokes from the case, which reduces vibration while typing.
The top and bottom halves can easily be split apart by removing the screws on the plate (no breakable plastic clips here!), but on the left half, four cables connect the top and bottom of the keyboard, all of which need to be disconnected before fully separating the two sections. Once this is done, the internal silicone sound-dampening can easily be removed. The foam dampening, however, was adhered strongly enough that removing it left chunks of foam stuck to the PCB, making it impossible to re-adhere without new adhesive. This wasn’t a huge issue, since the foam could simply be placed back into the keyboard, but it is still frustrating to see when most manufacturers have figured this out.
These Sub-$300 Hearing Aids From Lizn Have a Painful Fit
Don’t call them hearing aids. They’re hearpieces, intended to blur the line between hearing aids and earbuds (or “earpieces,” in the parlance of Lizn, a Danish operation).
The company was founded in 2015, and it haltingly developed its launch product through the 2010s, only to scrap it in 2020 when, according to Lizn’s history page, the hearing aid/earbud combo idea didn’t work out. But the company is seemingly nothing if not persistent, and four years later, a new Lizn was born. The revamped Hearpieces finally made it to US shores in the last couple of weeks.
Half Domes
Photograph: Chris Null
Lizn Hearpieces are the company’s only product, and their inspiration from the pro audio world is instantly palpable. Out of the box, these look nothing like any other hearing aids on the market, with a bulbous design that, while self-contained within the ear, is far from unobtrusive—particularly if you opt for the graphite or ruby red color scheme. (I received the relatively innocuous sand-hued devices.)
At 4.58 grams per bud, they’re as heavy as they look; within the in-the-ear space, only a handful of models, including the Kingwell Melodia and Apple AirPods Pro 3, weigh more. The units come with four sets of ear tips in different sizes; the default mediums worked well for me.
The bigger issue isn’t how the tip of the device fits into your ear, though; it’s how the rest of the unit does. Lizn Hearpieces need to be delicately twisted into the ear canal so that one edge of the unit fits snugly behind the tragus, filling the concha. My ears may be tighter than most, but I found this no easy feat, as the device is so large that I really had to work at it to wedge it into place. As you might have guessed, over time, this became rather painful, especially because the unit has no hardware controls. All functions are performed by various combinations of taps on the outside of either of the Hearpieces, and the more I smacked the side of my head, the more uncomfortable things got.
CEOs are taking the lead on AI initiatives
The AI Radar 2026 study from Boston Consulting Group (BCG) reports that artificial intelligence (AI) investment is set to double in 2026 compared with 2025. The study, based on a survey of 2,400 business executives, 640 of them CEOs, found that almost every chief executive polled (94%) is committed to continuing investments even if returns take time to materialise.
In fact, almost all (90%) of the CEOs polled believe AI agents will deliver a measurable return on investment (ROI) by 2026.
The study found that over two-thirds (72%) of CEOs now act as the primary decision-makers for AI in their organisations, taking over responsibility from CIOs, who previously led AI projects.
Christoph Schweizer, CEO of BCG, said: “Corporate investment in AI is here to stay. 94% of our survey respondents say they will continue to invest in 2026, even if it takes time to see the return. They intend to spend 1.7% of revenue on AI comprehensively. That is more than twice what it was a year ago.”
BCG’s research suggests that companies leading the way in AI deployments are investing 60% of their AI budgets in agentic AI (AI agents). “We tell CEOs that they need to make AI a key priority,” said Schweizer. “The way they own it, the way they talk about it, the way they bring their organisation along. They need to spend time on deepening their own AI literacy.”
BCG recommends that CEOs understand the tools and the technology, and keep in touch with technology suppliers and partners. “Ultimately, you need to know what you talk about so that you can bring your organisation along and steer for maximum return,” added Schweizer.
With regard to the adoption of agentic AI, BCG found that more than 30% of the CEOs investing in AI during 2026 said they would be building agents to deploy in the work environment. Vladimir Lukic, global leader of BCG’s Technology and Digital Advantage, said: “AI agents will truly be something that will unlock organisations and deliver a return on investment within 2026.”
Sylvain Duranton, head of BCG X, said the research highlights differences in CEOs’ confidence in AI across regions. BCG reported that UK businesses are less likely than their global peers to make large-scale investments in AI in 2026.
The study found that only 24% of UK companies plan to invest more than $50m in AI, compared with much higher shares in countries leading the AI race, such as Greater China (68%), Japan (53%), the European Union (38%) and the Middle East (41%). BCG also reported that British CEOs are the most sceptical of AI’s potential return on investment and less involved in decision-making on AI.
Discussing the regional differences, Duranton said: “CEOs in the East, in India, in China, in Japan, the Middle East and Africa tend to be highly confident that AI is going to be a positive return on investment move. In the global West – Europe, the US and the UK – there’s a bit more caution.”
In his experience, many Asian companies have huge confidence and boldness in moving forward with AI. However, many European and US firms operate in a different way. “There’s some more skepticism in their workforce,” said Duranton. “There potentially is some more regulation that they deal with.”
Firms leading the way with AI deployments, which BCG categorise as “trailblazers”, tend to focus heavily on upskilling the workforce. Jessica Apotheker, chief marketing officer and managing director at BCG, said: “Trailblazers are putting 60% of their AI budget behind upskilling and retraining their workforce. So, they’re really wanting to go deep in the organisation, changing the way people work, putting people behind this new technology.”
BCG reported that in these organisations, 70% of the workforce has been upskilled or reskilled on AI.