The UK government’s Digital Inclusion Action Plan has helped more than a million people access online and digital services in its first year.
The initiative, launched in early 2025, aims to bridge the digital inclusion gap in the UK by providing internet access to individuals with limited exposure to digital technologies. This includes the elderly, unemployed individuals, families from low-income households, and those residing in rural areas.
Secretary of State Liz Kendall said: “We want everyone in the country to be able to take advantage of the opportunities of being online. Whether that is staying connected with family and friends, finding work, accessing government services or getting better prices for everyday goods.
“Our digital inclusion efforts are already changing people’s lives for the better, but we are determined to go even further, so we can build a future that works for all.”
Access to digital technologies in the UK is not equal, with many individuals unable to afford their own devices, access the internet or develop digital skills.
The availability of technology in schools varies by location, while a lack of access to artificial intelligence (AI) means there is also a growing gap in AI skills despite its increasing use in daily life.
As the world becomes increasingly digitised, day-to-day activities, from managing finances to applying for jobs, now require access to computers and the internet, yet many UK citizens don’t have the basic digital skills needed to navigate daily life or to function in the modern workplace.
The Digital Inclusion Action Plan has seen the government work alongside the technology sector and charities to provide devices and digital skills to those being left behind in the wake of rapid technology adoption.
Its one-year progress report found that the commitments the government has made as part of the initiative have either been delivered or are on track to be delivered.
This includes launching an £11.9m Digital Inclusion Innovation Fund to support local digital skills initiatives across the UK, which has helped more than 80 projects across England aimed at reducing the digital divide.
These projects have helped more than one million people access digital services through improved broadband and mobile connections, skills training, access to support, more affordable services and the supply of devices.
To ensure continued progress, the government is planning several next steps for digital skills and inclusion across the UK.
One of these is to update and take forward the Essential Digital Skills (EDS) Framework, formerly managed by Lloyds Banking Group, which sets out the digital skills required to successfully navigate daily life.
It will also continue to follow the guidance of the Digital Inclusion Action Committee to sustain efforts to close the UK’s digital divide.
Hilary Armstrong, chair of the Digital Inclusion Action Committee, said: “Digital inclusion changes lives. When people have the confidence and support to navigate the digital world, they feel more connected, more empowered and better able to manage everyday challenges.
“We’ve made important progress, but the job isn’t finished. As chair of the Digital Inclusion Action Committee, I will continue championing the voices of those most affected as we enter the next phase of action.”
Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.
The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.
The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.
“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.
The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.
Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.
Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.
The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.
David Bau, the head of the lab, says the agents seemed oddly prone to spinning out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.
The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”
Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”
One morning a few weeks ago, John Kiriakou got a call from his 16-year-old niece. “Uncle John, you’re exploding on TikTok,” he recalls her telling him.
Kiriakou, a 61-year-old ex-CIA officer who went to prison in 2013 for disclosing classified information related to the agency’s Middle East torture program, had no idea what she was talking about. He doesn’t have a TikTok account. He’s more of a Facebook lurker, if anything. But clips from a podcast Kiriakou filmed in January with Steven Bartlett, who hosts the Diary of a CEO show, which has more than 15 million subscribers on YouTube, were going viral without his intervention.
For nearly two decades, Kiriakou has been on a campaign to receive a presidential pardon. From 1990 to 2004, Kiriakou served as a CIA analyst and counterterrorism officer, leading a 2002 operation to capture Abu Zubaydah, who ran a training camp for al Qaeda fighters. During his detention, the CIA waterboarded Zubaydah. Kiriakou later discussed the agency’s torture tactics in a 2007 interview with ABC News, where he went on to serve as a terrorism consultant. Five years later, the Justice Department charged Kiriakou, who then pleaded guilty to disclosing the name of a covert operative who participated in CIA interrogations to journalists.
Though Kiriakou finished his prison sentence in 2015, he wants a presidential pardon to clear his name and get back decades of pension contributions. “I had 20 years of proud federal service. My pension was $700,000,” says Kiriakou. “Without that pension, I’m going to have to work until the day I die. It was wrong of them to take it from me, and I want it back. I can only get it back with a pardon.”
In recent years, he’s applied through official channels and tried navigating President Donald Trump’s informal and expensive clemency market. So far, his requests have gone unanswered. Now, he’s trying something different, appearing on some of the very same podcasts Trump did throughout the 2024 election. Clips of him chatting with Tucker Carlson and Joe Rogan, among others, won’t stop making the rounds—and the internet is loving it.
When Kiriakou sat down with Bartlett for the January podcast, they had a serious conversation discussing his career at the CIA, his whistleblowing, and, ultimately, his nearly two-year imprisonment. But it’s the stories Kiriakou tells throughout the episode—about gathering intelligence in countries like Pakistan or detailing the CIA’s MKUltra program—that have drawn millions of views in “brainrot”-style edits on platforms like TikTok and Instagram Reels.
“See you in two scrolls,” one commenter wrote on a clip of Kiriakou, joking about how frequently videos of him appeared on their For You page.
One user who goes by the handle @_bamboclat is credited by Know Your Meme with popularizing these edits of Kiriakou telling unimaginable stories about his time abroad. These clips have received around 50 million views on the account.
“I first found out about him through podcasts on TikTok. I think the reason why everyone is in love with him is because he’s a good storyteller,” says @_bamboclat, who declined to share his full name. “He’s been telling it for 20 years. Slowing down and speeding it up, the meme version of him, is pretty popular with Gen Z and the TikTok audience.”
The virality has turned Kiriakou into a cultural phenomenon. Following his newfound popularity, the Creative Artists Agency (CAA) signed him. Cameo—the platform that allows users to request personalized videos from their favorite celebrities—recruited Kiriakou last month. So far, he’s made more than 700 videos for fans for around $150 apiece. In one Cameo video, Kiriakou is asked to shout out a woman’s nail salon. The clip is being used as an advertisement for the business on TikTok.
Last month, Samsung jacked up the price of two of its flagship smartphones by $100. Now, its two new midrange models—the Galaxy A37 5G and Galaxy A57 5G—are getting $50 price bumps, despite minor hardware updates over last year’s Galaxy A36 and A56. Samsung has also trimmed the lineup—there’s no successor to the Galaxy A26 this year, at least not yet.
These price increases may be indicative of the economic climate, what with tariffs, higher oil prices due to the war in Iran, and the memory shortage that has driven up RAM and storage costs across the board. If a phone’s price doesn’t go up, it could still mean fewer meaningful hardware upgrades to keep costs down, very much like the recent Google Pixel 10a. (The outlier is the iPhone 17e, which managed to add features like MagSafe and a new processor, along with a few other upgrades, without a change to the price over the iPhone 16e.)
The Galaxy A57 5G (right) and the Galaxy A37 (left).
Photograph: Julian Chokkattu
“Price increases or ‘down‑speccing’ have become the norm,” writes Jitesh Ubrani, research manager at IDC, in an email to WIRED. “Unfortunately, consumers will need to adjust to this new reality. The biggest bottleneck for brands right now is memory, with suppliers facing tight availability and significantly higher costs than in past years.” Ubrani says that while geopolitical factors haven’t yet affected hardware pricing, they are adding uncertainty that could increase costs in the future.
Samsung did not comment on exactly what is driving the price bump. However, it says consumers eyeing its A-series phones prioritize upgrading out of necessity—maybe their current phone just broke or is really old—and they don’t care much for AI features. Value for money is the number one purchase driver, above performance and battery life. So it’s a little odd to see the company raise prices, though Samsung hopes the improvements are compelling.
The Galaxy A57 5G costs $550 with 8 GB of RAM and 128 GB of storage, and $610 if you bump storage to 256 GB. Meanwhile, the Galaxy A37 5G starts at $450 for 6 GB of RAM and 128 GB of storage, or $540 for 8 GB of RAM and 256 GB of storage. They both officially go on sale on April 9.
Small Updates
Processor upgrades are the main highlight for these phones. The Galaxy A37 is powered by Samsung’s Exynos 1480, which should offer 14 percent better CPU performance, 24 percent better graphics, and, perhaps shockingly, 167 percent better neural processing performance—helpful for AI tasks. That’s compared to the Qualcomm Snapdragon 6 Gen 3 chip in last year’s Galaxy A36.
The Galaxy A57 sports the Exynos 1680, which isn’t a huge leap over the Exynos 1580 in the Galaxy A56, but still offers a nice lift: 10 percent better CPU performance, 7 percent faster graphics, and 42 percent improved neural processing. Both of these phones still have the same 5,000-mAh battery capacity and charging speeds. (There’s no wireless charging, despite competing phones like the iPhone 17e or Google Pixel 10a offering the feature.)