Tech
Swedish welfare authorities suspend ‘discriminatory’ AI model | Computer Weekly
A “discriminatory” artificial intelligence (AI) model used by Sweden’s social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country’s Data Protection Authority (IMY).
IMY’s involvement, which began in June 2025, was prompted by a joint investigation by Lighthouse Reports and Svenska Dagbladet (SvB), which revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.
This included women, individuals with “foreign” backgrounds, low-income earners and people without university degrees. The media outlets also found the same system was largely ineffective at identifying men and rich people who had actually committed some kind of social security fraud.
These findings prompted Amnesty International to publicly call for the system’s immediate discontinuation in November 2024, which it described at the time as “dehumanising” and “akin to a witch hunt”.
Introduced by Försäkringskassan in 2013, the ML-based system assigns risk scores to social security applicants; a sufficiently high score automatically triggers an investigation.
According to a blog published by IMY on 18 November 2025, Försäkringskassan was specifically using the system to conduct targeted checks on recipients of temporary child support benefits – which are designed to compensate parents for taking time off work when they have to care for their sick children – but took it out of use over the course of the authority’s investigation.
“While the inspection was ongoing, the Swedish Social Insurance Agency took the AI system out of use,” said IMY lawyer Måns Lysén. “Since the system is no longer in use and any risks with the system have ceased, we have assessed that we can close the case. Personal data is increasingly being processed with AI, so it is welcome that this use is being recognised and discussed. Both authorities and others need to ensure that AI use complies with the [General Data Protection Regulation] GDPR and now also the AI regulation, which is gradually coming into force.”
IMY added that Försäkringskassan “does not currently plan to resume the current risk profile”.
Under the European Union’s AI Act, which came into force on 1 August 2024, the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to assess human rights risks and guarantee mitigation measures are in place before deployment. Systems considered tools for social scoring are prohibited outright.
Computer Weekly contacted Försäkringskassan about the suspension of the system, and why it elected to discontinue it before IMY’s inspection had concluded.
“We discontinued the use of the risk assessment profile in order to assess whether it complies with the new European AI regulation,” said a spokesperson. “We have at the moment no plans to put it back into use since we now receive absence data from employers among other data, which is expected to provide a relatively good accuracy.”
Försäkringskassan previously told Computer Weekly in November 2024 that “the system operates in full compliance with Swedish law”, and that applicants entitled to benefits “will receive them regardless of whether their application was flagged”.
In response to Lighthouse and SvB’s claims that the agency had not been fully transparent about the inner workings of the system, Försäkringskassan added that “revealing the specifics of how the system operates could enable individuals to bypass detection”.
Similar systems
AI-based systems used by other countries to distribute benefits or investigate fraud have faced similar problems.
In November 2024, for example, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.
In the UK, an internal assessment by the Department for Work and Pensions (DWP) – released under Freedom of Information (FoI) rules to the Public Law Project – found that an ML system used to vet thousands of Universal Credit benefit payments was showing “statistically significant” disparities when selecting who to investigate for possible fraud.
Carried out in February 2024, the assessment showed there is a “statistically significant referral … and outcome disparity for all the protected characteristics analysed”, which included people’s age, disability, marital status and nationality.
Civil rights groups later criticised the DWP in July 2025 for a “worrying lack of transparency” over how it is embedding AI throughout the UK’s social security system, where the technology is used to determine people’s eligibility for schemes such as Universal Credit or Personal Independence Payment.
In separate reports published around the same time, both Amnesty International and Big Brother Watch highlighted the clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.
Tech
‘Uncanny Valley’: Pentagon vs. ‘Woke’ Anthropic, Agentic vs. Mimetic, and Trump vs. State of the Union
Brian Barrett: Guys, before we go to break, there’s something very near and dear to my heart that WIRED wrote about this week. It’s something I love even more than biathlon. It is undersea internet cables.
Leah Feiger: I love when you talk about this. I think that the first time you brought this up to me was approximately one week into your tenure as executive editor, and you’re like, “Leah, do you know what I love?” and it’s undersea internet cables.
Brian Barrett: Yeah. I was like, “Number one, undersea internet cables. Number two, my children. Number three …” that was sort of the gist of it. That’s how I always introduce myself. I want to take everybody back to December 14th, 1988. The top movie in theaters is Twins starring Arnold Schwarzenegger and Danny DeVito.
Zoë Schiffer: Legitimately never heard of it.
Leah Feiger: Wait, Zoë. What?
Brian Barrett: What? Anyway, Arnold is agentic and Danny DeVito’s mimetic. The top song—
Zoë Schiffer: Now I get it.
Brian Barrett: —the top song is “Look Away” by Chicago. Now that, I also am not—I don’t remember that one at all. And the first undersea fiber optic cable connecting the United States, UK and France went live. This was the day that the internet went global, which is crazy—
Zoë Schiffer: That is crazy.
Brian Barrett: —that it was relatively recent. The reason we’re writing about it now is that that original cable, which is called TAT-8, is being pulled up. It’s out of commission. It’s old, it’s decrepit, so I identify, and it’s being pulled up and put out to pasture because the technology’s gotten better. But in this great feature that we published, it is a look at how this changed the world basically, and how we take for granted—but the reason I am so into undersea cable stories is because it’s so easy to forget that the internet is a physical thing and that the maintenance of those things is really what makes all this connectivity happen. So yeah, TAT-8. Any other fond memories of TAT-8? Or, no. What did you guys think reading this feature?
Zoë Schiffer: Well, famously we were not alive in 1988.
Leah Feiger: Yeah. Sorry, Brian. You’re older than us. Just a reminder.
Brian Barrett: Hurts.
Zoë Schiffer: But the part of this story that I wanted to talk about, which felt like a real intersection of both of your interests was the myth of the shark attacks.
Brian Barrett: Oh, yeah.
Leah Feiger: OK. So to back up a little bit, these cables, at the very beginning, when they were put in, Brian would be able to talk about this way more because he’s kind of a freak about cables if you haven’t realized already. These cables would sometimes have unexplained damage, and looking back on it years later, engineers figured out that this kind of happens, that if you are putting cables underseas, there will be wind, there will be changes, things will get moved around. Of course, there will be damages, but that is not how they felt at the time. These engineers assumed that it was sharks, that sharks were biting their cables, that they were destroying the internet. The cables were reinforced with all these protective layers, all of these things, because they were like, “Oh, my God, the sharks are quite literally ending all of this for us.” But this article goes into great detail of how they figured out it wasn’t the sharks, and by thinking that it was the sharks, it actually helped make all of this technology that much better and stronger, but the sharks were innocent, you guys. The sharks were innocent.
Tech
This AI Agent Is Designed to Not Go Rogue
AI agents like OpenClaw have recently exploded in popularity precisely because they can take the reins of your digital life. Whether you want a personalized morning news digest, a proxy that can fight with your cable company’s customer service, or a to-do list auditor that will do some tasks for you and prod you to resolve the rest, agentic assistants are built to access your digital accounts and carry out your commands. This is helpful—but has also caused a lot of chaos. The bots are out there mass-deleting emails they’ve been instructed to preserve, writing hit pieces over perceived snubs, and launching phishing attacks against their owners.
Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain, designed to add a critical layer of control. Instead of the agent directly interacting with the user’s systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy—you could even think of it as a constitution—that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then run them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.
“Services like OpenClaw are at peak hype right now, but my hope is that there’s an opportunity to say, ‘Well, this is probably not how we want to do it,’” Provos says. “Instead, let’s develop something that still gives you very high utility, but is not going to go into these completely uncharted, sometimes destructive, paths.”
IronCurtain’s ability to take intuitive, straightforward statements and turn them into enforceable, deterministic—or predictable—red lines is vital, Provos says, because LLMs are famously “stochastic” and probabilistic. In other words, they don’t necessarily always generate the same content or give the same information in response to the same prompt. This creates challenges for AI guardrails, because AI systems can evolve over time such that they revise how they interpret a control or constraint mechanism, which can result in rogue activity.
An IronCurtain policy, Provos says, could be as simple as: “The agent may read all my email. It may send email to people in my contacts without asking. For anyone else, ask me first. Never delete anything permanently.”
IronCurtain takes these instructions, turns them into an enforceable policy, and then mediates between the assistant agent in the virtual machine and what’s known as the model context protocol server that gives LLMs access to data and other digital services to carry out tasks. Being able to constrain an agent this way adds an important component of access control that web platforms like email providers don’t currently offer, because they weren’t built for the scenario in which a human owner and AI agent bots are all using one account.
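The article doesn’t describe IronCurtain’s internals, but the mediation step it outlines—a deterministic rule check sitting between the agent and its tools—can be sketched roughly. The sketch below is a hypothetical illustration, not IronCurtain’s actual code: the tool names, request shape, and contact list are all assumptions, and the rules are hand-compiled from the sample policy Provos gives (“read all my email, send only to contacts, ask otherwise, never delete permanently”).

```python
# Hypothetical sketch of deterministic policy mediation; not IronCurtain's
# actual implementation. Tool names and request fields are invented.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "ask_user", or "deny"
    reason: str

# Assumed contact list; in practice this would come from the owner's account.
CONTACTS = {"alice@example.com", "bob@example.com"}

def evaluate(request: dict) -> Decision:
    """Deterministically check an agent's tool request against the policy:
    read mail freely, send only to known contacts, never delete permanently."""
    tool = request.get("tool")
    if tool == "email.read":
        return Decision("allow", "policy permits reading all mail")
    if tool == "email.send":
        if request.get("to") in CONTACTS:
            return Decision("allow", "recipient is a known contact")
        return Decision("ask_user", "recipient outside contacts; owner must approve")
    if tool == "email.delete" and request.get("permanent"):
        return Decision("deny", "permanent deletion is never allowed")
    # Default: anything the policy doesn't cover is deferred to the owner.
    return Decision("ask_user", "no matching rule; defer to owner")
```

Because the compiled rules are ordinary code rather than another LLM prompt, the same request always yields the same decision—the “deterministic red lines” property the next paragraph describes.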
Provos notes that IronCurtain is designed to refine and improve each user’s “constitution” over time as the system encounters edge cases and asks for human input about how to proceed. The system, which is model-independent and can be used with any LLM, is also designed to maintain an audit log of all policy decisions over time.
IronCurtain is a research prototype, not a consumer product, and Provos hopes that people will contribute to the project to explore and help it evolve. Dino Dai Zovi, a well-known cybersecurity researcher who has been experimenting with early versions of IronCurtain, says that the conceptual approach the project takes aligns with his own intuition about how agentic AI needs to be constrained.
Tech
OpenAI Announces Major Expansion of London Office
OpenAI has announced plans to turn its London office into its largest research hub outside of the United States.
The company—which established a UK office in 2023—says it will expand its London-based research team, scooping up talent emerging from leading British universities. It has not indicated how many researchers it will hire.
“The UK brings together world-class talent and leading scientific institutions and universities, making it an ideal place to deliver the important research which will ensure our AI is safe, useful, and benefits everyone,” said Mark Chen, chief research officer at OpenAI, in a statement.
The plans bring OpenAI into direct competition for top research talent with Google DeepMind, the AI lab run by British researcher Demis Hassabis, which is headquartered in London. DeepMind has long-running partnerships with Oxford University and the University of Cambridge, where it sponsors professorships, funds research, and works alongside researchers.
At the latest careers fair at Oxford University, the floor was packed with undergraduates looking for technical roles and recruiters hiring for AI-related positions. “The demand and supply is increasing on both sides, even within a year,” says Jonathan Black, director of the careers service at Oxford University. “To have something like this turn up is a really positive sign.”
OpenAI’s expansion in London could have a sort of flywheel effect, whereby the researchers it hires early in their careers go on to start new labs in the UK, says Tom Wilson, partner at venture capital firm Seedcamp. “We’ve seen many examples over the years,” he says. “That’s where these kinds of announcements can have even more impact than the initial hires … the second-order effects can be great.”
OpenAI’s team in London will continue to contribute to products like Codex and GPT-5.2, the company says, but will now “own” certain aspects of model development relating to safety, reliability, and performance evaluation.
In a statement, the UK’s science and technology secretary, Liz Kendall, described the announcement as “a huge vote of confidence in the UK’s world-leading position at the cutting edge of AI research.”
The announcement coincides with a push in the UK to scale the nation’s data center and power infrastructure to meet the voracious demand for compute among AI companies, including OpenAI.
