Swedish welfare authorities suspend ‘discriminatory’ AI model | Computer Weekly

A “discriminatory” artificial intelligence (AI) model used by Sweden’s social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country’s Data Protection Authority (IMY).

IMY's involvement, which began in June 2025, was prompted by a joint investigation by Lighthouse Reports and Svenska Dagbladet (SvB), which revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, Sweden's Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.

This included women, individuals with "foreign" backgrounds, low-income earners and people without university degrees. The media outlets also found the same system was largely ineffective at identifying men and wealthy people who had actually committed some kind of social security fraud.

These findings prompted Amnesty International to publicly call for the system’s immediate discontinuation in November 2024, which it described at the time as “dehumanising” and “akin to a witch hunt”.

Introduced by Försäkringskassan in 2013, the ML-based system assigns risk scores to social security applicants, which then automatically triggers an investigation if the risk score is high enough.

According to a blog post published by IMY on 18 November 2025, Försäkringskassan was specifically using the system to conduct targeted checks on recipients of temporary child support benefits – which are designed to compensate parents for taking time off work when they have to care for their sick children – but took it out of use over the course of the authority's investigation.

“While the inspection was ongoing, the Swedish Social Insurance Agency took the AI system out of use,” said IMY lawyer Måns Lysén. “Since the system is no longer in use and any risks with the system have ceased, we have assessed that we can close the case. Personal data is increasingly being processed with AI, so it is welcome that this use is being recognised and discussed. Both authorities and others need to ensure that AI use complies with the [General Data Protection Regulation] GDPR and now also the AI regulation, which is gradually coming into force.”

IMY added that Försäkringskassan “does not currently plan to resume the current risk profile”.

Under the European Union’s AI Act, which came into force on 1 August 2024, the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation by deployers to carry out an assessment of human rights risks and guarantee there are mitigation measures in place before using them. Specific systems that are considered as tools for social scoring are prohibited.

Computer Weekly contacted Försäkringskassan about the suspension of the system, and why it elected to discontinue it before IMY’s inspection had concluded.

“We discontinued the use of the risk assessment profile in order to assess whether it complies with the new European AI regulation,” said a spokesperson. “We have at the moment no plans to put it back into use since we now receive absence data from employers among other data, which is expected to provide a relatively good accuracy.”

Försäkringskassan previously told Computer Weekly in November 2024 that “the system operates in full compliance with Swedish law”, and that applicants entitled to benefits “will receive them regardless of whether their application was flagged”.

In response to Lighthouse and SvB’s claims that the agency had not been fully transparent about the inner workings of the system, Försäkringskassan added that “revealing the specifics of how the system operates could enable individuals to bypass detection”.

Similar systems

AI-based systems used by other countries to distribute benefits or investigate fraud have faced similar problems.

In November 2024, for example, Amnesty International exposed how AI tools used by Denmark’s welfare agency were creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.

In the UK, an internal assessment by the Department for Work and Pensions (DWP) – released under Freedom of Information (FoI) rules to the Public Law Project – found that an ML system used to vet thousands of Universal Credit benefit payments was showing “statistically significant” disparities when selecting who to investigate for possible fraud.

Carried out in February 2024, the assessment showed there is a “statistically significant referral … and outcome disparity for all the protected characteristics analysed”, which included people’s age, disability, marital status and nationality.

Civil rights groups later criticised the DWP in July 2025 for a “worrying lack of transparency” over how it is embedding AI throughout the UK’s social security system, where the technology is being used to determine people’s eligibility for social security schemes such as Universal Credit or Personal Independence Payment.

In separate reports published around the same time, both Amnesty International and Big Brother Watch highlighted the clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.


We Gave These Android-Ready Earbuds a 9/10, and They’re Just $180

If you’re an esteemed Android user like me, and you felt left out of yesterday’s deal on the AirPods Pro 3, I’ve got you covered today with an even bigger discount on the Pixel Buds Pro 2. Both Amazon and Best Buy have the hazel color marked down from $229 to $180, a $49 discount on Google’s most upgraded wireless earbuds.

Photograph: Julian Chokkattu

The first change you’ll notice from the previous generation Pixel Buds Pro is that the newer model is much lighter, and the buds are 27 percent smaller. As a result, these are an excellent choice for anyone with small ears, and they stay put super well. Reviewer Parker Hall “had no problem doing hours of tree pruning and going on long sweaty runs in Portland’s early fall heat wave.”

With some help from top-notch physical sound isolation, the active noise-canceling on these is just as good as Apple’s and even goes toe-to-toe with big hitters like Bose and Sony. The transparency mode works just as well, too, with a wider range and clearer audio than a lot of other headphones offer. When it’s time to actually turn up the tunes, you can enjoy a wide, natural soundstage that has excellent detail in the midrange and clear, sparkling treble.

The Gemini integration, unfortunately, leaves a bit to be desired. It’s not the smoothest experience, particularly when asking multiple questions, and the Pixel Buds Pro 2 aren’t offering anything that other earbuds can’t do. Apple’s live translations and heart rate monitors are more useful features, but if you’re on Android, you’re locked out of them anyway.

If you’re interested in upgrading your earbud game, and you already have a Pixel, you can grab the Pixel Buds Pro 2 in hazel for $180 from either Amazon or Best Buy. If that color doesn’t suit you, I also spotted lesser discounts on the peony color for $189, or the porcelain color for $210. For anyone who isn’t already sold on the Pixel Buds Pro 2, make sure to swing by our guide to the best wireless earbuds, with picks for both Apple and Android owners.

‘Uncanny Valley’: Pentagon vs. ‘Woke’ Anthropic, Agentic vs. Mimetic, and Trump vs. State of the Union


Guys, before we go to break, there’s something very near and dear to my heart that WIRED wrote about this week. It’s something I love even more than biathlon. It is undersea internet cables.

Leah Feiger: I love when you talk about this. I think that the first time you brought this up to me was approximately one week into your tenure as executive editor, and you’re like, “Leah, do you know what I love?” and it’s undersea internet cables.

Brian Barrett: Yeah. I was like, “Number one, undersea internet cables. Number two, my children. Number three …” that was sort of the gist of it. That’s how I always introduce myself. I want to take everybody back to December 14th, 1988. The top movie in theaters is Twins starring Arnold Schwarzenegger and Danny DeVito.

Zoë Schiffer: Legitimately never heard of it.

Leah Feiger: Wait, Zoë. What?

Brian Barrett: What? Anyway, Arnold is agentic and Danny DeVito’s mimetic. The top song—

Zoë Schiffer: Now I get it.

Brian Barrett: —the top song is “Look Away” by Chicago. Now that, I also am not—I don’t remember that one at all. And the first undersea fiber optic cable connecting the United States, UK and France went live. This was the day that the internet went global, which is crazy—

Zoë Schiffer: That is crazy.

Brian Barrett: —that it was relatively recent. The reason we’re writing about it now is that that original cable, which is called TAT-8, is being pulled up. It’s out of commission. It’s old, it’s decrepit, so I identify, and it’s being pulled up and put out to pasture because the technology’s gotten better. But in this great feature that we published, it is a look at how this changed the world basically, and how we take for granted—but the reason I am so into undersea cable stories is because it’s so easy to forget that the internet is a physical thing and that the maintenance of those things is really what makes all this connectivity happen. So yeah, TAT-8. Any other fond memories of TAT-8? Or, no. What did you guys think reading this feature?

Zoë Schiffer: Well, famously we were not alive in 1988.

Leah Feiger: Yeah. Sorry, Brian. You’re older than us. Just a reminder.

Brian Barrett: Hurts.

Zoë Schiffer: But the part of this story that I wanted to talk about, which felt like a real intersection of both of your interests was the myth of the shark attacks.

Brian Barrett: Oh, yeah.

Leah Feiger: OK. So to back up a little bit, these cables, at the very beginning, when they were put in, Brian would be able to talk about this way more because he’s kind of a freak about cables if you haven’t realized already. These cables would sometimes have unexplained damage, and looking back on it years later, engineers figured out that this kind of happens, that if you are putting cables undersea, there will be wind, there will be changes, things will get moved around. Of course, there will be damage, but that is not how they felt at the time. These engineers assumed that it was sharks, that sharks were biting their cables, that they were destroying the internet. The cables were reinforced with all these protective layers, all of these things, because they were like, “Oh, my God, the sharks are quite literally ending all of this for us.” But this article goes into great detail of how they figured out it wasn’t the sharks, and by thinking that it was the sharks, it actually helped make all of this technology that much better and stronger. But the sharks were innocent, you guys. The sharks were innocent.

This AI Agent Is Designed to Not Go Rogue


AI agents like OpenClaw have recently exploded in popularity precisely because they can take the reins of your digital life. Whether you want a personalized morning news digest, a proxy that can fight with your cable company’s customer service, or a to-do list auditor that will do some tasks for you and prod you to resolve the rest, agentic assistants are built to access your digital accounts and carry out your commands. This is helpful—but has also caused a lot of chaos. The bots are out there mass-deleting emails they’ve been instructed to preserve, writing hit pieces over perceived snubs, and launching phishing attacks against their owners.

Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain designed to add a critical layer of control. Instead of the agent directly interacting with the user’s systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy—you could even think of it as a constitution—that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then runs them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.

“Services like OpenClaw are at peak hype right now, but my hope is that there’s an opportunity to say, ‘Well, this is probably not how we want to do it,’” Provos says. “Instead, let’s develop something that still gives you very high utility, but is not going to go into these completely uncharted, sometimes destructive, paths.”

IronCurtain’s ability to take intuitive, straightforward statements and turn them into enforceable, deterministic—or predictable—red lines is vital, Provos says, because LLMs are famously “stochastic” and probabilistic. In other words, they don’t necessarily always generate the same content or give the same information in response to the same prompt. This creates challenges for AI guardrails, because AI systems can evolve over time such that they revise how they interpret a control or constraint mechanism, which can result in rogue activity.

An IronCurtain policy, Provos says, could be as simple as: “The agent may read all my email. It may send email to people in my contacts without asking. For anyone else, ask me first. Never delete anything permanently.”

IronCurtain takes these instructions, turns them into an enforceable policy, and then mediates between the assistant agent in the virtual machine and what’s known as the model context protocol server that gives LLMs access to data and other digital services to carry out tasks. Being able to constrain an agent this way adds an important component of access control that web platforms like email providers don’t currently offer because they weren’t built for the scenario where both a human owner and AI agent bots are all using one account.
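To make the idea concrete, here is a minimal sketch of what a deterministic policy gate of this kind might look like, using the plain-English email policy quoted above. The function names, action types and contact list are illustrative assumptions, not IronCurtain's actual API; in the real system the rules would be compiled from natural language by an LLM and enforced between the sandboxed agent and the model context protocol server.

```python
# Hypothetical sketch of IronCurtain-style policy mediation.
# All names here are assumptions for illustration, not the project's real API.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str    # e.g. "read_email", "send_email", "delete_email_permanently"
    target: str  # e.g. a recipient address or mailbox name

# Stand-in for the owner's contact list.
CONTACTS = {"alice@example.com", "bob@example.com"}

def evaluate(action: Action) -> str:
    """Deterministic rules compiled from the plain-English policy:
    read anything; email contacts freely; ask before emailing strangers;
    never delete permanently."""
    if action.kind == "read_email":
        return "allow"                # "may read all my email"
    if action.kind == "send_email":
        if action.target in CONTACTS:
            return "allow"            # contacts need no confirmation
        return "ask_user"             # "for anyone else, ask me first"
    if action.kind == "delete_email_permanently":
        return "deny"                 # "never delete anything permanently"
    return "ask_user"                 # unknown actions escalate to the owner
```

Because the gate is plain code rather than a model, the same action always yields the same verdict, which is exactly the predictability that a probabilistic LLM guardrail cannot guarantee on its own.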

Provos notes that IronCurtain is designed to refine and improve each user’s “constitution” over time as the system encounters edge cases and asks for human input about how to proceed. The system, which is model-independent and can be used with any LLM, is also designed to maintain an audit log of all policy decisions over time.

IronCurtain is a research prototype, not a consumer product, and Provos hopes that people will contribute to the project to explore and help it evolve. Dino Dai Zovi, a well-known cybersecurity researcher who has been experimenting with early versions of IronCurtain, says that the conceptual approach the project takes aligns with his own intuition about how agentic AI needs to be constrained.
