Health workers are urging NHS decision-makers not to sign contracts with controversial US data analytics firm Palantir, citing ethical concerns around human rights and data privacy.
In 2023, Palantir won a seven-year, £330m NHS England contract to deliver the Federated Data Platform (FDP), a nationwide system intended to connect disparate healthcare data from across the NHS while maintaining security and patient privacy.
While the system is not yet fully operational, many hospital trusts and integrated care boards (ICBs) have already signed up to use the platform.
Highlighting how Palantir’s operations around the world have allegedly contributed to “human rights abuses, war crimes, discriminatory policing practices and mass surveillance”, Medact said the firm’s cosiness with law enforcement and border agencies could lead to “data-driven state abuses of power” if people’s sensitive health information is shared with these bodies.
“This report is concerned that the FDP, by bringing together disparate health datasets onto a single platform run by Palantir, could enable UK government departments, such as the Home Office and police departments, to more easily access patient data,” it said.
Medact added that Palantir’s services to other governments, including in its contract with US Immigration and Customs Enforcement (ICE), have “involved significant cross-departmental data compiling and analysis”, enabling data given to one government department to be repurposed for profiling and surveillance by others.
“As well as the potential risk for a current or future UK government to attempt to emulate US-style cross-governmental data sharing, there is a serious risk of Palantir’s contract alienating patients most affected by health inequalities due to this perceived risk,” said Medact, adding that during the pandemic, health advocacy group Patients not Passports found that around 57% of migrants avoided seeking healthcare because they were concerned about being reported to or identified by the Home Office.
Medact said it is concerned that this situation will be made worse by the involvement of Palantir, given its enthusiasm for working with ICE and the existing data-sharing agreements in place between the UK Home Office and the NHS.
These concerns are compounded by the prospect of a Reform UK government, as the party has already pledged to facilitate “mass deportations” if it wins power.
According to a Reform policy document published in August 2025, titled Operation restoring justice, the party is aiming to implement an “uncompromising legal reset” and promises to “relentlessly identify and detain all illegal migrants in the UK”. It stated: “Using powers granted by the new legislation, it will automatically share data between the Home Office, NHS, HMRC, DVLA, banks and the police.”
Alongside Palantir’s stated intention to dominate national software provision in the US and allied countries, as well as its active contracts with UK police forces and the Ministry of Defence (MoD), Medact warned that there is a real threat of its involvement undermining data privacy and public trust in UK healthcare institutions.
Policing and military contracts
Outside of its close collaboration with ICE – which is currently engaged in aggressive mass deportation efforts across the US, using unidentified masked agents to conduct operations, and employing fascist rhetoric in its communications and recruitment drives – Medact also highlighted how Palantir assists violent military and policing institutions.
This includes supplying software to the US military during its illegal wars of aggression in Iraq and Afghanistan, providing police forces across the US and Europe with widely critiqued digital “predictive policing” tools, and supplying artificial intelligence (AI) products to the Israeli military.
Storebrand Asset Management, one of the largest asset managers in the Nordic region, divested its holding in Palantir in October 2024, stating that its research indicates that Palantir’s “AI-based predictive policing systems” support Israeli surveillance of Palestinians in the West Bank and Gaza.
Given Palantir’s penchant for working with defence and policing organisations, Medact reiterated that the firm’s involvement in the FDP and other NHS systems represents a clash of values that could undermine public trust.
It added that Palantir is also “likely to benefit reputationally” from NHS contracts, essentially allowing the firm to launder its public image through association with a popular institution.
“We argue that NHS England’s contract with Palantir is likely to strengthen Palantir’s software and reputation as a company,” said Medact. “Given the highly interoperable nature of Palantir’s different civil and military products, this could indirectly result in the NHS contributing to the advancement of militarised technology used to commit alleged human rights abuses.”
Medact added that, given Palantir’s questionable track record on surveillance and human rights around the world, adopting its technology could see hospital trusts, ICBs and NHS England fall foul of their own ethical procurement policies.
It added that there is a risk of trusts and ICBs being locked into a single supplier, reducing their “ability to transfer to a different supplier or retain full autonomy over the code behind their data management systems”.
In particular, the CDAON cited issues of public trust associated with Palantir’s handling of sensitive health data, and highlighted that viable alternatives already exist.
“We already have similar tools in use that presently exceed the capability and application of what the FDP is currently trying to develop or roll out at a system level,” they wrote.
Medact’s report has been sent to decision-makers sitting across the NHS, including trust boards, ICBs, health scrutiny committees and the Health Data Governance Committee.
Recommendations and Palantir response
To alleviate the concerns identified in its report, Medact has recommended that NHS decision-makers decline to implement the FDP or any other Palantir products in their local data systems, scrutinise their current contracts with the supplier, and investigate the feasibility of in-house or open source alternatives.
Medact has called for NHS England to immediately terminate its Palantir contract.
A spokesperson for Palantir said the firm’s “software is playing an important role in improving patient care – helping to deliver 100,000 additional operations, a 12% reduction in discharge delays and the removal of 675,000 patients from waiting lists”.
They added: “How that software is used is entirely under the control of the NHS, with data only able to be processed in accordance with their strict instructions.”
The spokesperson said the firm also has “no intention of and no means of using the data in the way that the Medact report is suggesting”, adding that “to do so would be illegal and in breach of contract”.
The firm also said that it is “a matter of company policy” not to support predictive policing applications, that its work with ICE is long-standing and dates back to the Obama administration, and that there are “comprehensive” data processing safeguards and controls in place for the FDP.
“Palantir engineers are only able to access NHS data under the direction of the data controllers. This only takes place for appropriate engineering activities like data pipeline deployment and product support tasks,” the company said.
“The technology includes granular access controls and full auditability, ensuring that individuals within the institutions we serve can access only the information necessary to perform their roles. It also provides a clear, traceable record of who accessed specific data, when they accessed it, and for what purpose.”
Palantir added that while it has not been involved in the most high-profile Israel Defense Forces (IDF) AI targeting systems, “we are, however, very proud of the work and support we have provided to Israel following the vicious attacks of October 7th”.
Higher education has long been a target of ransomware gangs and data extortion attacks. But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States.
The widely used digital learning platform Canvas was put into “maintenance mode” on Thursday after its maker, the education tech giant Instructure, suffered a data breach and faced an extortion attempt by attackers using the recognizable moniker “ShinyHunters.” Though the hackers have been advertising the breach and attempting to extract a ransom payment from Instructure since May 1, the situation took on additional immediacy for regular people across the US and beyond on Thursday because the Canvas downtime caused chaos at schools, including those in the midst of finals and end-of-year assignments.
Universities like Harvard, Columbia, Rutgers, and Georgetown sent alerts to students about the situation in recent days; other institutions, including school districts in at least a dozen states, also appear to have been affected. In a list published by the hackers behind the attack on their ransom-focused dark web site, they claim the breach affected more than 8,800 schools. The exact scale and reach of the breach is currently unclear, though. And the fact that Canvas was down throughout Thursday afternoon and evening further complicated the picture.
In a running incident update log that began on May 1, Steve Proud, Instructure’s chief information security officer, said that the company had “recently experienced a cybersecurity incident perpetrated by a criminal threat actor.” He added on May 2 that “the information involved” for “users at affected institutions” included names, email addresses, student ID numbers, and messages exchanged by users on the platform.
The situation was ultimately marked as “Resolved” on Wednesday, with Proud writing that “Canvas is fully operational, and we are not seeing any ongoing unauthorized activity.” At midday on Thursday, though, the Instructure status page registered an “issue” where “some users are having difficulties logging into Student ePortfolios.” Within a few hours, the company had added another status update: “Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode.” Late Thursday evening, the company said that Canvas was available again “for most users.”
TechCrunch reported on Thursday that the hackers launched a secondary wave of attacks, defacing some schools’ Canvas portals by injecting an HTML file to display their own message on the schools’ Canvas login pages. According to The Harvard Crimson, attackers modified the Harvard Canvas login page to show a message that included a list of schools that the hackers claim were impacted by the breach.
The message from attackers “urged schools included on the affected list to consult with a cyber advisory firm and contact the group privately to negotiate a settlement before the end of the day on May 12—or else risk their data being leaked,” The Crimson reported. “It is unclear what information tied to Harvard affiliates was included in the alleged breach.”
Instructure did not immediately respond to a request for comment about Thursday’s outages and how they fit into the bigger picture of the breach. But the situation is significant given that a massive trove of student information has potentially been exposed, and the visibility of the incident across the country makes it a key example of a longstanding, yet endlessly escalating problem of data extortion and ransomware attacks.
The ShinyHunters name is associated with massive data dumps and has been linked to the infamous hacker collective known as the Com. But as the constellation of actors has shifted over the years, numerous attackers have taken up the most prominent Com-related monikers. A number of recent attacks have invoked other names, such as Lapsus$, with little or no connection to the original group that operated under the name.
But Microsoft executives had reservations about sending additional funding to OpenAI as far back as 2018 when it was just a small nonprofit research lab, according to emails between more than a dozen Microsoft executives, including CEO Satya Nadella, shown in a federal court on Thursday during the Musk v. Altman trial.
The emails show how Microsoft, at the time, wavered over what has since been held up as one of the most successful corporate partnerships in tech history. Several Microsoft executives said in the emails that their visits to OpenAI did not indicate any imminent breakthroughs in developing artificial general intelligence. In 2017, much of OpenAI’s work was focused on building AI systems that could play video games, which showed early signs of success. But OpenAI needed five times more computing power than it had originally secured from Microsoft to continue the project.
Microsoft worried that not providing support could push OpenAI into the arms of Amazon, the world’s dominant cloud computing provider at the time. Roughly 18 months after the emails were sent, Microsoft announced a landmark $1 billion investment in OpenAI after the lab created a for-profit arm that provided the tech giant with the potential to generate a return of $20 billion.
Microsoft declined to comment.
Elon Musk’s attorneys introduced the emails to show Microsoft’s evolving relationship with OpenAI. After Musk reached out to Nadella, Microsoft in 2016 agreed to provide $60 million worth of cloud computing services to OpenAI at a steep discount. OpenAI consumed the services twice as fast as expected.
The email chain kicked off on August 11, 2017, with Nadella reaching out to OpenAI CEO Sam Altman to congratulate the lab on winning a video game competition using AI to mimic a human player. Ten days later, Altman responded seeking $300 million worth of Microsoft Azure cloud computing services.
“We could figure how to fund some of it but not that much,” Altman wrote, apparently seeking a financial handout and engineering help. “I think it will be the most impressive thing yet in the history of AI.”
Nadella asked four lieutenants for their input on how to respond three days later. Microsoft’s AI team saw “no value in engaging,” according to a response from Jason Zander, Microsoft’s executive vice president, that also documented how other teams felt. Its research team thought its own work was “more advanced,” while the public relations team didn’t like the idea of supporting a group pushing the idea of “machines beating humans.” Ultimately, Zander suggested that Azure would benefit from associating with Musk and Altman but that he wouldn’t want to “take a complete bath,” or large financial hit, in doing so.
A subsequent analysis showed that Microsoft stood to lose about $150 million over several years if it provided the services Altman wanted, according to one email. “Unless he can help us draw a more direct networking effect with OpenAI -> Microsoft business value, we will wind up having to pass,” Zander wrote.
The thread went dark for several months, but was revived on January 10, 2018, with an email to Nadella from Brett Tanzer—who signed off his emails with “Brettt”—then a director on the Azure cloud unit. Altman had told Tanzer that OpenAI could license its gaming AI to Microsoft’s Xbox video game division in exchange for “$35-50 million in Azure Credits.” But Xbox couldn’t commit that much money. Microsoft planned to tell Altman there would be no more discounts after that March, per Tanzer’s email.
Brian Barrett: This is the first time I’ve thought about contact tracing in many years, and I was so happy not thinking about it for so long, because it is such a complicated process and something that is really hard work to do. Emily, given all of that, what is the level of concern here, given what the World Health Organization and other organizations have said? It sounds like we should be cautious about it, but maybe it’s not freak-out time yet. But I defer to you, because maybe that’s just me trying to make myself feel better.
Emily Mullin: No, I think you’re right. The hantavirus expert I spoke with said there have been past clusters of the Andes strain before, but not big outbreaks. And these clusters have tended to involve prolonged close contact with people suffering from the disease. This is a virus that does not spread nearly as efficiently as other respiratory viruses that we’re used to like Covid or flu, for instance. Hantavirus symptoms are also typically pretty severe. So this is not a virus, again, like Covid where lots of people are going around infected with the disease, spreading it asymptomatically without knowing about it. So that’s at least a little bit of comfort, even though the flip side of that is that the disease is quite severe. So the World Health Organization says the risk to the general public is currently low, and this is probably not another Covid situation.
Brian Barrett: Leah, how we feeling?
Leah Feiger: Not good, you guys. I don’t know. Are you kidding? How are you feeling? Maybe this is my moment to go, “Are you with me yet?”
Brian Barrett: No, I was good, but then Emily hit that probably pretty hard in a way that I suddenly felt a little more anxious.
Leah Feiger: Yeah, it was the swallowing of the probably.
Emily Mullin: That was me editorializing. The World Health Organization did not include the probably.
Brian Barrett: OK. What if they had it just in italics or big quotation marks? Like it’s “probably” fine.
Leah Feiger: I don’t know, guys. I think, one, I’m fascinated that there’s different strains of this. And it brought me back so early on to the armchair scientists in early Covid who were like, “No, no, no, this is totally fine.” So for there to officially be announced, yes, this is the strain that can get passed between humans, I think is notable at the very least. Got to give me that.
Brian Barrett: Oh, I think that’s true. And I think my open questions are, how long do these people have to stay on this ship before everyone says, “OK, you can go now,” or do they send them back to shore and just have them isolate for a certain amount of time? The contact tracing is concerning because again, I’m having flashbacks. But I do think the things that, Emily, that you said about how this is different from Covid in important ways in terms of how quickly it can spread, how easily it can spread, especially now that we have the mechanisms in place to do these contact tracing things, I’m going to remain on my not too worried yet.