The largest western AI labs are taking a break from sniping at one another to partner on a new accelerator program for European startups building applications on top of their models. Paris-based incubator Station F will run the program, named F/ai.
On Tuesday, Station F announced it had partnered with Meta, Microsoft, Google, Anthropic, OpenAI, and Mistral, which it says marks the first time the firms are all participating in a single accelerator. Other partners include cloud and semiconductor companies AWS, AMD, Qualcomm, and OVH Cloud.
An accelerator is effectively a crash course for early-stage startups, whereby founders attend classes and lectures, consult with specialists, and receive introductions to potential investors and customers. The broad aim is to help startups bring ideas to market as quickly as possible.
The 20 startups in each F/ai cohort will undergo a curriculum geared specifically toward helping European AI startups generate revenue earlier in their lifecycle, in turn making it easier to secure the funding required to expand into the largest global markets. “We’re focusing on rapid commercialization,” says Roxanne Varza, director at Station F, in an interview with WIRED. “Investors are starting to feel like, ‘European companies are nice, but they’re not hitting the $1 million revenue mark fast enough.’”
The accelerator will run for three months, twice a year. The first edition began on January 13. Station F has not revealed which startups make up the cohort, but many were recommended by Sequoia Capital, General Catalyst, Lightspeed, or one of the other VC firms involved in the program. The startups are all building AI applications on top of the foundational models developed by the partnering labs, in areas ranging from agentic AI to procurement and finance.
In lieu of direct funding, participating founders will receive more than $1 million in credits that can be traded for access to AI models, compute, and other services from the partner firms.
With very few exceptions, European companies have so far lagged behind their American and Chinese counterparts at every stage of the AI production line. To close that gap, the UK and EU governments are throwing hundreds of millions of dollars at efforts to support homegrown AI firms and to develop the domestic data center and power infrastructure necessary to train and operate AI models and applications.
In the US, tech accelerators like Y Combinator have produced a crop of household names, including Airbnb, Stripe, DoorDash, and Reddit. OpenAI was itself established in 2015 with the help of funding from Y Combinator’s then research division. Station F intends for F/ai to have a similar impact in Europe, making domestic AI startups competitive on the international stage. “It’s for European founders with a global ambition,” says Varza.
The program also represents a chance for the US-based AI labs to sow further seeds in Europe, using subsidies to incentivize a new generation of startups to build atop their technologies.
Once a developer begins to build on top of a particular model, it is rarely straightforward to swap to an alternative, says Marta Vinaixa, partner and CEO at VC firm Ryde Ventures. “When you build on top of these systems, you’re also building for how the systems behave—their quirkiness,” she says. “Once you start with a foundation, at least for the same project, you’re not going to change to another.”
The earlier in a company’s lifecycle it begins to develop on top of a particular model, says Vinaixa, the more that effect is magnified. “The sooner that you start, the more that you accumulate, the more difficult it becomes,” she says.
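To make that switching cost concrete, compare what a minimal request looks like against two of the partner labs’ APIs. The sketch below is illustrative rather than drawn from any F/ai startup’s code: the model names are placeholders, and only the publicly documented OpenAI and Anthropic Python SDKs are assumed.

```python
# Minimal sketch: the "same" request to two foundation-model providers.
# Even at this scale, the SDKs disagree on where the system prompt lives,
# which parameters are mandatory, and the shape of the response.

from openai import OpenAI  # pip install openai
import anthropic           # pip install anthropic

SYSTEM = "You are a procurement assistant."

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},  # system prompt is a message
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content or ""

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,  # required here, optional in the call above
        system=SYSTEM,    # system prompt is a separate field, not a message
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text
```

The API differences are the shallow end of the problem; the deeper cost is that a startup’s prompts, evaluations, and guardrails all end up tuned to one model’s behavior, the “quirkiness” Vinaixa describes, so the longer a company builds, the more there is to retune after a switch.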
There isn’t a one-size-fits-all when it comes to toys aimed at providing accessibility or inclusion, just like there isn’t one type of disability. Very few toys or brands are actually made with disability at the forefront, the exception being Cute Little Fuckers, a queer-, trans-, and disabled-owned sex toy brand. (I tested three of the brand’s toys, above.)
So instead, I thought of my own needs as someone with upper-limb disabilities, and I talked to other disabled folks, including those who use wheelchairs or have lower-body disabilities, to find out what they look for in their sex toys. This included tools like slings, pillows, and chairs that help with positioning during sex (or solo play). (More on that below.)
Since I have a vagina and upper-limb disabilities, many of the toys I tested were aimed at people like me, but many, like app-connected G-spot and clitoral toys, have similar versions with the same in-app features for people with penises or those who prefer anal play.
I took many factors into consideration, including weight, length, and girth; whether the toy was easy to hold or could be wedged; whether you could just lie on it or use it in multiple positions; and whether it could be controlled via buttons (and how difficult those might be to press), in-app, or with a remote control. Once you know what you need from a toy to make it work for your body and ability, it’s easier to narrow down which one will work best.
I tested several sex toy holders, including those that fit into a pillow for mounting or lying on, and one that suctions to surfaces or straps into place. I also tested several toys that someone can just grind against, lie on, or sit on.
I wasn’t able to test a hand harness, which keeps the toy secured in your hand, as it didn’t fit my small hand, but these can offer a more controlled way to hold a sex toy than wedging it with pillows, grinding against it, or using a surface mount.
The Liberator Wedge also came highly recommended to me, but I wasn’t able to test it either. This angled pillow makes sex easier for people with non-normative bodies or those who suffer from pain, helping them reach the angles and positions needed to relieve pressure. As I mentioned above, a pillow also helps achieve deeper penetration with partners who have smaller penises or bigger bodies, where genitals can be trickier to reach without additional help.
Brands like IntimateRider make chairs and sex accessories for wheelchair users, paraplegics, and others who have spinal cord injuries and similar disabilities where traditional sex may not be an option without these valuable tools.
On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI.
While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to the technology.
The government has stated that AI should be regulated by the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing.
Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the greatest hurdle in regulating the technology. “There is a great deal more that we would like to do in this area if we had more resources,” she said.
Highlighting how the EHRC’s budget has remained frozen at £17.1m since 2012, which was then the minimum amount required for the commission to perform its statutory functions, Stephenson told MPs and Lords that this freeze is equivalent to a 35% cut in real terms.
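As a rough illustration of that arithmetic, and assuming cumulative UK inflation of roughly 54% between 2012 and the hearing (an assumed figure; the session did not specify the deflator used), a budget frozen in cash terms loses about a third of its real value:

```latex
% Illustrative real-terms calculation for a budget frozen at £17.1m since 2012.
% The 54% cumulative-inflation figure is an assumption, not from the hearing.
\[
\text{real value} \approx \frac{\text{£17.1m}}{1.54} \approx \text{£11.1m},
\qquad
\text{real-terms cut} = 1 - \frac{1}{1.54} \approx 0.35 = 35\%.
\]
```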
Regulators told the committee that the legal framework is largely in place to address AI-related discrimination and rights harms through the Equality Act.
The constraint is therefore in capacity and resources, not a lack of statutory powers. As a result, much of the enforcement is reactive rather than proactive.
Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”
Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.
However, regulators also stressed that they are technology-neutral; their powers with regard to AI are limited to the use case and deployment level. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products.
The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry.
“Big business, lots of jobs, capable of doing enormous good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specific medicines regulator in this country or any developed country, even though there might be privacy issues and general human rights issues.”
Regulators were in favour of a coordinating body to bring stronger cross-regulator mechanisms rather than a single super-regulator. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators that cover specific domains.
Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 to strengthen the working relationship between digital regulators and now comprising four members.
It has created cross-regulatory teams to share knowledge and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption.
The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform compulsory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.
Spread of misinformation
Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI.
“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.
While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s Digital Services Act, the bloc’s counterpart to the Online Safety Act.
Because of the cross-border nature of large tech companies, Breeze noted that legal action on discrimination can sometimes be taken using European legislation.
Age regulation and the Online Safety Act
Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restrictions on access to online pornography.
Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.
Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing products deemed harmful to them … and there is no impregnable defence that you can create on the internet against a determined person, adult or child.”
He said that, according to the evidence, the majority of children who report seeing harmful content were not looking for it.
The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms and could lead to further public disenfranchisement.
Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.
The recent exploitation of CVE-2026-21509 by Russia’s APT28 group, just days after Microsoft disclosed and patched it, isn’t merely another security incident to file away. It’s a flashing red warning that aggregation risk, our collective dependence on a single default software platform, is creating systemic exposure in a world where spreadsheets and spyware are equally viable tools of warfare.
APT28, also known as Fancy Bear, BlueDelta and Forest Blizzard, isn’t some shadowy newcomer. This unit of Russia’s GRU military intelligence has been wreaking havoc since at least 2007. They may have interfered in the 2016 US presidential election, and they have compromised the World Anti-Doping Agency, targeted Nato, and conducted countless operations against Ukrainian infrastructure. They’re sophisticated, relentless, and have a particular fondness for Microsoft’s ecosystem.
In recent years, they’ve exploited vulnerabilities in Microsoft Exchange, Outlook, and now Office itself. Their tradecraft isn’t opportunistic – it’s industrial-scale cyber warfare executed with military precision.
Severe Office vulnerability
We witnessed their latest attack only recently, and the timeline gives cause for concern: Microsoft issued an out-of-band patch for a high-severity Office vulnerability on 26 January.
Three days later, malicious documents exploiting that exact flaw started circulating in Ukraine. Phishing lure files appear to have been crafted within 24 hours of Microsoft disclosing the software flaw, a single day after the patch dropped.
Think about that timeline – this is an adversary that was either tipped off, had advance access, or was already weaponising the vulnerability before the patch even existed.
CVE-2026-21509 is a security feature bypass – the kind of flaw attackers exploit by tricking users into opening crafted Office files. In this case, those files delivered MiniDoor malware, designed to harvest and exfiltrate victims’ emails, alongside PixyNetLoader, designed to implant further malicious software on compromised systems.
The problem is structural. IT professionals know that deploying patches isn’t instantaneous; it takes time, even if automated updates can be relatively quick in some cases. But in a conflict zone wrestling with bandwidth constraints, outdated systems, and limited access to enterprise-grade licensing, that vulnerability window becomes a chasm.
If Ukrainian organisations are running older Office builds because they lack resources for restrictive, subscription-based licensing, or can’t afford IT automation for patching, they’re sitting ducks. This is a strategic liability, and other nations need to understand the systemic risk they too face.
Microsoft’s patching cadence deserves further scrutiny, and this incident is a reminder that delays matter even outside of active conflict zones. When vulnerabilities are actively exploited before patches arrive or are installed, we’re no longer managing risk; we’re documenting damage and recovering from incidents.
Delays in Microsoft patch deployment shouldn’t be inevitable – when your patch management depends on manual schedules, restricted bandwidth, or enterprise support you can’t access, that window of exposure turns your estate into a shooting gallery for groups like APT28.
Recent Azure outages, whether from cyber attacks or botched updates, have demonstrated how a single point of failure in Redmond can cascade globally. When national governments, critical infrastructure, and essential services run on cloud platforms controlled by one company, we’re not just talking about vendor lock-in. We’re talking about digital colonialism disguised as convenience, and it introduces systemic risk.
Market concentration compounds this risk. When a single platform is effectively the default across governments and corporations globally, vulnerabilities don’t stay isolated – they fester and spread.
Licensing models and interoperability barriers that discourage diversification entrench this monoculture. The result is aggregation risk on a geopolitical scale, in which the platform’s bugs are potential weapons in grey-zone conflicts, every user is a potential target, and every attachment could be a trap.
This isn’t just a cyber security challenge – it’s a market structure problem. Structural risks require structural remedies. Bodies like the UK Competition and Markets Authority (CMA) and the European Commission’s Directorate-General for Competition have a clear role here, by ensuring that concentration in productivity and cloud services does not translate into national and global security vulnerabilities.
The ability to diversify and introduce real competition in secure cloud and productivity ecosystems is becoming a matter of digital sovereignty and defence resilience.
The way forward
So what’s the path forward? Microsoft must rethink vulnerability disclosure and patching for high-impact products, introducing faster mitigation pathways and protective heuristics that can be deployed before formal patches are released.
Enterprises and governments need to invest in automated patch management and redundancy planning.
And regulators need to recognise that monoculture is inseparable from security risk.
The next frontier of cyber security policy isn’t just about defending networks – it’s about making markets safer by design.
Bill McCluggage was director of IT strategy and policy in the Cabinet Office and deputy government CIO from 2009 to 2012.