AI Industry Rivals Are Teaming Up on a Startup Accelerator


The largest Western AI labs are taking a break from sniping at one another to partner on a new accelerator program for European startups building applications on top of their models. Paris-based incubator Station F will run the program, named F/ai.

On Tuesday, Station F announced it had partnered with Meta, Microsoft, Google, Anthropic, OpenAI, and Mistral, which it says marks the first time all of those firms have participated in a single accelerator. Other partners include cloud and semiconductor companies AWS, AMD, Qualcomm, and OVH Cloud.

An accelerator is effectively a crash course for early-stage startups, whereby founders attend classes and lectures, consult with specialists, and receive introductions to potential investors and customers. The broad aim is to help startups bring ideas to market as quickly as possible.

The 20 startups in each F/ai cohort will undergo a curriculum geared specifically toward helping European AI startups generate revenue earlier in their lifecycle, in turn making it easier to secure the funding required to expand into the largest global markets. “We’re focusing on rapid commercialization,” says Roxanne Varza, director at Station F, in an interview with WIRED. “Investors are starting to feel like, ‘European companies are nice, but they’re not hitting the $1 million revenue mark fast enough.’”

The accelerator will run for three months, twice a year. The first edition began on January 13. Station F has not revealed which startups make up the cohort, but many were recommended by Sequoia Capital, General Catalyst, Lightspeed, or one of the other VC firms involved in the program. The startups are all building AI applications on top of foundation models developed by the partnering labs, in areas ranging from agentic AI to procurement and finance.

In lieu of direct funding, participating founders will receive more than $1 million in credits that can be traded for access to AI models, compute, and other services from the partner firms.

With very few exceptions, European companies have so far lagged behind their American and Chinese counterparts at every stage of the AI production line. To try to close that gap, the UK and EU governments are throwing hundreds of millions of dollars at attempts to support homegrown AI firms, and develop the domestic data center and power infrastructure necessary to train and operate AI models and applications.

In the US, tech accelerators like Y Combinator have produced a crop of household names, including Airbnb, Stripe, DoorDash, and Reddit. OpenAI was itself established in 2015 with the help of funding from Y Combinator’s then research division. Station F intends for F/ai to have a similar impact in Europe, making domestic AI startups competitive on the international stage. “It’s for European founders with a global ambition,” says Varza.

The program also represents a chance for the US-based AI labs to sow further seeds in Europe, using subsidies to incentivize a new generation of startups to build atop their technologies.

Once a developer begins to build on top of a particular model, it is rarely straightforward to swap to an alternative, says Marta Vinaixa, partner and CEO at VC firm Ryde Ventures. “When you build on top of these systems, you’re also building for how the systems behave—their quirkiness,” she says. “Once you start with a foundation, at least for the same project, you’re not going to change to another.”

The earlier in a company’s lifecycle it begins to develop on top of a particular model, says Vinaixa, the more that effect is magnified. “The sooner that you start, the more that you accumulate, the more difficult it becomes,” she says.



How to Set Up an Apple Watch for Your Kids


Unpairing is supposed to erase all content and settings on your watch, but in my case, it did not. If it doesn’t work for you either, tap Settings on the watch, then General > Reset > Erase All Content and Settings.

At this point, you can have your kid put it on (if it’s charged). The watch will say Bring iPhone Near Apple Watch. Open the Watch app on your iPhone and choose Set Up for a Family Member. Aim the phone’s viewfinder at the animation on the watch face to pair, or select Pair Manually.

Apple’s tutorial is pretty straightforward from this point. I picked a passcode that’s easy for my daughter to remember and selected her from my family list. I opted to continue cellular service. Then I set up all the usual features and services for an Apple Watch, including Ask to Buy so she couldn’t buy anything from the App Store without my permission, Messages, and Emergency SOS.

I also chose to limit my daughter’s contacts on the watch. First, go to Settings > iCloud > Contacts on your phone and make sure it’s toggled on. Then back out and go to Settings > Screen Time > [your family member] > Communication Limits. You’ll need to request your child’s permission to manage their contacts and approve it from the kid’s watch. You can then add contacts to their watch from your own contact list and rename them (Dad becomes “Grandpa,” Tim becomes “Uncle Timmy,” and so on).

The last step is turning on Schooltime, which is basically a remote-controlled version of an adult Work Focus. It blocks apps and complications, but emergency calls can still come through. The setup tutorial walks you through how to set up Schooltime on your child’s watch, but if you skip it during setup, you can manage it later. On your iPhone, tap All Watches > Your Child’s Watch > Schooltime > Edit Schedule.

I elected to turn Schooltime on when my child is in school and turn it off during afterschool care, but you can also tap Add Time if you’d like to turn it on during a morning class, take a break for lunch, and then turn it back on again. Your kid can just turn the Digital Crown to exit Schooltime, but that’s OK, because you can check their Schooltime reports on your iPhone too.

To manage your child’s watch, go to the Watch app > All Watches > Family Watches > Your Kid’s Apple Watch. This is how you install updates and manage settings. For more settings that you can turn on or off, check Apple’s full list. For example, you can check health details, set up a Medical ID, or even edit their smart replies.

Fun for Everyone

Just as with a grown-up Apple Watch, the first thing you’ll probably want to do is switch the watch face. Press and hold the screen, wait for the face to shrink, and swipe to switch. (You probably also want to buy a tiny kid-specific watch band.)

We got my daughter an Apple Watch so that I’d be able to see her on Find My and she could contact me via phone or the Messages app, which she does with regrettable frequency.



I’m Physically Disabled, and I Have a Vibrant Sex Life. These Accessible Sex Toys Help


There isn’t a one-size-fits-all when it comes to toys aimed at providing accessibility or inclusion, just like there isn’t one type of disability. Very few toys or brands are actually made with disability at the forefront, the exception being Cute Little Fuckers, a queer-, trans-, and disabled-owned sex toy brand. (I tested three of the brand’s toys.)

So instead, I thought of my own needs as someone with upper-limb disabilities, and I talked to other disabled folks, including those who use wheelchairs or have lower-body disabilities, to find out what they look for in their sex toys. This included tools like slings, pillows, and chairs that help with positioning during sex (or solo play). (More on that below.)

Since I have a vagina and upper-limb disabilities, many of the toys I tested were aimed at people like me, but many, like app-connected G-spot and clitoral toys, have similar versions with the same in-app features for people with penises or those who prefer anal play.

I took many factors into consideration, including weight, length, and girth; whether the toy was easy to hold or could be wedged; whether you could just lie on it or use it in multiple positions; and whether it could be controlled via buttons (and how difficult those might be to press), in-app, or with a remote control. Once you work out what you need from a toy to make it work for your body and ability, it becomes easier to narrow down which toy would work best.

I tested several sex toy holders, including those that fit into a pillow for mounting or lying, and a sex toy holder that suctions to surfaces or straps into place. I also tested several toys that someone can just grind against, lie on, or sit on.

I wasn’t able to test a hand harness, which keeps the toy secured to your hand, as it didn’t fit my small hand, but these can be a more controlled way to hold a sex toy than wedging it with pillows, grinding on it, or using a surface mount.

The Liberator Wedge also came highly recommended to me, but I wasn’t able to test it either. This angled pillow makes sex easier for people with non-normative bodies or those who suffer from pain, as it helps them reach the angles and positions needed to relieve pressure. As I mentioned above, a pillow also helps to achieve deeper penetration with partners with smaller penises or bigger bodies, where genitals can be trickier to reach without additional help.

Brands like IntimateRider make chairs and sex accessories for wheelchair users, paraplegics, and others who have spinal cord injuries and similar disabilities where traditional sex may not be an option without these valuable tools.



Lack of resources greatest hurdle for regulating AI, MPs told | Computer Weekly


Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems. 

On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI. 

While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to the technology.

The government has stated that AI should be regulated by the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing. 

Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the greatest hurdle in regulating the technology. “There is a great deal more that we would like to do in this area if we had more resources,” she said.

Highlighting how the EHRC’s budget has remained frozen at £17.1m since 2012, which was then the minimum amount required for the commission to perform its statutory functions, Stephenson told MPs and Lords that this is equivalent to a 35% cut in real terms.

Regulators told the committee that the legal framework is largely in place to address AI-related discrimination and rights harms through the Equality Act.  

The constraint is therefore in capacity and resources, not a lack of statutory powers. As a result, much of the enforcement is reactive rather than proactive.

Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”

Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.

However, regulators also stressed that they are technology-neutral; their powers with regard to AI are limited to the use-case and deployment level. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products.

The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry. 

“Big business, lots of jobs, capable of doing enormous good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specific medicines regulator in this country or any developed country, even though there might be privacy issues and general human rights issues.”

Regulators favoured a coordinating body that would strengthen cross-regulator mechanisms, rather than a single super-regulator. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators that cover specific domains.

Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 to strengthen the working relationships between the UK’s digital regulators.

It has created cross-regulatory teams to share knowledge and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption. 

The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform compulsory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.

Spread of misinformation 

Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI. 

Ofcom clarified that, under the UK’s Online Safety Act, it does not have the power to regulate the spread of misinformation on social media. 

“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.

While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s Digital Services Act, the bloc’s counterpart to the Online Safety Act.

Breeze noted that, because of the cross-border nature of large tech companies, legal action on discrimination can sometimes be taken using European legislation.

Age regulation and the Online Safety Act

Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restrictions on access to online pornography.

Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.

Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing products deemed harmful to them … and there is no impregnable defence that you can create on the internet against a determined person, adult or child.”

He said that, according to the evidence, most children who report seeing harmful content were not looking for it.

The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms and could lead to further public disenfranchisement.

Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.

Carlo added that there is real potential for AI-enabled mass surveillance to “spiral out of control”, and that a system built for one purpose could easily be deployed for another “in the blink of an eye”.


