Lack of resources greatest hurdle for regulating AI, MPs told | Computer Weekly

Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems. 

On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI. 

While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to the technology.

The government has stated that AI should be regulated by the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing. 

Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the greatest hurdle in regulating the technology. “There is a great deal more that we would like to do in this area if we had more resources,” she said.

Highlighting how the EHRC’s budget has remained frozen at £17.1m since 2012, which was then the minimum amount required for the commission to perform its statutory functions, Stephenson told MPs and Lords that this is equivalent to a 35% cut.

Regulators told the committee that the legal framework is largely in place to address AI-related discrimination and rights harms through the Equality Act.  

The constraint is therefore in capacity and resources, not a lack of statutory powers. As a result, much of the enforcement is reactive rather than proactive.

Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”

Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.

However, the regulators also stressed that they are technology-neutral, and that their powers over AI apply only at the use case and deployment level. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products. 

The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry. 

“Big business, lots of jobs, capable of doing enormous good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specific medicines regulator in this country or any developed country, even though there might be privacy issues and general human rights issues.”

Regulators were in favour of a coordinating body to bring stronger cross-regulator mechanisms rather than a single super-regulator. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators that cover specific domains.

Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 to strengthen the working relationship between four regulators. 

It has created cross-regulatory teams to share knowledge and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption. 

The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform compulsory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.

Spread of misinformation 

Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI. 

Ofcom clarified that, under the UK’s Online Safety Act, it does not have the power to regulate the spread of misinformation on social media. 

“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.

While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s counterpart to the Online Safety Act, the Digital Services Act. 

Because of the cross-border nature of large tech companies, Breeze noted that legal action on discrimination can sometimes be taken using European legislation.

Age regulation and the Online Safety Act

Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restrictions on access to online pornography.

Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.

Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing products deemed harmful to them … and there is no impregnable defence that you can create on the internet against a determined person, adult or child.”

He said that, according to the evidence, the majority of children who report seeing harmful content were not looking for it. 

The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms and could lead to further public disenfranchisement.

Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.

Carlo added that there is real potential for AI-enabled mass surveillance to “spiral out of control”, and that a system built for one purpose could easily be deployed for another “in the blink of an eye”.



How to Set Up an Apple Watch for Your Kids

Unpairing is supposed to erase all content and settings on your watch, but in my case, it did not. If it doesn’t work for you either, tap Settings on the watch, then General > Reset > Erase All Content and Settings.

At this point, you can have your kid put it on (if it’s charged). The watch will say Bring iPhone Near Apple Watch. If you open the Watch app, it lets you choose to Set Up for a Family Member. Aim the phone’s viewfinder at the slowly moving animation to pair, or select Pair Manually.

Apple’s tutorial is pretty straightforward from this point. I picked a passcode that’s easy for my daughter to remember and picked her from my family list. I continued cellular service. Then I set up all the usual features and services for an Apple Watch, including Ask to Buy (so she couldn’t buy anything from the App Store without my permission), Messages, and Emergency SOS.

I also chose to limit my daughter’s contacts on the watch. First, go to Settings > iCloud > Contacts on your phone and make sure it’s toggled on. Then back out and go to Settings > Screen Time > your family member > Communication Limits. You need to request your child’s permission to manage their contacts and approve it from the kid’s watch. On their watch, you can add and rename contacts from your contact list (Dad becomes “Grandpa,” Tim becomes “Uncle Timmy,” and so on).

The last step is turning on Schooltime, which is basically a remote-controlled version of an adult Work Focus. It blocks apps and complications, but emergency calls can still come through. The setup tutorial walks you through how to set up Schooltime on your child’s watch, but if you skip it during setup, you can manage it later. On your iPhone, tap All Watches > Your Child’s Watch > Schooltime > Edit Schedule.

I elected to turn Schooltime on when my child is in school and turn it off during afterschool care, but you can also tap Add Time if you’d like to turn it on during a morning class, take a break for lunch, and then turn it back on again. Your kid can just turn the Digital Crown to exit Schooltime, but that’s OK—you can check their Schooltime reports on your iPhone too.

To manage your child’s watch, open the Watch app and go to All Watches > Family Watches > your kid’s Apple Watch. This is how you install updates and manage settings. For more settings that you can turn on or off, check out Apple’s full list. For example, you can check health details, set up a Medical ID, or even edit their smart replies.

Fun for Everyone

Just as with a grown-up Apple Watch, the first thing you’ll probably want to do is switch the watch face. Press and hold the screen, wait for the face to shrink, then swipe to switch. (You probably also want to buy a tiny kid-specific watch band.)

We got my daughter an Apple Watch so I’d be able to see her in Find My, and so she could contact me via phone or the Messages app, which she does with regrettable frequency.



London Assembly member: Police should halt facial-recognition technology use | Computer Weekly

The Metropolitan Police’s rapid, “unchecked” expansion of live facial-recognition (LFR) technology is taking place without clear legal authority and with minimal public accountability, says Green London Assembly member Zoë Garbett in a call for the force to halt its deployments of the controversial technology.

Made during an ongoing government consultation on a legal framework for the technology, Garbett’s call for an immediate halt is informed by concerns around LFR’s disproportionate effects on Black and brown communities, a lack of specific legal powers dictating how police can use the tech, and the Met’s opacity around the true costs of deploying it.

Garbett’s intervention also comes as the High Court is considering the lawfulness of the Met’s approach to LFR, and whether it has effective safeguards or constraints in place to protect people’s human rights from the biometric surveillance being conducted.

“Live facial-recognition technology subjects everyone to constant surveillance, which goes against the democratic principle that people should not be monitored unless there is suspicion of wrongdoing,” said Garbett, adding that there have already been instances of “real harm” in children being wrongly placed on watchlists, and the disproportionate targeting and misidentification of Black Londoners.

“These invasive tools allow the police to monitor the daily lives of Londoners, entirely unregulated and without any safeguards. The Met repeatedly claim that live facial recognition is a success, yet they continue to withhold the data required to scrutinise those claims.

“It makes no sense for the home secretary to announce the expansion of live facial recognition at the same time as running a government consultation on the use of this technology. This expansion is especially concerning given that there is still no specific law authorising the use of this technology.”

Highlighting in a corresponding report how facial-recognition technology “flips the presumption of innocence” by turning public spaces into an “identification parade”, Garbett also outlined ways in which both the Met and the Home Office can make the technology’s use safer in lieu of a full-blown ban.

This includes creating primary legislation with “strict controls” that limits LFR to the most serious crimes and bans its use by non-law enforcement public authorities or the private sector; and openly publishing deployment assessments so that watchlist creation, location choice and tactical decisions are publicly available for Londoners to review.

On watchlist creation specifically, Garbett dismissed the police claim that LFR is a “precise” tool, highlighting how nearly every watchlist used is larger than the one preceding it.

Highlighting how the number of faces being scanned by the Met is “increasing at a near exponential rate”, Garbett likened the force’s watchlist tactics to a “fishing trawler” that it keeps adding to so it can find people.

“Data suggests that rather than making a new unique watchlist for each deployment based on the likelihood of people being in the area of the deployment, it seems from the outside that the MPS is just adding additional people on to a base watchlist [it has],” she said.

Garbett also called on the Met to publish the true financial and operational costs of all LFR deployments, arguing that the force has not only failed to provide a compelling business case for the technology, but is actively obfuscating this information.

“The MPS has a history of a lack of transparency. This is perhaps best summarised by Baroness Casey in her review of the MPS where she said, ‘The Met itself sees scrutiny as an intrusion. This is both short-sighted and unethical. As a public body with powers over the public it needs to be transparent to Londoners for its actions to earn their trust, confidence and respect’,” said Garbett.

She added that while freedom of information requests returned in mid-2023 revealed the force had spent £500,000 on the tech up until that point, without up-to-date, reliable figures it is impossible to verify the Met’s claims that it is delivering a greater impact on public safety through LFR.

“The NHS wouldn’t be able to roll out a new treatment without being able to prove it was worthwhile and effective, but it seems that the police operate under their own rules and seemingly answer to no one,” said Garbett.

Computer Weekly contacted the Met about Garbett’s report. A spokesperson said that LFR “has taken more than 1,700 dangerous offenders off the streets since the start of 2024, including those wanted for serious offences, such as violence against women and girls. This success has meant 85% of Londoners support our use of the technology to keep them safe.

“It has been deployed across all 32 boroughs in London, with each use carefully planned to ensure we are deploying to areas where there is the greatest threat to public safety. A hearing into our use of live facial recognition has taken place and we look forward to receiving the High Court’s decision in due course. We remain confident our use of LFR is lawful and follows the policy which is published online.”

A lack of meaningful consultation so far

While the use of LFR by police – beginning with the Met’s deployment at Notting Hill Carnival in August 2016 – has already ramped up massively in recent years, there has so far been minimal public debate or consultation, with the Home Office claiming for years that there is already a “comprehensive” legal framework in place.

The lack of meaningful engagement with the public by police and government over facial recognition is reflected in Garbett’s report. She highlights, for example, that Newham Council unanimously passed a motion in January 2023 to suspend the use of LFR throughout the borough until biometric and anti-discrimination safeguards are in place.

While the motion highlighted the potential of LFR to “exacerbate racist outcomes in policing” – particularly in Newham, the most ethnically diverse of all local authorities in England and Wales – both the Met and the Home Office said that they would press forward with the deployments anyway.

“Since that motion was passed, LFR has been used 31 times in Newham by the MPS,” said Garbett.

On the deployment of permanent LFR cameras mounted to street furniture in Croydon, Garbett added that while the Met promised it would consult with the local community, local councillors have told her the force did not follow through with this consultation.

The technology was similarly rolled out in Lewisham without meaningful consultation, despite the Met’s claims to the contrary.

However, in December 2025, the Home Office launched a 10-week consultation on the use of LFR by UK police, allowing interested parties and members of the public to share their views on how the controversial technology should be regulated.

The department has said that although a “patchwork” legal framework for police facial recognition exists (including for the increasing use of the retrospective and “operator-initiated” versions of the technology), it does not give police themselves the confidence to “use it at significantly greater scale…nor does it consistently give the public the confidence that it will be used responsibly”.

It added that the current rules governing police LFR use are “complicated and difficult to understand”, and that an ordinary member of the public would be required to read four pieces of legislation, police national guidance documents and a range of detailed legal or data protection documents from individual forces to fully understand the basis for LFR use on their high streets.

Consultation responses

In a section on how people can respond to the Home Office’s LFR consultation, Garbett urged people to call for a ban on the technology, adding that further protections in lieu of one could include requiring a warrant before someone can be placed on a watchlist, and limiting its use to “the most serious and urgent crime purposes”.

She noted that, as it stands, the Met has not used LFR to make any terror-related arrests, with the most common offences being variations on theft or court order breaches.

“In a recent press release, the lead example the MPS give for how they have used LFR is using it to arrest a 36-year-old woman who was wanted for failing to appear at court for an assault in 2004 when they were probably 15 years old,” she said. “The public might feel differently about LFR if they knew it was being used on cases such as these.”

On the permanent installation of LFR cameras in Croydon, Garbett added that while the police have said they are only switched on when an operation is taking place, “there is still the potential for 24/7 monitoring, with Londoners unable to tell if the cameras are operational or not. This makes the feeling of being under surveillance in London feel routine and begins to be a slippery slope to preventative policing and a blurry line between safety and social control.”  

Garbett concluded that the rapid deployment of LFR must stop until safeguards are put in place to protect people’s rights: “I urge everyone to respond to the government consultation and use the guide I’ve prepared to make sure we have a say in how this technology is used going forward.”

Computer Weekly contacted the Home Office about the contents of Garbett’s report and its decision to massively expand facial-recognition deployments before concluding its consultation, but received no response.



AI Industry Rivals Are Teaming Up on a Startup Accelerator

The largest western AI labs are taking a break from sniping at one another to partner on a new accelerator program for European startups building applications on top of their models. Paris-based incubator Station F will run the program, named F/ai.

On Tuesday, Station F announced it had partnered with Meta, Microsoft, Google, Anthropic, OpenAI and Mistral, which it says marks the first time the firms are all participating in a single accelerator. Other partners include cloud and semiconductor companies AWS, AMD, Qualcomm, and OVH Cloud.

An accelerator is effectively a crash course for early-stage startups, whereby founders attend classes and lectures, consult with specialists, and receive introductions to potential investors and customers. The broad aim is to help startups bring ideas to market as quickly as possible.

The 20 startups in each F/ai cohort will undergo a curriculum geared specifically toward helping European AI startups generate revenue earlier in their lifecycle, in turn making it easier to secure the funding required to expand into the largest global markets. “We’re focusing on rapid commercialization,” says Roxanne Varza, director at Station F, in an interview with WIRED. “Investors are starting to feel like, ‘European companies are nice, but they’re not hitting the $1 million revenue mark fast enough.’”

The accelerator will run for three months, twice a year. The first edition began on January 13. Station F has not revealed which startups make up the cohort, but many were recommended by Sequoia Capital, General Catalyst, Lightspeed, or one of the other VC firms involved in the program. The startups are all building AI applications on top of the foundational models developed by the partnering labs, in areas ranging from agentic AI to procurement and finance.

In lieu of direct funding, participating founders will receive more than $1 million in credits that can be traded for access to AI models, compute, and other services from the partner firms.

With very few exceptions, European companies have so far lagged behind their American and Chinese counterparts at every stage of the AI production line. To try to close that gap, the UK and EU governments are throwing hundreds of millions of dollars at attempts to support homegrown AI firms, and develop the domestic data center and power infrastructure necessary to train and operate AI models and applications.

In the US, tech accelerators like Y Combinator have produced a crop of household names, including Airbnb, Stripe, DoorDash, and Reddit. OpenAI was itself established in 2015 with the help of funding from Y Combinator’s then research division. Station F intends for F/ai to have a similar impact in Europe, making domestic AI startups competitive on the international stage. “It’s for European founders with a global ambition,” says Varza.

The program also represents a chance for the US-based AI labs to sow further seeds in Europe, using subsidies to incentivize a new generation of startups to build atop their technologies.

Once a developer begins to build on top of a particular model, it is rarely straightforward to swap to an alternative, says Marta Vinaixa, partner and CEO at VC firm Ryde Ventures. “When you build on top of these systems, you’re also building for how the systems behave—their quirkiness,” she says. “Once you start with a foundation, at least for the same project, you’re not going to change to another.”

The earlier in a company’s lifecycle it begins to develop on top of a particular model, says Vinaixa, the more that effect is magnified. “The sooner that you start, the more that you accumulate, the more difficult it becomes,” she says.


