Lack of resources greatest hurdle for regulating AI, MPs told | Computer Weekly
Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems.
On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI.
While at least 13 regulators in the UK have remits relating to AI, there is no single regulator dedicated to the technology.
The government has stated that AI should be regulated by the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing.
Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the greatest hurdle in regulating the technology. “There is a great deal more that we would like to do in this area if we had more resources,” she said.
Highlighting how the EHRC’s budget has remained frozen at £17.1m since 2012, which was then the minimum amount required for the commission to perform its statutory functions, Stephenson told MPs and Lords that the freeze amounts to a 35% cut in real terms.
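For illustration, the arithmetic behind a real-terms cut is straightforward: a budget frozen in cash terms loses purchasing power as prices rise. The sketch below uses an assumed figure of roughly 54% cumulative inflation since 2012; that figure is an assumption chosen to reproduce the 35% result, not a number given in evidence.

```python
# Illustrative only: how a cash-frozen budget becomes a real-terms cut.
# The cumulative inflation figure is an assumption chosen to show how a
# roughly 35% real-terms cut arises; it is not taken from the hearing.

NOMINAL_BUDGET_M = 17.1               # EHRC budget, frozen since 2012 (£m)
ASSUMED_CUMULATIVE_INFLATION = 0.54   # assumed price growth, 2012 to date

real_value_m = NOMINAL_BUDGET_M / (1 + ASSUMED_CUMULATIVE_INFLATION)
real_terms_cut = 1 - real_value_m / NOMINAL_BUDGET_M

print(f"Today's purchasing power of £{NOMINAL_BUDGET_M}m: £{real_value_m:.1f}m")
print(f"Equivalent real-terms cut: {real_terms_cut:.0%}")  # ~35%
```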
Regulators told the committee that the legal framework is largely in place to address AI-related discrimination and rights harms through the Equality Act.
The constraint is therefore one of capacity and resources, not a lack of statutory powers. As a result, much of the enforcement is reactive rather than proactive.
Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”
Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.
However, regulators also stressed that they are technology-neutral; their powers with regard to AI are limited to the use case and deployment level. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products.
The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry.
“Big business, lots of jobs, capable of doing enormous good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specific medicines regulator in this country or any developed country, even though there might be privacy issues and general human rights issues.”
Regulators were in favour of a coordinating body to bring stronger cross-regulator mechanisms rather than a single super-regulator. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators that cover specific domains.
Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 to strengthen the working relationship between digital regulators; its four members today are the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the ICO and Ofcom.
It has created cross-regulatory teams to share knowledge and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption.
The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform compulsory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.
Spread of misinformation
Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI.
Ofcom clarified that, under the UK’s Online Safety Act, it does not have the power to regulate the spread of misinformation on social media.
“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.
While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s counterpart to the Online Safety Act, the Digital Services Act.
Because of the cross-border nature of large tech companies, Breeze noted that legal action on discrimination can sometimes be taken using European legislation.
Age regulation and the Online Safety Act
Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restrictions on access to online pornography.
Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.
Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing products deemed harmful to them … and there is no impregnable defence that you can create on the internet against a determined person, adult or child.”
He said that, according to the evidence, the majority of children who report seeing harmful content were not looking for it.
The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms and could lead to further public disenfranchisement.
Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.
Carlo added that there is real potential for AI-enabled mass surveillance to “spiral out of control”, and that a system built for one purpose could easily be deployed for another “in the blink of an eye”.
How to Set Up an Apple Watch for Your Kids
Unpairing is supposed to erase all content and settings on your watch, but in my case, it did not. If it doesn’t work for you either, tap Settings on the watch, then General > Reset > Erase All Content and Settings.
At this point, you can have your kid put it on (if it’s charged). The watch will say Bring iPhone Near Apple Watch. If you open the Watch app, it lets you choose to Set Up for a Family Member. Aim the phone’s viewfinder at the slowly moving animation to pair, or select Pair Manually.
Apple’s tutorial is pretty straightforward from this point. I picked a passcode that’s easy for my daughter to remember and selected her from my family list. I continued with cellular service. Then I set up all the usual features and services for an Apple Watch, including Ask to Buy, so she can’t buy anything from the App Store without my permission, as well as Messages and Emergency SOS.
I also chose to limit my daughter’s contacts on the watch. First, go to Settings > iCloud > Contacts on your phone and make sure it’s toggled on. Then go back to Settings > Screen Time > [Family Member] > Communication Limits. You need to request your child’s permission to manage their contacts and approve it from the kid’s watch. Once they approve, you can add and rename contacts for their watch from your own contact list (Dad becomes “Grandpa,” Tim becomes “Uncle Timmy,” and so on).
The last step is turning on Schooltime, which is basically a remote-controlled version of an adult Work Focus. It blocks apps and complications, but emergency calls can still come through. The setup tutorial walks you through how to set up Schooltime on your child’s watch, but if you skip it during setup, you can manage it later. On your iPhone, tap All Watches > Your Child’s Watch > Schooltime > Edit Schedule.
I elected to turn Schooltime on when my child is in school and off during afterschool care, but you can also tap Add Time if you’d like to turn it on during a morning class, take a break for lunch, and then turn it back on again. Your kid can just turn the Digital Crown to exit Schooltime, but that’s OK: you can check their Schooltime reports on your iPhone too.
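For readers who prefer to see the logic spelled out, Schooltime amounts to a set of daily on/off windows, and “Add Time” just means adding another window. The toy sketch below models that scheduling concept; it is purely illustrative and has nothing to do with Apple’s actual APIs.

```python
# Toy model of Schooltime-style scheduling: the watch is restricted during
# any configured window, and "Add Time" adds another window to the list.
# Purely illustrative; this is not Apple's API.

from datetime import time

SCHOOLTIME_BLOCKS = [
    (time(9, 0), time(12, 0)),    # morning classes
    (time(13, 0), time(15, 30)),  # afternoon classes (break for lunch)
]

def schooltime_active(now: time) -> bool:
    """Return True if the watch should be in Schooltime right now."""
    return any(start <= now < end for start, end in SCHOOLTIME_BLOCKS)

print(schooltime_active(time(10, 30)))  # True: mid-morning class
print(schooltime_active(time(12, 30)))  # False: lunch break
```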
To manage your child’s watch, go to the Watch app > All Watches > Family Watches > Your Kid’s Apple Watch. This is where you install updates and manage settings. For more settings you can turn on or off, check out Apple’s full list. For example, you can check health details, set up a Medical ID, or even edit their smart replies.
Fun for Everyone
Just as with a grown-up Apple Watch, the first thing you’ll probably want to do is switch the watch face. Press and hold the screen, wait for the face to shrink, then swipe to switch. (You’ll probably also want to buy a tiny kid-specific watch band.)
We got my daughter an Apple Watch so I’d be able to see her on Find My, and so she could contact me via phone or the Messages app, which she does with regrettable frequency.
AI Industry Rivals Are Teaming Up on a Startup Accelerator
The largest western AI labs are taking a break from sniping at one another to partner on a new accelerator program for European startups building applications on top of their models. Paris-based incubator Station F will run the program, named F/ai.
On Tuesday, Station F announced it had partnered with Meta, Microsoft, Google, Anthropic, OpenAI, and Mistral, which it says marks the first time the firms are all participating in a single accelerator. Other partners include cloud and semiconductor companies AWS, AMD, Qualcomm, and OVH Cloud.
An accelerator is effectively a crash course for early-stage startups, whereby founders attend classes and lectures, consult with specialists, and receive introductions to potential investors and customers. The broad aim is to help startups bring ideas to market as quickly as possible.
The 20 startups in each F/ai cohort will undergo a curriculum geared specifically toward helping European AI startups generate revenue earlier in their lifecycle, in turn making it easier to secure the funding required to expand into the largest global markets. “We’re focusing on rapid commercialization,” says Roxanne Varza, director at Station F, in an interview with WIRED. “Investors are starting to feel like, ‘European companies are nice, but they’re not hitting the $1 million revenue mark fast enough.’”
The accelerator will run for three months, twice a year. The first edition began on January 13. Station F has not revealed which startups make up the cohort, but many were recommended by Sequoia Capital, General Catalyst, Lightspeed, or one of the other VC firms involved in the program. The startups are all building AI applications on top of the foundational models developed by the partnering labs, in areas ranging from agentic AI to procurement and finance.
In lieu of direct funding, participating founders will receive more than $1 million in credits that can be traded for access to AI models, compute, and other services from the partner firms.
With very few exceptions, European companies have so far lagged behind their American and Chinese counterparts at every stage of the AI production line. To try to close that gap, the UK and EU governments are throwing hundreds of millions of dollars at attempts to support homegrown AI firms, and develop the domestic data center and power infrastructure necessary to train and operate AI models and applications.
In the US, tech accelerators like Y Combinator have produced a crop of household names, including Airbnb, Stripe, DoorDash, and Reddit. OpenAI was itself established in 2015 with the help of funding from Y Combinator’s then research division. Station F intends for F/ai to have a similar impact in Europe, making domestic AI startups competitive on the international stage. “It’s for European founders with a global ambition,” says Varza.
The program also represents a chance for the US-based AI labs to sow further seeds in Europe, using subsidies to incentivize a new generation of startups to build atop their technologies.
Once a developer begins to build on top of a particular model, it is rarely straightforward to swap to an alternative, says Marta Vinaixa, partner and CEO at VC firm Ryde Ventures. “When you build on top of these systems, you’re also building for how the systems behave—their quirkiness,” she says. “Once you start with a foundation, at least for the same project, you’re not going to change to another.”
The earlier in a company’s lifecycle it begins to develop on top of a particular model, says Vinaixa, the more that effect is magnified. “The sooner that you start, the more that you accumulate, the more difficult it becomes,” she says.
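A minimal sketch of the effect Vinaixa describes, under the assumption of a hypothetical app whose prompt scaffolding was tuned to one model’s behaviour; every provider name and quirk below is invented for illustration.

```python
# Hypothetical illustration of foundation-model lock-in: application code
# quietly accumulates assumptions about one model's behaviour, so swapping
# providers means rewriting far more than one API call. Names are invented.

from dataclasses import dataclass

@dataclass
class ProviderQuirks:
    system_prefix: str  # how this model expects system instructions framed
    stop_token: str     # where this model reliably stops generating
    max_context: int    # context window the app's chunking was tuned around

PROVIDER_A = ProviderQuirks("### System:\n", "###", 8_192)  # what we built on
PROVIDER_B = ProviderQuirks("<<SYS>>", "</s>", 32_768)      # the "drop-in" swap

def build_prompt(quirks: ProviderQuirks, instructions: str, user_input: str) -> str:
    # Prompt framing, chunk sizes and output parsing all end up encoding
    # the original model's quirks -- the "quirkiness" founders build for.
    prompt = f"{quirks.system_prefix}{instructions}\n{user_input}"
    assert len(prompt) < quirks.max_context, "chunking tuned to provider A"
    return prompt

print(build_prompt(PROVIDER_A, "Summarise invoices.", "Invoice #123 ..."))
```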
The Security Interviews: Mick Baccio, Splunk | Computer Weekly
A lot of people struggle to pronounce the name of American politician Pete Buttigieg. When Mick Baccio, now global security advisor at Splunk SURGe and Cisco Foundation AI, went to work for him in a previous life, it was helpfully spelled out in large letters on the office wall. Buttigieg says it ‘Boot-edge-edge’, if you were wondering.
“I was like, oh that’s clever, thank you for that,” says Baccio. “I’m going to meet the man in a second, I should know this!”
A former US Navy Reserve intelligence officer who began his political career as the mayor of South Bend in Indiana, Buttigieg served as secretary of transportation during the administration of US president Joe Biden, from 2021 to 2025.
However, before that, he had a tilt at the White House himself, running a primary campaign that won in the state of Iowa, before he dropped out at the start of March 2020 as the Democrats rallied behind Biden.
It was on this campaign that Baccio met Buttigieg, and in conversation with Computer Weekly, he reflects on the experience of bootstrapping cyber security for a US presidential campaign.
Baccio admits he was sceptical about taking the gig at first, having just escaped Washington DC himself after serving as a threat intelligence expert for the Executive Office of the President under both Barack Obama and Donald Trump.
“I got a call one day. They said, ‘Hey, do you want to come be CISO [chief information security officer] for the Buttigieg campaign?’ I said ‘no’. I was like, ‘I’m good’,” he says.
“When you look at a political campaign in the United States, win or lose, you’re going to be unemployed in November.”
Someone must have kept on at him, because the record shows he took the job, and even though “president Buttigieg” never came to pass, Baccio has no regrets about his choices.
“It’s the most fun you’ll have,” he says. “The closest thing to a political campaign, I think, is a startup, but a campaign is a most unique organisation because it’s a non-profit funded entirely by donations and its sole purpose is to elect your mascot.
“Now, I say mascot not in a mean way, but secretary Buttigieg was not involved in day-to-day operations. He didn’t run things in the campaign – he was the campaign. He’s not even the CEO, he’s who we are – we’re Pete for America.”
In such a campaign, the role of CISO takes on a fundamentally different aspect, says Baccio. To start with, most campaign staffers are volunteers, or in their first or second jobs after university. “Most of them don’t even know what a CISO is. I had to explain that a lot, why I was there and what I was doing – teaching folks how to ‘do the cybers’,” says Baccio.
Such a campaign faces challenges that large organisations with security budgets and supportive boards do not. For one thing, every dollar that a political campaign spends on something like cyber security, office furniture, or coffee and doughnuts is a dollar it is not spending on winning votes, so Baccio quickly learned he had to operate lean and operate cheaply.
But despite what tales of Russian espionage and interference in US election cycles might lead you to believe, the campaign faced a threat environment much like any ordinary business.
“I think one of the most under-appreciated threat vectors is just plain old fraud and business email compromise,” says Baccio.
“This is a $100bn a year industry, and we talk a lot about the agentic AI [artificial intelligence] threat, polymorphic-enabled malware, APT [advanced persistent threat], blah blah blah – everybody wants it to be that, but it’s generally fraud,” he adds.
“I never underestimate folks who are just trying to do their job. If your job is to process invoices, it’s all you do all day, if you get a PDF labelled ‘invoice’ you’re going to open it. Fraud is a bigger problem than any APT or AI attack, but I don’t think it’s sexy enough to get column inches.”
Five a day
Indeed, an often-neglected security message, and one Splunk is keen to repeat, is the importance of eating your cyber vegetables – that is to say, nailing the basics.
Having driven around this block several times over the years, Baccio thinks these vegetables account for at least the bottom third of the cyber food pyramid.
“You know you’re supposed to drink lots of water, you’re supposed to eat lots of green things, and if you don’t, your body reflects that,” says Baccio. “And you know you’re supposed to MFA [multifactor authenticate] all the things, you know you’re supposed to segment your network, you know you’re supposed to patch your things – and if you don’t, your network gets popped.
“I’m not saying do all these things and you’ll be okay, I’m saying do all these things and you’ll be in a better position.
“Hackers don’t hack the cloud, they log in. They’ve already bought those credentials from an access broker. They’re not hacking anything. But if I have phishing-resistant MFA available to me, they might not be able to log in, the account takeover won’t happen, and the rest of the cyber attack changes going forward. So it’s those things that I think go a long, long way towards raising that overall bar.”
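To make concrete why phishing-resistant MFA changes the “they just log in” economics, the sketch below shows the origin-binding check at the heart of WebAuthn-style authentication: the browser records which site the user really authenticated to, so credentials harvested on a look-alike domain fail verification. This is a simplified illustration, not a complete WebAuthn implementation.

```python
# Simplified sketch of why WebAuthn-style MFA resists phishing: the browser
# binds the real origin into the signed assertion, so a login captured on a
# look-alike site is rejected. Signature checks, challenges and counters
# are omitted for brevity.

import json

EXPECTED_ORIGIN = "https://example.com"  # the relying party's real origin

def verify_assertion_origin(client_data_json: bytes) -> bool:
    client_data = json.loads(client_data_json)
    # Whatever else an attacker replays, the browser-recorded origin gives
    # away where the user actually authenticated.
    return client_data.get("origin") == EXPECTED_ORIGIN

# Legitimate login: the browser really was on example.com
print(verify_assertion_origin(b'{"type":"webauthn.get","origin":"https://example.com"}'))  # True
# Phished login: the user was on a look-alike domain, so it is rejected
print(verify_assertion_origin(b'{"type":"webauthn.get","origin":"https://examp1e.com"}'))  # False
```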
Blue collar for the blue team
Splunk SURGe was set up to help defenders tackle real-world problems that they face today, with a mix of actionable guidance, in-depth analysis on cyber issues and practical solutions during fast-moving security panics. Think of its output as a cyber buffet with excellent vegetarian options.
SURGe had its genesis during one of those “headless chicken” moments, when unit founder Ryan Kovar was poring over various Slack groups one evening and spotted a lot of chatter surrounding an apparent SolarWinds compromise – heralding the now legendary Sunburst/Solorigate incident.
In the wake of this, Kovar realised there was a big gap in Splunk’s offering, in that the company had pretty good tech and processes when it came to applying data science to security, but wasn’t so hot at cutting through to the human side of things.
In short, it wasn’t being holistic enough.
That said, Kovar – in his own words – “wasn’t sure the world needed yet another security vendor research team”, so he formed SURGe to be a practical resource for users, or “blue collar for the blue team”.
Baccio was intimately involved in the unit’s creation – Kovar credits him with coming up with the “blue collar” line – and several years down the line, he still spends a lot of time helping Splunk’s customers make sense of the security landscape through blogs and other forms of outreach, as well as participating in a regular series, Coffee talk with SURGe.
He reflects: “I’m really lucky that I was in the Buttigieg campaign, that I was at the White House prior to that, the Pentagon, HHS [the Department of Health and Human Services], the CDC [Centers for Disease Control and Prevention], and I’m now able to take all of that experience and bring it into SURGe and say, ‘These are the security things I’ve seen in my career – this is what I believe people want’.”
Threat intel at the foundations of AI
However, since July 2025, SURGe’s core mission has changed somewhat, after it transitioned to work within Cisco Foundation AI, a new initiative by Splunk’s network-centric parent that is developing open-weight, security-specific AI models.
In April 2025, Foundation AI launched Foundation-sec-8b, an eight-billion-parameter large language model (LLM) expressly designed to enable security teams to work faster, act more precisely and scale their operations without compromise.
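Because the model is open-weight, it can in principle be pulled into a standard Hugging Face transformers workflow. The sketch below shows that generic loading pattern; the repo id is an assumption about where the model is published (check the actual model card), and nothing here reflects Splunk- or Cisco-specific tooling.

```python
# Generic open-weight LLM loading sketch using Hugging Face transformers.
# The repo id is an assumption about where Foundation-sec-8b is published;
# verify against the actual model card before relying on it.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Classify this log line as benign or suspicious: 'Failed password for root from 203.0.113.7'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```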
You might reasonably wonder what a threat intelligence unit is doing jumping into bed with a bunch of LLM developers. Baccio himself declares he was shocked when it happened, but now he thinks it may be the smartest move Cisco has made since acquiring Splunk.
He characterises it as bringing SURGe’s collective experience as a steward of threat intelligence and a trusted advisor to customers to bear on a highly technical field and build AI tools that actually help security teams.
The advent of agentic AI in the past 12 to 18 months helps drive this narrative forward, says Baccio, and makes the promise of AI more real, at least compared to where it was a couple of years ago.
“If I throw generalised AI at a cyber problem, it’s not going to be great. But if I built a very specific model to do a very specific thing, then, yeah, that’s what I wanted a year ago when you sold me this AI hype,” he says. “Agentic is focused on one task, and it’s going to do it really well, but don’t ask it to do anything else.”
He cites the work of his colleague Shannon Davis, a principal AI researcher at Foundation AI, as a case in point. Davis created a tool called PLoB – standing for post-logon behaviour – to help detect intrusions instantaneously.
“To my point that you don’t hack the cloud, you just log in – after you have done so, PLoB detects all the activity that you’re doing and will be able to say, ‘This is a malicious actor’ or ‘This is just Mick from research’,” he says.
“Being able to do that at machine speed is something we’re going to have to lean into more when you take into account API calls, non-human identities, and all these things we’re introducing to the Rube Goldberg machine of the internet.
“Learning how agentic is applied becomes critical,” says Baccio as he looks ahead. “We have some stuff going on in the background that I can’t speak to, but we’re actively working together to brainstorm ideas and build these things to help move that Sisyphean security rock further up the hill. I’m excited about that. We’re going to help to keep someone’s security programme a little more secure.”
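The article does not describe PLoB’s internals, but the general pattern of separating “a malicious actor” from “just Mick from research” can be sketched as anomaly detection over post-logon session features. The sketch below is a generic illustration using scikit-learn, with an invented feature set; it is not PLoB itself.

```python
# Generic sketch of post-logon behaviour anomaly detection -- the broad
# pattern behind tools like PLoB, not its implementation. The features
# and data are invented for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

# Per-session features in the first 10 minutes after logon:
# [commands run, distinct hosts touched, privilege escalations, MB moved]
normal_sessions = np.array([
    [12, 1, 0, 5], [8, 1, 0, 2], [15, 2, 0, 8], [10, 1, 0, 4],
    [9, 1, 0, 3], [14, 2, 0, 6], [11, 1, 0, 5], [13, 2, 0, 7],
])

# Train on sessions assumed benign, then score new logons at machine speed.
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

new_sessions = np.array([
    [11, 1, 0, 4],     # looks like "just Mick from research"
    [90, 40, 3, 900],  # mass host-touching plus privilege escalation
])
for features, verdict in zip(new_sessions, detector.predict(new_sessions)):
    label = "benign" if verdict == 1 else "possible malicious actor"
    print(features.tolist(), "->", label)
```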
