ICO publishes summary of police facial recognition audit | Computer Weekly


The Information Commissioner’s Office (ICO) has completed its first-ever data protection audit of UK police forces deploying facial recognition technologies (FRT), noting it is “encouraged” by its findings.

The ICO’s audit, which investigated how South Wales Police and Gwent Police are using and protecting people’s personal information when deploying facial recognition, marks the first time the data regulator has formally audited a UK police force for its use of the technology.

According to an executive summary published on 20 August, the scope of the facial recognition audit – which was agreed with the two police forces beforehand – focused on questions of necessity and proportionality (a key legal test for the deployment of new technologies), whether its design meets expectations around fairness and accuracy, and whether “the end-to-end process” is compliant with the UK’s data protection rules.

“We are encouraged by the findings, which provide a high level of assurance that the processes and procedures currently in place at South Wales Police and Gwent Police are compliant with data protection law,” said the deputy commissioner for regulatory policy, Emily Keaney, in a blog post.

“The forces made sure there was human oversight from trained staff to mitigate the risk of discrimination and ensure no decisions are solely automated, and a formal application process to assess the necessity and proportionality before each LFR deployment,” she wrote.

The executive summary added that South Wales Police and Gwent Police have “comprehensively mapped” their data flows, can “demonstrate the lawful provenance” of the images used to generate biometric templates, and have appropriate data protection impact assessments (DPIAs) in place.

It further added that the data collected “is adequate, relevant and limited to what is necessary for its purpose”, and that individuals are informed about its use “in a clear and accessible manner”.

However, Keaney was clear that the audit only “serves as a snapshot in time” of how the technology is being used by the two police forces in question. “It does not give the green light to all police forces, but those wishing to deploy FRT can learn from the areas of assurance and areas for improvement revealed by the audit summary,” she said.

Commenting on the audit, chief superintendent Tim Morgan of the joint South Wales and Gwent digital services department, said: “The level of oversight and independent scrutiny of facial recognition technology means that we are now in a stronger position than ever before to be able to demonstrate to the communities of South Wales and Gwent that our use of the technology is fair, legitimate, ethical and proportionate.

“We welcome the work of the Information Commissioner’s Office audit, which provides us with independent assurance of the extent to which both forces are complying with data protection legislation.”

He added: “It is important to remember that use of this has never resulted in a wrongful arrest in South Wales and there have been no false alerts for several years as the technology and our understanding has evolved.”

Lack of detail

While the ICO provided a number of recommendations to the police forces, it did not provide any specifics in the executive summary beyond the priority level of the recommendation and whether it applied to the forces’ use of live or retrospective facial recognition (LFR or RFR).

For LFR, it said it made four “medium” and one “low” priority recommendations, while for RFR, it said it made six “medium” and four “low” priority recommendations. For each type of facial recognition, it also listed one “high” priority recommendation.

Computer Weekly contacted the ICO for more information about the recommendations, but received no response on this point.

Although the summary lists some “key areas for improvement” around data retention policies and the need to periodically review various internal procedures, key questions about the deployments are left unanswered by the ICO’s published material on the audit.

For example, before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights – such as the right to privacy – is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.

However, beyond noting that processes are in place, no detail was provided by the ICO on how the police forces are assessing the necessity and proportionality of their deployments, or how these are assessed in the context of watchlist creation.

Although more detail on proportionality and necessity considerations is provided in South Wales Police’s LFR DPIA, it is unclear if any of the ICO’s recommendations concern this process.  

While police forces using facial recognition have long maintained that their deployments are intelligence-led and focus exclusively on locating individuals wanted for serious crimes, senior officers from the Metropolitan Police and South Wales Police previously admitted to a Lords committee in December 2023 that both forces select images for their watchlists based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.

Computer Weekly asked the ICO whether it is able to confirm if this is still the process for selecting watchlist images at South Wales Police, as well as details on how well police are assessing the proportionality and necessity of their deployments generally, but received no response on these points.

While the ICO summary claims the forces are able to demonstrate the “lawful provenance” of watchlist images, the regulator similarly did not respond to Computer Weekly’s questions about what processes are in place to ensure that the millions of unlawfully held custody images in the Police National Database (PND) are not included in facial recognition watchlists.

Computer Weekly also asked why the ICO is only beginning to audit police facial recognition use now, given that it was first deployed by the Met in August 2016 and has been controversial since its inception.

“The ICO has played an active role in the regulation of FRT since its first use by the Met and South Wales Police around 10 years ago. We investigated the use of FRT by the Met and South Wales and Gwent police and produced an accompanying opinion in 2021. We intervened in the Bridges case on the side of the claimant. We have produced follow-up guidance on our expectations of police forces,” said an ICO spokesperson.

“We are stepping up our supervision of AI [artificial intelligence] and biometric technologies – our new strategy includes a specific focus on the use of FRT by police forces. We are conducting an FRT in Policing project under our AI and biometrics strategy. Audits form a core part of this project, which aims to create clear regulatory expectations and scalable good practice that will influence the wider AI and biometrics landscape.

“Our recommendations in a given audit are context-specific, but any findings that have applicability to other police forces will be included in our Outcomes Report due in spring 2026, once we have completed the rest of the audits in this series.”

EHRC joins judicial review

In mid-August 2025, the Equality and Human Rights Commission (EHRC) was granted permission to intervene in an upcoming judicial review of the Met Police’s use of LFR technology, which it claims is being deployed unlawfully.

“The law is clear: everyone has the right to privacy, to freedom of expression and to freedom of assembly. These rights are vital for any democratic society,” said EHRC chief executive John Kirkpatrick.

“As such, there must be clear rules which guarantee that live facial recognition technology is used only where necessary, proportionate and constrained by appropriate safeguards. We believe that the Metropolitan Police’s current policy falls short of this standard.”

He added: “The Met, and other forces using this technology, need to ensure they deploy it in ways which are consistent with the law and with human rights.”

Writing in a blog about the EHRC joining the judicial review, Chris Pounder, director of data protection training firm Amberhawk, said that, in his view, the statement from Kirkpatrick is “precisely the kind of statement that should have been made by” information commissioner John Edwards.

“In addition, the ICO has stressed the need for FRT deployment ‘with appropriate safeguards in place’. If he [Edwards] joined the judicial review process as an interested party, he could get judicial approval for these much vaunted safeguards (which nobody has seen),” he wrote.

“Instead, the ICO sits on the fence whilst others determine whether or not current FRT processing by the Met Police is ‘strictly necessary’ for its law enforcement functions. The home secretary, for her part, has promised a code of practice which will contain an inevitable bias in favour of the deployment of FRT.”

In an appearance before the Lords Justice and Home Affairs Committee on 8 July, home secretary Yvette Cooper confirmed the government is actively working with police forces and unspecified “stakeholders” to draw up a new governance framework for police facial recognition.

However, she did not comment on whether any new framework would be placed on a statutory footing.




Need One Pair for Hiking, Traveling, and Working Out? Try Gravel Running Shoes



HOKA’s max-stacked Rocket X Trail combines road race shoe energy with boosted grip from a 3-mm lugged outsole. If you’re looking for a fast shoe to go on the attack, this is it. It’s also fantastic for all-round comfort. In testing, I laced up the Rocket X Trail and ran for three hours (just short of 19 miles) fresh out of the box, across roads, forest gravel trails, some grass and through some serious water. It delivered efficiency and energy whether I was moving at marathon pace or with heavier, tired, ragged footfalls in the latter miles.

The rockered, supercritical midsole uses HOKA’s liveliest foam, similar to what you find in its race-ready road shoes, along with a carbon plate. Combined, they deliver a really fun ride that’s smooth, springy, fast and remarkably consistent. It’s also highly cushioned, so you will sacrifice a lot of ground feel for that big-stack springy softness, and it’s less stable over very lumpy terrain. But on open, flat, runnable mixed terrain, it’s excellent.

The lightweight uppers have a race-shoe-ready feel, and after running through ankle-deep flooded sections, they shed water really quickly. This is a pricey road-to-trail shoe, but it’s versatile, and there’s plenty of winter road potential, too.

Specs
Weight: 9.45 oz
Heel-to-toe drop: 6 mm
Lug depth: 3 mm




If a Garmin Is Too Expensive, Consider Suunto’s Latest Adventure Watch



It’s always pleasing to see an array of physical buttons, and you get sizable ones too. You’re not going to miss these wide flat ones even when picking the pace up. The silicone strap has a nice stretch to it and while the button clasp is a bit awkward to get into place, this watch does not budge.

Suunto has jumped on the flashlight trend, with an LED light strip sitting on the front of the case. You can adjust brightness levels, and there are SOS and alert modes that emit a very noticeable pulsating light pattern. This is a light I found useful when rooting around indoors as well as on nighttime outings.

The biggest change is the introduction of a 1.5-inch, 466 x 466 AMOLED display. This replaces the dull, albeit very visible, memory-in-pixel (MIP) display. Suunto also ditched the solar charging, which required spending a significant amount of time outside to reap its battery benefits.

Adding AMOLED screens to outdoor watches has been contentious, as the older MIP displays are simply more power-efficient. The Vertical 2’s battery life is down by about 10 days from the older Vertical in what Suunto calls daily use.

Still, even if you’re putting its tracking and mapping features to use, you’re not going to be reaching for the charger every few days. After two hours of tracking in optimal GPS mode, the battery only dropped by 2 to 3 percent. The battery drop outside of tracking is also small and the standby performance is excellent as well.

Software Updates

Photograph: Michael Sawh

A more streamlined set of smartwatch features helps reserve battery for when it really matters. Unfortunately, I probably got better battery life partly because you don’t get phone notifications or responses when it’s paired to an iPhone rather than an Android phone. There’s also no onboard music player, but you do get a pretty slick set of music playback controls that are accessible during tracking.




Edge AI: Business cost, risk and control | Computer Weekly



Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology into a strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI, including data volume, latency, privacy and cost, while allowing companies to make instant decisions to keep up with modern, increasingly automated operations.

As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.

Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, several companies risk implementing edge AI simply to jump on the AI bandwagon, without being fully aware of which situations can most benefit from it.

“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”

As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy. 

Why businesses are moving from cloud-first to hybrid

Businesses are increasingly choosing a more hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved by adopting a full public cloud strategy, instead being faced with sharply surging operational costs. 

These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated budgeting and forecasts.  


On the other hand, with edge AI, companies can run stable and predictable workloads on-premise far more cheaply than in the cloud.

Latency is another overarching concern. Edge AI is often better placed than the cloud to minimise latency for applications that need real-time, high-speed processing. These include operational control systems and local analytics, among others.

In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.

Relying on a single major cloud provider can also mean supplier lock-in, while multicloud environments are increasingly complicated to manage; both factors push organisations toward hybrid approaches.

A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable. 
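This split between cloud training and edge inference can be sketched in code. The following is a minimal, hypothetical Python sketch of the hybrid pattern described above: a lightweight model decides at the edge when it is confident, defers ambiguous cases to a cloud endpoint when the link is up, and degrades gracefully when it is not. All function names, thresholds and values are invented for illustration, not taken from any real deployment.

```python
def classify_local(reading: float) -> tuple[str, float]:
    """Stand-in for a small on-device model: returns (label, confidence)."""
    if reading < 0.2:
        return "normal", 0.95
    if reading > 0.8:
        return "fault", 0.90
    return "uncertain", 0.55  # ambiguous mid-range readings

def classify_cloud(reading: float) -> str:
    """Stand-in for a call to a larger, cloud-hosted model."""
    return "fault" if reading > 0.5 else "normal"

def classify(reading: float, cloud_available: bool, threshold: float = 0.8) -> str:
    label, confidence = classify_local(reading)
    if confidence >= threshold:
        return label                    # decided at the edge: no egress, no round-trip latency
    if cloud_available:
        return classify_cloud(reading)  # defer only the hard cases to the cloud
    return label                        # degrade gracefully when connectivity is lost
```

The design choice worth noting is the last branch: when the link is down, the edge node still returns its best local answer rather than failing, which is the resilience property the article attributes to distributed deployments.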

Edge AI business drivers: What’s real and what’s noise 

At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific, cloud-only limitations, rather than trying to overhaul entire company tech infrastructures.

The need for real-time decision-making has primarily driven edge AI adoption, especially in sectors such as infrastructure, logistics, manufacturing and transport, where latency can have far-reaching operational and financial consequences that the technology can significantly reduce.

Applying edge AI in these sectors helps companies process data closer to where it is generated, enabling them to keep reacting quickly even when central connectivity is lost.

The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws. 

For companies working on critical operations, edge AI can greatly improve operational resilience by making sure that data and intelligence are distributed throughout a number of locations. This helps reduce dependence on centralised systems, which in turn decreases the impact of outages.

However, some business drivers are vastly overestimated when it comes to justifying edge AI. The biggest of these is short-term cost savings. Edge AI can certainly cut transfer and cloud data consumption costs in the long run.

However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.

These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
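The trade-off between upfront capital expenditure and ongoing cloud spend can be made concrete with a back-of-envelope model. Every figure below is a hypothetical placeholder rather than a real quote; the point is the break-even structure, not the numbers.

```python
# Rough cloud-vs-edge cost comparison over a 36-month horizon.
# All figures are assumed purely for illustration.
months = 36
cloud_monthly = 4_000   # assumed monthly cloud inference + data egress cost
edge_capex = 60_000     # assumed hardware, installation and integration outlay
edge_monthly = 1_200    # assumed power, maintenance, monitoring and update costs

cloud_total = cloud_monthly * months             # cumulative cloud spend
edge_total = edge_capex + edge_monthly * months  # capex plus cumulative running costs

# Months before cumulative edge spend undercuts cumulative cloud spend
breakeven_months = edge_capex / (cloud_monthly - edge_monthly)
print(f"break-even after ~{breakeven_months:.0f} months")
```

On these assumed numbers the edge deployment only pays off after roughly 21 months, which is exactly why a long-term view is needed before the strategic benefits show up on the balance sheet.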

Another often-overestimated notion is that edge AI can deliver anything like “super-intelligence” by running the kind of huge, complicated models that demand datacentre graphics processing units. Given current computing and power restrictions at the edge, this scenario is highly unlikely at the moment.

Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations. 

How edge AI is changing security, governance and ownership

As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.

Rising edge AI usage could heighten security concerns as well, as it widens organisational attack surfaces across multiple distributed devices and infrastructure. Each of these then needs to be protected, monitored and updated consistently, following a set of standard guidelines, despite presenting its own unique limitations.


“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.

Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance. 

With edge AI being all about local deployments, more questions around version control, oversight and audit issues can arise. This means that companies may need to maintain more in-depth and regular records about data inputs, decision-making processes and operational factors. Highly regulated industries may especially demand evidence trails and seek greater accountability, which can impact company reputations and licences. 

“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds. 

As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI. 

Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low. 

These factors have led to edge AI becoming a structural change just as much as a technical one, impacting how and where decisions are taken, how risk is evaluated and overall accountability.

What leaders should consider before implementing edge AI 

Given the considerable initial investment required by most edge AI models, leaders should prioritise long-term strategic impact over the hype of the latest technology. This means that when evaluating company readiness, the potential scope of the intended edge AI deployment matters just as much as timing.

The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner. 

“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.

The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is either enough for the short-term, does not deliver the expected results, or highlights many hidden costs and operational issues. 

In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.

However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.

“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”

In such cases, exercising strategic restraint is far more instrumental to long-term value. 

From tech choice to organisational shift

Ultimately, implementing edge AI models should be primarily focused on delivering long-term, strategic value, rather than a trend-based decision. This is especially true if latency and real-time data analysis pose real risks. Businesses need to consider that edge AI use is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.

“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes. 

Enterprises that treat edge AI as a full operational shift, rather than an independent feature tacked onto legacy systems, will be better placed to take advantage of it in the long run.


