Met claims success for permanent facial recognition in Croydon | Computer Weekly
The Met Police has announced that its deployment of permanent live facial recognition (LFR) cameras in Croydon has led to 103 arrests, with the force claiming it has reduced crime in the local area by 12%.
Beginning in October 2025, the Met fixed 15 LFR-enabled cameras to street furniture in Croydon, claiming they would only be activated when officers are present and conducting an operation in the area.
The Met’s announcement comes just a week ahead of a judicial review against its use of LFR, which will assess whether it has been using the technology lawfully. The legal challenge was launched by anti-knife campaigner Shaun Thompson after he was wrongly identified as a suspect by the force’s LFR system, alongside privacy campaigners at Big Brother Watch.
While LFR is typically deployed by the force in an overt manner, with specially equipped cameras mounted atop a visibly marked police van to scan and compare people’s unique facial features against watchlists in real time, this marks the Met’s first covert deployment of the cameras that can be monitored by officers remotely.
In a press release, the Met claimed that running deployments without a van has increased the efficiency of its LFR operations, with an arrest being made on average every 34 minutes when in use, while also reducing the average time to locate wanted individuals by more than 50% when compared with van-based deployments.
Of those arrested, it added, a third were for offences related to violence against women and girls, such as strangulation and sexual assault, with other arrests made over recall to prison, burglary and possession of an offensive weapon.
“The increase in LFR deployments across crime hotspots in London is driven by its proven impact and success – with more than 1,700 dangerous offenders taken off London’s streets since the start of 2024, including those wanted for rape and child abuse,” said Lindsey Chiswick, the Met and national lead for LFR.
“This is why we are trialling a new and innovative pilot in Croydon,” she said. “It allows us to explore a different way of using facial recognition by operating it remotely and more efficiently. The number of arrests we have made in just 13 deployments shows the technology is already making an impact and helping to make Croydon safer. Public support remains strong, with 85% of Londoners backing the use of LFR to keep them safe.”
The Met added that its pilot deployment of permanent LFR cameras will undergo an evaluation in the coming months to assess its effectiveness, but that there are currently no plans to expand its permanent deployment to other sites in London.
It also said the Met will continue to run engagement sessions with Croydon residents and councillors to explain how LFR works, outline the intelligence-led approach behind deployments, and set out the safeguards in place to protect privacy and rights.
However, in April 2025, in the wake of the Met’s initial announcement, local councillors complained that the decision to set up permanent facial recognition cameras had been taken without any community engagement between the force and local residents.
While the Met has further claimed that Croydon was selected for the permanent LFR deployment due to “its status as a crime hotspot”, local councillors also highlighted a pattern of racial bias in its choice of deployment locations.
“The Met’s decision to roll out facial recognition in areas of London with higher Black populations reinforces the troubling assumption that certain communities … are more likely to be criminals,” said Green Party London Assembly member Zoë Garbett at the time, adding that while nearly two million people in total had their faces scanned across the Met’s 2024 deployments, only 804 arrests were made – a rate of just 0.04%.
The Met Police’s roll-out of LFR in other boroughs has similarly taken place with little to no community engagement, and in some areas has occurred despite notable political opposition from local authorities.
Executive mayor of Croydon Jason Perry said in the Met’s press release, however, that the arrest figures show “that this pioneering technology is helping to make our streets safer”.
Broken windows in the panopticon
Perry added: “I look forward to continuing to work with the Met Police to tackle crime, as part of our zero-tolerance approach to fixing the ‘broken windows’, restoring pride in our borough and making Croydon a safer place for all our residents.”
Under the “broken windows” theory of policing, first posited by US criminologists James Wilson and George Kelling in the early 1980s, leaving even minor disorder unchecked (such as graffiti, antisocial behaviour or vandalism) encourages people to engage in more serious crimes.
While advocates of this approach therefore argue for the proactive, zero-tolerance policing of minor infractions as a way of instilling order and deterring more serious criminal conduct, critics argue it encourages aggressive or confrontational policing practices that disproportionately target poor and minoritised communities, ultimately breeding resentment against authorities.
In a recent interview with former prime minister Tony Blair, current UK home secretary Shabana Mahmood described her ambition to use technologies like artificial intelligence (AI) and LFR to achieve Jeremy Bentham’s vision of a “panopticon”, referring to his proposed prison design that would allow a single, unseen guard to silently observe every prisoner at once.
Typically used today as a metaphor for authoritarian control, the underpinning idea of the panopticon is that, by instilling a perpetual sense of being watched among the inmates, they would behave as authorities wanted.
“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his panopticon,” Mahmood told Blair. “That is that the eyes of the state can be on you at all times.”
LFR consultation on legal framework
In December 2025, the Home Office launched a 10-week consultation on the use of LFR by UK police, allowing interested parties and members of the public to share their views on how the controversial technology should be regulated.
While the use of LFR by police – beginning with the Met’s deployment at Notting Hill Carnival in August 2016 – has ramped up massively in recent years, there has so far been minimal public debate or consultation, with the Home Office claiming for years that there is already a “comprehensive” legal framework in place.
However, the Home Office said in late 2025 that although a “patchwork” legal framework for police facial recognition exists (including for the increasing use of the retrospective and “operator-initiated” versions of the technology), it does not give police themselves the confidence to “use it at significantly greater scale … nor does it consistently give the public the confidence that it will be used responsibly”.
It added that the current rules governing police LFR use are “complicated and difficult to understand”, and that an ordinary member of the public would be required to read four pieces of legislation, police national guidance documents and a range of detailed legal or data protection documents from individual forces to fully understand the basis for LFR use on their high streets.
There have been repeated calls from both Parliament and civil society over many years for the police’s use of facial recognition to be regulated.
These include three separate inquiries by the Justice and Home Affairs Committee into shoplifting, police algorithms and police facial recognition; interventions from two of the UK’s former biometrics commissioners, Paul Wiles and Fraser Sampson; an independent legal review by Matthew Ryder QC; the UK’s Equalities and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on live facial recognition as far back as July 2019.
More recently, the Ada Lovelace Institute published a report in May 2025 that said the UK’s patchwork approach to regulating biometric surveillance technologies is “inadequate”, placing fundamental rights at risk and ultimately undermining public trust.
In August 2025, after being granted permission to intervene in the judicial review of the Met’s LFR use, the UK’s equality watchdog said the force is using the technology unlawfully, citing the need for its deployments to be necessary, proportionate and respectful of human rights.
Artemis II Mission Launches Successfully
At 6:36 pm Cape Canaveral time, NASA’s SLS rocket lifted off without incident with the four members of the Artemis II crew aboard. During the first few hours, Orion will complete its journey into Earth orbit, and over the first days of the flight it will conduct critical navigation and systems tests. The spacecraft will then begin its trajectory toward the moon, crossing into the moon’s gravitational sphere of influence around the fifth or sixth day. In total, the mission will last approximately 10 days.
The mission includes the first woman and the first Black person to fly on a crewed mission to the moon. The launch comes 53 years after Apollo 17, the last crewed lunar mission.
The Artemis II crew will not land on the moon (that will come with Artemis III). Instead, their capsule will fly at altitudes between 6,000 and 9,000 kilometers above the surface of the far side of the moon, loop around it, and begin the return journey to Earth. The mission’s main objective is to demonstrate that the space agency has the technological capability to send people to the moon and back safely and without incident.
Once they achieve this, NASA will begin preparations for new moon landings in the following years, which will aim to establish the first lunar bases in history and, with them, a sustained and sustainable human presence on the moon.
The launch occurred on schedule: the window opened on Wednesday, April 1, at 6:24 pm Eastern Time (EDT) and could have been extended by two hours if necessary. Had the window been missed, NASA would have had five more days to attempt another launch.
Mission Details
The astronauts took off on NASA’s SLS rocket and are traveling inside the Orion capsule, a spacecraft described as about the size of a large van. They will orbit Earth for at least two days to test the onboard instruments, then align the spacecraft to begin its journey to the moon. By the fifth or sixth day of flight, the capsule is expected to enter the moon’s sphere of influence, the region where the moon’s gravity is stronger than Earth’s, and settle into its planned trajectory around it.
When the spacecraft passes “behind” the moon, the most dangerous phase will begin: the crew will be out of contact with Earth for about 50 minutes, as the moon itself blocks radio signals. During this crucial stretch, the crew must capture images and data from the moon, taking advantage of technology far more advanced than anything available during the Apollo era.
After completing the lunar pass, the capsule will head home, taking advantage of the Earth-moon gravity field to save fuel. According to NASA estimates, the crew will be approaching Earth by the 10th day of flight.
Arm works with IBM to deliver flexibility on mainframe | Computer Weekly
IBM has begun working with chipmaker Arm to develop what it calls dual-architecture hardware to provide flexibility when running enterprise artificial intelligence (AI) and data-intensive workloads.
Their overall goal is to combine IBM’s experience in systems reliability, security and scalability that it offers on Z-series mainframe systems with Arm’s expertise in power-efficient architectures and supporting a broad software ecosystem to build flexible and scalable computing platforms for the future.
Arm has been on a path to deliver an alternative to x86-powered servers in the datacentre. The company has introduced the Arm Agentic AI central processing unit (CPU), which it positions as a processor tasked with keeping distributed AI systems operating efficiently at scale. This includes orchestrating AI accelerators, managing memory and storage, scheduling workloads and moving data across systems.
This latest collaboration appears to be focused on delivering enterprise reliability to the Arm platform. It builds on IBM’s heritage of offering coprocessors for Z-series hardware, such as the Integrated Facility for Linux, introduced in 2000. The mainframe manufacturer later introduced a Linux system based on the Z-series architecture, called LinuxONE, designed to let enterprise customers run Linux workloads in situ with data that resides on the mainframe system.
Christian Jacobi, IBM fellow and chief technology officer of IBM systems development, said: “This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage.”
Mohamed Awad, executive vice-president of the cloud AI business unit at Arm, said: “Our collaboration with IBM builds on this progress, extending the Arm ecosystem into mission-critical enterprise environments and giving organisations greater flexibility in how they deploy and scale these workloads.”
The two companies said they are exploring how to expand virtualisation technologies that allow Arm-based software environments to operate within IBM’s enterprise computing platforms. According to IBM and Arm, this work is designed to expand software compatibility and streamline how developers and enterprises bring Arm applications into mission-critical environments.
On the security and reliability front, the pair plan to investigate new ways to support the performance and efficiency demands of modern workloads, including AI and data-intensive applications. IBM and Arm said they will be looking at how to enable enterprise systems to recognise and execute Arm applications.
The two companies also hope to provide a broader software ecosystem and greater flexibility in how applications are deployed and managed. IBM plans to offer new systems for its customers that incorporate Arm’s technology.
Tina Tarquinio, chief product officer of IBM Z and LinuxONE, said: “Our aim is to expand software choice and improve system performance while maintaining the reliability and security our clients expect.”
The collaboration is seen as a signal of how enterprises may eventually deploy scalable, flexible IT infrastructure to support different types of application workload.
Patrick Moorhead, founder, CEO and chief analyst at Moor Insights & Strategy, added: “What IBM and Arm are signaling here is a meaningful step toward that future that could broaden how enterprises think about deploying and scaling modern workloads. While the full implications will take time to unfold, it’s clear this reflects a deeper level of investment in long-term platform innovation and ecosystem expansion than we typically see at this stage.”
California Suspends Enforcement of Law Requiring VCs to Report Diversity Data
Under a new state regulation, venture capital firms operating in California were supposed to submit demographic data about their portfolio companies, including the gender and race of startup founders they backed. But amid public criticism from some tech leaders, the California agency administering the new requirement suspended it just before the Wednesday deadline for firms to make their first disclosures.
“The California Department of Financial Protection and Innovation (DFPI) has announced that it plans to initiate rulemaking in response to comments by various stakeholders relating to the Fair Investment Practices by Venture Capital Companies Law,” the state agency posted on its website in mid-March. “Implementation and enforcement of the [law] will be suspended pending completion of the rulemaking and until final regulations are in place.”
California lawmakers first passed the measure in 2023, and it was signed into law shortly thereafter by Governor Gavin Newsom. For decades, women and people of color have received only a small share of overall startup funding relative to their representation in the US population. Lawmakers hoped putting more public scrutiny on investment decisions would help foster greater equity in the market, including for people who are disabled, retired military, or LGBTQ+.
The law called for venture capital and some other investment firms to file annual reports starting March 1 of last year about the overall makeup of the founding teams they had invested in and the amount of money they provided to diverse founders. Firms were meant to collect the demographic data through a voluntary survey that was then anonymized. California authorities planned to publish the filings online. Lawmakers amended the law in 2024 to delay reporting until April 1, 2026, and enable the state to levy daily fines for noncompliance.
The California Department of Financial Protection and Innovation did not immediately respond to a request for comment on the authority it used to sidestep the deadline set by lawmakers. Newsom’s office also didn’t immediately respond to a request for comment.
Financiers focused on funding entrepreneurs from underrepresented backgrounds had supported the law. But the National Venture Capital Association, the tech investment industry’s leading trade group, opposed it. The group argued that voluntary data collection would inflate diversity statistics and that publishing inaccurate data could lead to unfair attacks on investors genuinely trying to tackle diversity issues. Over the past year, the Trump administration has defunded and attacked diversity, equity, and inclusion, or DEI, initiatives in both the public and private sectors, leading many businesses and organizations to pull back from them.
In February, the venture capital association wrote to Newsom asking for the reporting deadline to be pushed back again because, in its view, the state had bungled the process. California authorities didn’t publish the standardized survey that founders were supposed to fill out until early this year, and at the time they still hadn’t introduced a way for firms to register with regulators as required by the law, according to the association. “This administrative timeline creates an environment ripe for error and threatens to produce the misleading and counterproductive data we previously warned against,” association president and CEO Bobby Franklin wrote.
Last month, as the deadline for the first reports loomed, some entrepreneurs and investors began complaining on social media about the survey effort. “The latest California malarky is a requirement for venture investors to collect/report racial and gender statistics,” wrote Blake Scholl, the founder and CEO of venture-backed aviation startup Boom Supersonic. “I want to live in a world where merit matters—not skin color or what you have between your legs.”
