Ethical hackers can be heroes: It’s time for the law to catch up
The last year has seen some of the costliest cyber attacks on UK businesses to date. Attacks on Marks & Spencer cost the retailer hundreds of millions in lost profits and led to empty shelves. The Jaguar Land Rover attack sent shockwaves throughout its supply chain, which ultimately dragged down UK GDP in the third quarter.
While the perpetrators of cyber crime often operate across international borders, and beyond the reach of law enforcement, the M&S attack has resulted in several arrests in the UK, under the Computer Misuse Act (CMA) of 1990. With a new Cyber Security and Resilience Act on the way, it might seem that UK authorities will soon have greater powers to force organisations to build better defences.
But while the UK government continues to pursue cyber criminals, it also needs to be much clearer about the crucial role of cyber security researchers and ethical hackers in defending against them.
Last week, UK security minister Dan Jarvis told a conference that the government was looking at changes to the CMA to introduce a “statutory defence” for cyber security experts who spot and share vulnerabilities.
It would mean that, as long as they meet “certain safeguards”, researchers would be protected from prosecution.
To understand why this is so significant it’s worth recalling the background to the CMA. In the mid-1980s, IT journalist Steve Gold and fellow hacker Robert Schifreen were accused of accessing the Duke of Edinburgh’s BT Prestel email account.
They were prosecuted and convicted under the Forgery and Counterfeiting Act, but this was overturned on appeal, because that act didn’t specifically cover computer crimes.
This led to the CMA, which set prison sentences for gaining unauthorised access to computer material.
The date is significant. At that time, most computer systems were tightly controlled and effectively inaccessible to the majority of the population.
Very few people had a (BT-approved) modem. The web had been developed just a year before. The dot com boom was years in the future, the term cyber war had yet to be coined, and the prospect of industrial-level cyber crime was barely considered.
The legislators who crafted the CMA can be forgiven for not anticipating the transformation into today’s digital environment, from mobile to cloud to AI. So, it’s perhaps understandable that the act didn’t anticipate the emergence of cyber security researchers, who would look for vulnerabilities and misconfigurations and share that information with the organisations concerned.
Less understandable is why this hasn’t been addressed since. As cyber crime transformed from a small niche into a worldwide epidemic over the last two decades, white hat hackers have been key to exposing and mitigating the methods and technologies cyber criminals have exploited. This has necessarily meant thinking and acting like a hacker.
Yet the CMA, and similar legislation in other countries, have proven to be blunt instruments when it comes to deterring cyber crime.
It’s fair to point out that the number of prosecutions under the CMA and similar laws has been fairly low. But that is more because of the asymmetric nature of cyber crime: most threats come from individuals beyond the reach of the UK and its allies, who are unlikely to be deterred by the CMA.
This imbalance has only become more stark as vulnerabilities and flaws have been exploited indiscriminately and at internet scale not just by criminals but by nation states willing to compromise critical national infrastructure, foreign businesses and consumers for strategic gains.
It has left researchers, and their potential clients, in a legal grey area. It has, on occasion, led to prosecutions of legitimate, good-faith researchers.
Meanwhile, that ongoing threat of prosecution has an effect on another group of individuals – the next generation we need to encourage to join the industry. We are already suffering a chronic skills crisis, and the prospect of a criminal record hardly represents a golden hello.
None of this is new. The Criminal Law Reform Network highlighted in 2020 how “the CMA 1990 requires significant reform to make it fit for the 21st century”, and recommended the addition of required harms. The Home Office began a review of the act in 2021, which concluded in 2023, and did consider the question of a defence for researchers.
When the Cyber Security and Resilience Act becomes law in the UK, many more organisations will be obliged to report breaches, and will be under more pressure to manage their security posture, including vulnerabilities.
They’re not going to be able to do that without the help of ethical hackers and cyber security researchers, who should be able to operate without fear of prosecution. It’s certainly doable: Portugal has just announced built-in defences for researchers in its implementation of NIS2.
Jarvis’ statement is welcome. But now we need action. We can’t wait another five years for the government to act to give cyber researchers and ethical hackers the cover they need. And we definitely can’t wait another 35.
Ed Parsons is chief operating officer at bug bounty, vulnerability disclosure and penetration testing services provider Intigriti, and a former vice president at cyber professional member association ISC2. A career risk and cyber expert, Parsons is a Certified Information Systems Security Professional (CISSP) and a UK Chartered Cyber Security Professional.
Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems
The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company’s lawsuit against the government will fail.
“The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion,” US Department of Justice attorneys wrote.
The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company’s technologies from being used inside the department. If the designation holds, Anthropic could lose billions of dollars in expected revenue this year.
Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic’s request.
Justice Department attorneys, writing for the Department of Defense and other agencies in the Tuesday filing, described Anthropic’s concerns about potentially losing business as “legally insufficient to constitute irreparable injury” and called on Lin to deny the company a reprieve.
The attorneys also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retained access” to government technology systems. “No one has purported to restrict Anthropic’s expressive activity,” they wrote.
The government argues that Anthropic’s push to limit how the Pentagon can use its AI technology led defense secretary Pete Hegseth to “reasonably” determine that “Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.”
The Department of Defense and Anthropic have been fighting over potential restrictions on the company’s Claude AI models. Anthropic believes its models shouldn’t be used to facilitate broad surveillance of Americans and are not currently reliable enough to power fully autonomous weapons.
Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. But courts often favor national security arguments from the government, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted.
“In particular, DoW became concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” Tuesday’s filing states. “AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic—in its discretion—feels that its corporate ‘red lines’ are being crossed.”
The Defense Department and other federal agencies are working to replace Anthropic’s AI tools with products from competing tech companies in the next few months. One of the military’s top uses of Claude is through Palantir data analysis software, people familiar with the matter have told WIRED.
In Tuesday’s filing, the lawyers argued that the Pentagon “cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use” on the department’s classified systems and while high-intensity combat operations are underway. The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.
A number of companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders, have filed court briefs in support of Anthropic. None have been filed in support of the government.
Anthropic has until Friday to file a response countering the government’s arguments.
Meta Is Shutting Down Horizon Worlds on Meta Quest
Pour one out from your digital bottle, because Meta is shutting down the virtual reality experience of Horizon Worlds.
Meta sent an email blast to Horizon Worlds users today stating that the social VR world will officially end on its Quest VR headsets; starting March 31, Horizon Worlds will no longer be in the Quest store. Some Horizon-specific perks, including Meta Credits, avatars, and some digital clothes and in-world purchases, will also be removed. The VR version will shut down entirely on June 15, after which the service will be available only as a mobile platform.
The move comes after Meta made widespread cuts to its Reality Labs division in February, laying off 10 percent of employees in its VR department.
Horizon Worlds was Meta’s grand foray into building out the metaverse, the aspiration of a fully virtual environment inspired by Neal Stephenson’s Snow Crash. The company believed in the effort so much that it changed its name from Facebook to Meta in support of its VR endeavors.
Horizon Worlds is one of the less popular VR services out there, if the borderline glee you can find in the comments of the r/oculus subreddit thread about the service ending is anything to go by. It has been widely mocked since it was first announced, not least because of a rocky start. Player avatars didn’t have legs and looked like such dead-eyed monsters that Meta CEO Mark Zuckerberg’s uncanny avatar became a meme.
Almost immediately, Horizon Worlds was populated primarily by children. But screeching kiddos throwing digital doughnuts around are not the most stable or profitable user base. Meta pumped billions of dollars into the service, arranging high-profile partnerships with brands and artists to host virtual concerts by Imagine Dragons and Coldplay. Even with all that pomp, Meta’s proprietary-verse has always been less popular than VRChat, the social service that people actually seem to like enough to attend virtual raves and presidential elections.
As Meta shifts its focus to artificial intelligence and its Ray-Ban smart glasses, it has drastically cut its investments in its metaverse divisions, including stopping updates to very popular services like Supernatural Fitness.
“Meta’s pivot on Horizon Worlds is the predicted and inevitable outcome of a big, risky bet that never found an audience,” wrote Mike Proulx, vice president and research director at market research firm Forrester, in an email to WIRED. “Meta was trying to solve for a consumer problem that doesn’t exist. You can’t build a mass social platform reliant on hardware most people neither own nor want to wear for more than short bursts.”
MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact

The early years of faculty members’ careers are a formative and exciting time in which to establish a firm footing that helps determine the trajectory of researchers’ studies. This includes building a research team, which demands innovative ideas and direction, creative collaborators, and reliable resources.
For a group of MIT faculty working with and on artificial intelligence, early engagement with the MIT-IBM Watson AI Lab through projects has played an important role in promoting ambitious lines of inquiry and shaping prolific research groups.
Building momentum
“The MIT-IBM Watson AI Lab has been hugely important for my success, especially when I was starting out,” says Jacob Andreas — associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab — who studies natural language processing (NLP). Shortly after joining MIT, Andreas jump-started his first major project through the MIT-IBM Watson AI Lab, working on language representation and structured data augmentation methods for low-resource languages. “It really was the thing that let me launch my lab and start recruiting students.”
Andreas notes that this occurred during a “pivotal moment” when the field of NLP was undergoing significant shifts to understand language models — a task that required significantly more compute, which was available through the MIT-IBM Watson AI Lab. “I feel like the kind of the work that we did under that [first] project, and in collaboration with all of our people on the IBM side, was pretty helpful in figuring out just how to navigate that transition.” Further, the Andreas group was able to pursue multi-year projects on pre-training, reinforcement learning, and calibration for trustworthy responses, thanks to the computing resources and expertise within the MIT-IBM community.
For several other faculty members, timely participation with the MIT-IBM Watson AI Lab proved to be highly advantageous as well. “Having both intellectual support and also being able to leverage some of the computational resources that are within MIT-IBM, that’s been completely transformative and incredibly important for my research program,” says Yoon Kim — associate professor in EECS, CSAIL, and a researcher with the MIT-IBM Watson AI Lab — who has also seen his research field alter trajectory. Before joining MIT, Kim met his future collaborators during an MIT-IBM postdoctoral position, where he pursued neuro-symbolic model development; now, Kim’s team develops methods to improve large language model (LLM) capabilities and efficiency.
One factor he points to that led to his group’s success is a seamless research process with intellectual partners. This has allowed his MIT-IBM team to apply for a project, experiment at scale, identify bottlenecks, validate techniques, and adapt as necessary to develop cutting-edge methods for potential inclusion in real-world applications. “This is an impetus for new ideas, and that’s, I think, what’s unique about this relationship,” says Kim.
Merging expertise
The nature of the MIT-IBM Watson AI Lab is that it not only brings together researchers in the AI realm to accelerate research, but also blends work across disciplines. Lab researcher and MIT associate professor in EECS and CSAIL Justin Solomon describes his research group as growing up with the lab, and the collaboration as being “crucial … from its beginning until now.” Solomon’s research team focuses on theoretically oriented, geometric problems as they pertain to computer graphics, vision, and machine learning.
Solomon credits the MIT-IBM collaboration with expanding his skill set as well as applications of his group’s work — a sentiment that’s also shared by lab researchers Chuchu Fan, an associate professor of aeronautics and astronautics and a member of the Laboratory for Information and Decision Systems, and Faez Ahmed, associate professor of mechanical engineering. “They [IBM] are able to translate some of these really messy problems from engineering into the sort of mathematical assets that our team can work on, and close the loop,” says Solomon. This, for Solomon, includes fusing distinct AI models that were trained on different datasets for separate tasks. “I think these are all really exciting spaces,” he says.
“I think these early-career projects [with the MIT-IBM Watson AI Lab] largely shaped my own research agenda,” says Fan, whose research intersects robotics, control theory, and safety-critical systems. Like Kim, Solomon, and Andreas, Fan and Ahmed began projects through the collaboration in their first eligible year at MIT. Constraints and optimization govern the problems that Fan and Ahmed address, and so require deep domain knowledge outside of AI.
Working with the MIT-IBM Watson AI Lab enabled Fan’s group to combine formal methods with natural language processing, which she says, allowed the team to go from developing autoregressive task and motion planning for robots to creating LLM-based agents for travel planning, decision-making, and verification. “That work was the first exploration of using an LLM to translate any free-form natural language into some specification that robot can understand, can execute. That’s something that I’m very proud of, and very difficult at the time,” says Fan. Further, through joint investigation, her team has been able to improve LLM reasoning — work that “would be impossible without the IBM support,” she says.
Through the lab, Faez Ahmed’s collaboration facilitated the development of machine-learning methods to accelerate discovery and design within complex mechanical systems. His group’s Linkages work, for instance, employs “generative optimization” to solve engineering problems in a way that is both data-driven and precise; more recently, they’re applying multi-modal data and LLMs to computer-aided design. Ahmed states that AI is frequently applied to problems that are already solvable, but could benefit from increased speed or efficiency; however, challenges, like mechanical linkages that were deemed “almost unsolvable”, are now within reach. “I do think that is definitely the hallmark [of our MIT-IBM team],” says Ahmed, praising the achievements of his MIT-IBM group, which is co-led by Akash Srivastava and Dan Gutfreund of IBM.
What began as initial collaborations for each MIT faculty member has evolved into a lasting intellectual relationship, where both parties are “excited about the science,” and “student-driven,” Ahmed adds. Taken together, the experiences of Jacob Andreas, Yoon Kim, Justin Solomon, Chuchu Fan, and Faez Ahmed speak to the impact that a durable, hands-on, academia-industry relationship can have on establishing research groups and ambitious scientific exploration.