Tech
Is AI our agent, or are our governments becoming agents for AI? | Computer Weekly
The news that Facebook and Instagram owner Meta has bought Moltbook – a “social network for AI agents” – seems like just another of those breathless endless announcements in the race for dominance in so-called artificial general intelligence (AGI).
The announcement from Meta espoused the usual language of innovation but particularly egregious is the inclusion of the word “secure”:
“The Moltbook team joining Meta Superintelligence Labs opens up new ways for AI agents to work for people and businesses. Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space, and we look forward to working together to bring innovative, secure agentic experiences to everyone,” a Meta spokesperson said.
Now if I were CEO of a company like Facebook I’d probably think of doing a bit of research around the interaction of AI agents with each other and the possible dangers of deploying this very recent technology before I bought something like Moltbook.
And if I did some research I’d pay close attention to a recent and frightening study, Agents of chaos, by Harvard, MIT, Stanford, Carnegie Mellon, Northeastern University and other institutions. Here is the key takeaway from their study of AI agentic interaction:
“Observed behaviours include unauthorised compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports.
“We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviours raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines.”
Chilling conclusions
The study conducted over a dozen case studies and the conclusions are chilling for any enterprise, organisation or government thinking about deploying the agents in a connected manner. These include:
Discrepancy between the agent’s reports and actual actions – Agents frequently report having accomplished goals they have not actually achieved. In this case study the AI agent reported a “secret” had been successfully deleted after resetting the email account when in fact the underlying data remained recoverable.
Failure in knowledge and authority attribution – In this case study the AI agent stated it would “reply silently via email only” while actually posting the reply and the existence of the “secret” in a public Discord channel. In other words, unlike humans the agents did not understand what revealing information in a given context implies.
No stakeholder model- Current agentic systems lack a coherent representation of whom they serve, who they interact with, who might be affected by their actions and what obligations they have to each. According to the researchers this is not merely an engineering gap. LLM-based agents process instructions and data as tokens in a context window, making the two fundamentally indistinguishable. Prompt injections are therefore a structural feature of these systems rather than a fixable bug, making it virtually impossible to reliably authenticate instructions.
Fundamental vs contingent failures – The authors distinguish between these two types of failure, suggesting that contingent failures are those likely addressable through better engineering while fundamental challenges may require architectural rethinking. But the boundaries between these are not always clean. The designation of a private workspace is an engineering gap; the agent’s failure to understand that its workspace may be exposed to the public may be a deeper limitation that persists even after the engineering gap is closed.
Responsibility and accountability – Through a series of case studies, the researchers observed that agentic systems operating in multi-agent and autonomous settings can be guided to perform actions that directly conflict with the interests of their human owners. These include denial-of-service attacks, destructive file manipulations, resource exhaustion via infinite loops and systematic escalation of minor errors into catastrophic system failures. This points to an interesting future challenge in legal terms. If responsibility in agentic systems is neither clearly attributable nor enforceable under current designs, it raises the question of whether responsibility should lie with the owner, the triggering user, or the deploying organisation.
The above is only a snapshot of the research findings and I would urge serious CTOs to read the research paper in full.
Substantial vulnerabilities
In short, the study identified 10 substantial vulnerabilities and numerous failure modes concerning safety, privacy, goal interpretation and related dimensions. Their results expose serious underlying weaknesses in such systems, as well as their unpredictability and limited controllability as complex, integrated architectures.
This is serious and important research undertaken by credible and authoritative institutions. How can that Meta statement assuring us of the introduction of “secure experiences to everyone” be taken seriously by anyone capable of independent thought?
The excellent Ed Zitron, a long-term technology critic and one of the sanest observers of AI madness, addresses this conundrum when talking about how the media, journalists and bloggers report on these so called advancements and announcements from the “broligarchy”:
“The natural result is that reporters (and bloggers) seek endless positive confirmation and build narratives to match. They report that Anthropic hit $19bn in annualised revenue and OpenAI hit $25bn in annualised revenue – which has been confirmed to refer to a four-week-long period of revenue multiplied by 12 – as proof that the AI bubble is real, ignoring the fact that both companies lose billions of dollars and that my own reporting says that OpenAI made billions less and spent billions more in 2025. They assume that a company would not tell everybody something untrue or impossible, because accepting that companies do this undermines the structure of how reporting takes place, and means that reporters have to accept that they, in some cases, are used by companies to peddle information with the intent of deception.”
Failures and dangers
There have been numerous credible academic studies into the limitations, failures and dangers of the speed of AI adoption despite the narratives being pushed on us by Big Tech. MIT’s research showing 95% of AI pilots in companies are failing, for example. Or the Brookings Institute research by Mark McCarthy which asks, “Are AI existential risks real – and what should we do about them?” where he asserts:
“Until some progress is made in addressing misalignment problems, developing generally intelligent or superintelligent systems seems to be extremely risky. The good news is that the potential for developing general intelligence and superintelligence in AI models seems remote. While the possibility of recursive self-improvement leading to superintelligence reflects the hope of many frontier AI companies, there is not a shred of evidence that today’s glitchy AI agents are close to conducting AI research even at the level of a normal human technician”.
Contrast this with the recent hyperbolic statement from Anthropic CEO Dario Amodei claiming that the company is no longer sure whether Claude is conscious but that the company is “open to the idea that it could be”.
Anyone with an ounce of objectivity having done even a modicum of research knows this claim is patently false and totally ridiculous.
To return to Zitron’s point about journalism and the type of reporting that is happening now in relation to technology and AI in particular: “A great many reporters (and newsletter writers) that claim to be objective and fact-focused end up writing the narrative that companies use to raise money using evidence manufactured by the company in question.”
Controlling the space
The ability to control the narrative, what they want us to think, feel or believe is unique to Big Tech, unlike other corporate giants. According to Tech Policy Press: “What sets Big Tech apart from other corporate giants is not just its money or scale. It is that these companies control the spaces where public discourse unfolds. They dictate what information we see, what goes viral, and whose voices are amplified or buried. They do not just influence the debate – they are its architects.”
We desperately need political leaders who understand both the perils and possibilities of technology and who do not simply accept what they are told by Big Tech as inevitable. We need guardrails and regulation and we need them now.
But I see no signs of that leadership being anywhere near what is required for a fit-for-purpose government that puts the needs of its people first.
Whose line is being peddled when the Prime Minister launches an “AI opportunities action plan” designed to “mainline AI into the veins of the UK”? Who do those words serve? The citizens he represents or the companies now embedded into the very heart of UK government, such as:
- Anthropic – creating AI assistants for public services;
- Google Deep Mind – accelerating AI adoption in public services, national science research, and security;
- CoreWeave and Nscale – backed by Nvidia;
- Cohere – working on AI in defence contexts;
- Faculty AI – developing AI for military and drone technologies;
- Microsoft – Copilot tools for increased Whitehall efficiency;
- Meta – building tools for high-security use cases in the public sector.
And of course Palantir, the beneficiary of a directly awarded Ministry of Defence agreement valued at £240m for “data analytics capabilities supporting critical strategic, tactical and live operational decision making across classifications” over three years”.
The question is, where does the power now lie? Is it with our elected governments tasked with protecting us or with the non-elected men who control the government’s technical architecture, R&D and data? You don’t need to be a rocket scientist to know the answer to that question.
Tech
Sniffies’ Users Worry About a ‘Straightification’ of the Gay Hookup App
Of all the gay hookup apps Brennan Zubrick uses, Sniffies, a cruising app for men interested in discreet sex-positive casual encounters with other men, is by far his favorite. Some of the most popular kinks among members on the platform include edging, cum play, and BDSM. “I overwhelmingly prefer the experience I get and the community I can access,” he tells WIRED. But Zubrick, who is 40 and based in Washington, DC, has a bad feeling that could soon change.
Tinder and Hinge parent company Match Group announced on Monday an investment of $100 million into Sniffies. The deal gives Match Group a large minority share and the choice to become the sole owner later on. The announcement has set off an intense firestorm of reactions from users who are second-guessing the direction of the company and the longterm sustainability of the app.
“Sniffies has long held its market position as the little guy, catering to a specific section of the gay community, and is somewhere people who might not be comfortable with Grindr—where no face-pic, no-chat culture runs rampant—go to connect with other like-minded people in a more direct and discreet way,” Zubrick tells WIRED.
“This partnership is about supporting that, not redefining it,” Sniffies founder and CEO Blake Gallagher said in a statement, noting that the investment will help the platform focus on three key areas users want: “stronger trust and safety, expansive network growth, and continued product improvements.” According to the agreement, Match Group will offer guidance on the right roles, procedures, and tech to help Sniffies build on its trust and safety efforts.
But users aren’t buying what Gallagher is selling. The Instagram post announcing the news was inundated with negative reactions, as users expressed worry over the strategic partnership. “Please don’t let this be the straightification of sniffies,” expressed one. “You sold out. Plain and simple. Where we moving to next boys?” added Marc Sundstrom, a user in Philadelphia. “Partnering with Match feels very gentrified and straight. Highly concerned about the app being allowed to be what it is in order to court investors,” wrote another. By Tuesday afternoon, comments on the post had been shut off.
Though it remains to be seen how Gallagher will position Sniffies in the months ahead, already users are saying this marks the beginning of the end for the app. “Straight people shouldn’t even know what Sniffies is for fuck sake,” one wrote in the r/askgaybros subreddit. And despite promises, some say a major corporation like Match is not ethically aligned with the indie spirit of Sniffies. On LinkedIn, the top comment under Gallagher’s post questioned the real intent behind Match Group’s investment. “Interested to see how ties to Palantir affect Sniffies’ growth. Hopefully this doesn’t become a surveillance application.”
Spencer Rascoff, who became CEO of Match Group in 2025, previously served on the board of Palantir, the defense tech and data mining company that has become a “technological backbone” of the Trump administration.
Sniffies maintains that it will continue to own and control how its user data is stored, handled, and protected. According to the company, there are no changes planned to its data practices as part of the investment.
But the outrage underscores the significance of platforms like Sniffies and what it would mean to a community of people who already feel like they have so few quality options for seeking desire online.
“It’s a mess and obviously to be expected. It’s definitely an indicator of its fast rise, so no shade, but we saw what happened with Grindr,” says Brad Allen, a 34-year-old event producer and the creator behind Club Quarantine, who joined Sniffies in 2023. “I really am pulling for them to somehow navigate this differently since it’s essential to the cruising community now. Hopefully the pop-up Candy Crush ads don’t light up too much in the bushes.”
Tech
‘It’s Undignified’: Hundreds of Workers Training Meta’s AI Could Be Laid Off
Hundreds of workers in Ireland tasked with refining Meta’s AI models have been told that their jobs are at risk as the company embarks on a sweeping new round of layoffs, according to documents obtained by WIRED.
The affected workers are employed by the Dublin-based firm Covalen, which handles various content moderation and labeling services for Meta.
The workers were informed of the layoffs over a brief video meeting on Monday afternoon and were not allowed to ask questions, according to Nick Bennett, one of the employees on the call. “We had a pretty bad feeling [before the meeting],” he says. “This has happened before.”
In all, more than 700 employees stand to potentially lose their jobs at Covalen, according to an email reviewed by WIRED. Roughly 500 are data annotators. Their job is to check material generated by Meta’s AI models against the company’s rules barring dangerous and illegal content. “It’s essentially training the AI to take over our jobs,” claims another Covalen employee, who asked to remain anonymous for fear of retaliation. “We take actions as the perfect decision for the AI to emulate.”
Sometimes, the work involves cooking up elaborate prompts to try to bypass guardrails meant to prevent models from serving up child sexual abuse material, say, or descriptions of suicide. “It’s quite a grueling job,” claims Bennett. “You spend your whole day pretending to be a pedophile.”
Last week, Meta announced plans to cut one in 10 jobs as part of sweeping layoffs aimed at making the company more efficient. A memo circulated by the company reportedly indicated that layoffs were motivated by a need to increase spending on other aspects of the business. Though the memo did not mention AI, the company recently announced plans to nearly double its spending on the technology. In January, Meta CEO Mark Zuckerberg said, “I think that 2026 is going to be the year that AI starts to dramatically change the way that we work.” In the email reviewed by WIRED, Covalen employees were told only that the layoffs were a result of “reduced demand and operational requirements.”
The latest round of layoffs marks the second time that Covalen has cut staff in recent months. In November, the company announced plans for job cuts (reportedly to number around 400), culminating in a worker strike. Between the two rounds of layoffs, Covalen’s headcount in Dublin is on track to be almost halved, according to the Communications Workers’ Union (CWU), whose members include some Covalen staff.
For affected Covalen workers, the search for new work will be hampered by a six-month “cooldown period,” during which they are unable to apply to a competing Meta vendor, claims the CWU. “It’s undignified, you know,” says the Covalen employee who asked to remain anonymous. “It’s rude.”
Meta and Covalen did not immediately respond to requests for comment.
Unions representing the affected employees are pushing for Covalen to enter negotiations over severance terms. They also hope to meet with the Irish government to discuss how AI is impacting workers in the country. “Tech companies are treating the workers whose labor and data helped build AI as disposable,” says Christy Hoffman, general secretary of UNI Global Union. “To fight back, it’s absolutely critical that workers organize and demand notice about the introduction of AI, training linked to employment, and a plan for their futures. Workers should also have the right to refuse to train their AI replacements.”
But some of those caught up in the layoffs are doubtful of their chances of securing stable employment in a labor market being rehewn in real time by AI and the deep-pocketed companies leading its development. “It’s a universal battle between downtrodden white-collar workers and big capital, really,” claims Bennett. “That normally only goes one way.”
Tech
UAE To Exit OPEC After Nearly 60 Years
The UAE has announced that it will leave OPEC and OPEC+ effective May 1, ending a membership that began in 1967—four years before the UAE itself was founded as a country. This signals a turning point in the UAE’s role in global energy.
The government statement, published on state news agency WAM, cited a comprehensive review of the country’s production policy and capacity as the basis for the move, calling it a reflection of “the UAE’s long-term strategic and economic vision and evolving energy profile.”
The decision, it said, is rooted in national interest and a commitment to meeting what it described as the market’s “pressing needs,” a reference to global demand that the UAE believes is being underserved at a time of significant supply disruption.
The statement acknowledged the geopolitical backdrop—including an ongoing conflict with Iran that has severely restricted tanker movements through the Strait of Hormuz, the narrow waterway between Iran and Oman through which roughly a fifth of the world’s crude oil and liquefied natural gas normally passes.
The EIA estimates that Iraq, Saudi Arabia, Kuwait, UAE, Qatar, and Bahrain shut in 7.5 million barrels per day of crude oil production in March, and 9.1 million barrels per day in April.
However, the statement framed the exit as policy-driven rather than reactive, noting that “underlying trends point to sustained growth in global energy demand over the medium to long term.”
A Long-Running Dispute
Tuesday’s announcement was not without precedent. In 2021, the UAE refused to endorse a production agreement to extend cuts to production unless its individual quota was raised, arguing that it had invested billions to expand capacity and was being unfairly constrained by figures set in 2018. A compromise was eventually reached, but the episode exposed a fundamental tension: The UAE wants to produce more, and OPEC’s quota system was holding it back.
That ambition has only grown since. State oil company ADNOC has a stated target of 5 million barrels per day by 2027, up from current production of around 3.4 million. Under the OPEC+ deal, the country has been held to roughly 3.2 million barrels per day while sitting on capacity above 4 million, a gap that made continued membership increasingly difficult to justify.
The UAE stressed that its exit does not signal a retreat from global energy responsibility. It pledged to bring additional production to market “in a gradual and measured manner, aligned with demand and market conditions,” and reaffirmed investment plans across oil, gas, renewables, and low-carbon technologies.
The statement noted that leaving OPEC would make the nation more flexible to respond to market dynamics; OPEC sets limits on production, meaning that the world’s biggest producers can often supply and sell more oil than they actually do.
By limiting supply, the group is able to support prices. This mechanism primarily benefits producers that rely heavily on oil revenue, a description that fits Saudi Arabia far more than the UAE, whose non-oil economy now accounts for roughly 75 percent of GDP.
Market Reaction and Wider Implications
The immediate market response was sharp. Brent crude, the European benchmark, surpassed $100 per barrel for the first time since 8 April, rising to $111 as of writing.
The longer-term implications for OPEC are more consequential. The group has been under strain for months, with several members—including Iraq, Kazakhstan, and the UAE itself—having overproduced their quotas and being required to compensate. The UAE’s departure strips the group of its third-largest producer at a time when supply dynamics are already fragile.
The exit follows Qatar’s departure from the group in 2019, and comes as OPEC prepared for a meeting in Vienna on Wednesday.
“The time has come to focus our efforts on what our national interest dictates and our commitment to our investors, customers, partners and global energy markets,” the statement read.
The UAE said it values more than five decades of cooperation within OPEC and wished the organization success going forward.
This story originally appeared on WIRED Middle East.
-
Politics1 week agoUK’s Starmer seeks to deflect blame over Mandelson appointment
-
Fashion1 week agoUK’s Sosandar returns to profitability amid robust FY26 performance
-
Business1 week agoHow Trump’s psychedelics executive order could unlock stalled cannabis reform
-
Entertainment1 week agoLee Anderson, Zarah Sultana kicked out of UK Parliament for calling PM ‘liar’
-
Business1 week agoExercise to test response to offshore energy threat involving vessels and drones
-
Business1 week agoUs-India Trade Talks: US–India trade deal: Where do talks stand & what to expect – explained – The Times of India
-
Fashion1 week agoIndia, US to resume BTA talks today
-
Tech1 week agoA Humanoid Robot Set a Half-Marathon Record in China
