Tech
Horses, the Most Controversial Game of the Year, Doesn’t Live Up to the Hype
The debate over Horses’ delisting is emblematic of a bigger fight that’s taken place this year, when platforms such as Steam and Itch.io yanked down “NSFW” and “porn” games in July. Developers, players, and trade organizations have continued to be vocal about developers’ creative rights to make games that deal with adult content.
“Developers shouldn’t have to compromise their creative vision, but we also have to acknowledge that games exist within capitalist structures where access to platforms determines livelihood,” says Jakin Vela, executive director of the International Game Developers Association, a nonprofit supporting game developers. “The key is informed decision-making and understanding what each platform allows, what risks exist, and whether your artistic goals outweigh those risks.”
Still, Vela says, these removals have exposed the fragility of developers’ economic security. “We should be concerned whenever a system allows a creator’s livelihood to be cut off without transparency or recourse,” he says. The video game industry is highly consolidated, with a handful of platforms controlling access to the vast majority of players. “That imbalance creates a structural issue, not necessarily because platforms enforce rules, but because there are so few viable alternatives.”
Santa Ragione’s future should not hinge on its ability to exist on Steam or any other platform. A bad project should not spell the end of a developer who is, for all the criticisms I have of its game, trying to say something. That part of this story may yet have a happy, or at least a survivable, ending. The Streisand effect is paying off for Horses. On the digital distribution platform GOG, where it’s still available, the game is a top seller.
Horses needs to be defended against censorship. It is also a bad game, and it should be examined as such. But while the conversation around Horses keeps stalling out on whether the game should be allowed to exist, or on how it’s not that offensive, the better question is why we really care about it at all, and why, as players, we feel so reluctant to talk about its failings like any other piece of media.
Render Networks unveils synchronised agentic critical infrastructure architecture | Computer Weekly
Render Networks has further expanded its footprint as a system of execution for critical infrastructure with the ClearWay platform.
As infrastructure investment accelerates across fibre broadband, electric grid modernisation, distributed energy and AI-driven datacentre expansion, capital discipline has emerged as a defining concern, according to the company.
Render has stated that traditional methods of data analysis and manual decision-making often hamper progress, with deployment risk now consequentially translating directly into capital risk. It added that operators, utilities and builders must reduce variance, accelerate cash conversion and establish audit-grade accountability across increasingly complex, multi-asset deployments.
Originally establishing itself in telecommunications, Render now supports electric utilities and multi-utility environments where construction accuracy is a prerequisite for operational reliability. Built for infrastructure environments where governance is “non-negotiable”, ClearWay is claimed to advance automation without eroding engineering authority.
The new platform is built to transform design data into live scopes of work, to capture verified field progress in real time and to reconcile workflows to maintain financial integrity. This is seen as producing defensible as-built records that flow “seamlessly” into operations. Rather than a collection of isolated AI features, ClearWay is said to operate as a federated system of specialised agents designed to act autonomously within identity, policy and audit controls.
Each agent operates with a defined degree of autonomy, a managed identity and least-privilege access. As additional ClearWay agents are introduced, the system is built to support progressively higher levels of autonomy, bounded by deterministic guardrails derived from user-defined operational policies. The result is decision-making underpinned by controlled, auditable automation that preserves first-order accountability while enabling meaningful scale.
The first release of ClearWay, scheduled for the second quarter of 2026, will introduce field assurance and work approval capabilities across telecom and electric deployments via an assurance agent and an approval agent.
The former is said to validate field-captured evidence against planned work in real time, ensuring accuracy before crews leave the site. The latter approves work autonomously based on a correlation of work type, planned versus actual units, photos and test results. When predefined criteria are met, the agent processes the approval, escalating exceptions only when human review is required.
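Render has not published ClearWay’s agent interfaces, but the approval logic described above — auto-approve when deterministic, policy-derived criteria are met, escalate everything else to a human — can be sketched in a few lines. All names, fields and thresholds below are illustrative assumptions, not the actual product API.

```python
# Hypothetical sketch of a deterministic approval-agent rule, assuming the
# behaviour described in the article: correlate work type, planned vs. actual
# units, photo evidence and test results; approve within guardrails, else escalate.
from dataclasses import dataclass


@dataclass
class WorkRecord:
    work_type: str
    planned_units: float
    actual_units: float
    photos_verified: bool
    tests_passed: bool


def review(record: WorkRecord, unit_tolerance: float = 0.05) -> str:
    """Return 'approve' when predefined criteria are met, else 'escalate'."""
    # Missing or failed evidence always requires human review.
    if not (record.photos_verified and record.tests_passed):
        return "escalate"
    # Variance between planned and actual units, as a fraction of plan.
    variance = abs(record.actual_units - record.planned_units) / max(
        record.planned_units, 1e-9
    )
    if variance <= unit_tolerance:
        return "approve"  # within the policy-defined guardrail
    return "escalate"  # material variance goes to a human reviewer


# Example: 103 units delivered against 100 planned, with evidence in order,
# falls inside the 5% tolerance and is auto-approved.
print(review(WorkRecord("fibre_splice", 100, 103, True, True)))  # approve
```

The point of the sketch is that the decision path is fully deterministic: the only outputs are an approval inside explicit bounds or an escalation, which matches the article’s claim of automation that preserves accountability rather than replacing review.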
By ensuring work is correct and defensible at the point of execution, Render is confident that ClearWay can accelerate design-to-build lead times, reduce construction rework, accelerate closeout and improve working capital velocity. This will be “particularly vital” in broadband and grid modernisation environments, where construction accuracy directly affects serviceability, network reliability and regulatory compliance.
“We have always focused on ensuring that work in the field becomes verified operational truth. The next step is ensuring that truth drives disciplined, governed and rapid action across the lifecycle,” said Stephen Rose, CEO of Render Networks.
“As capital efficiency becomes central to telecom and electric infrastructure, automation must ensure rapid decisions are made well to reinforce control and accountability. ClearWay is designed to do exactly that.”
Render will introduce additional specialised agents in the ClearWay architecture, spanning lifecycle management and financial reconciliation, service activation and operational monitoring, and predictive maintenance and sustainability governance.
Huawei: agent-oriented mobile networks to define Agent Verse | Computer Weekly
Two years after it proposed the transition from the mobile internet era to the mobile artificial intelligence (AI) era, a shift it says has driven the rapid adoption of agents in B2B applications, with 30 million agents deployed over the past 12 months, Huawei has introduced the Agent Verse, predicting a 10,000-fold increase in agent-handled work in networks by 2030.
The proposal of a new paradigm for communications came on the back of the comms tech giant’s Agentic Core Summit at MWC 2026, which centred on the strategic theme of building an agentic network with device-network-service synergy.
At the summit, Huawei revealed that it had worked with global mobile trade association the GSMA and a range of operators and industry organisations across the Middle East, Asia Pacific, Europe, Latin America and other regions to explore AI-driven advancements for the core network. They unanimously agreed that the 5G core network has entered “a new phase” called the Agentic Core.
Huawei’s Agentic Core system integrates AI into mobile internet, voice, operations and maintenance (O&M) and telco cloud infrastructure, allowing networks to evolve and their main service offerings to be reshaped. Huawei sees AI as extending the core network with three “transformative” abilities: real-time experience awareness; global experience evaluation and resource coordination; and intelligent interaction and execution.
This architecture is designed to give rise to a “network brain” that drives a closed-loop experience monetisation model where experiences are definable and assessable, service offerings are marketable, quality is guaranteed and exclusive user identities are perceptible.
The intelligent O&M part of the solution is built to transform network operations into an automated and intelligent ecosystem, driving the core network toward Autonomous Network (AN) L4 Phase 2. Phase 1 focuses on the intelligent assistant, NOEMate, which delivers automated closed-loop management for both faults and changes. Building on this, Phase 2 introduces hierarchical autonomy and builds an unmanned factory, achieving full single-domain autonomy within the core network.
Looking toward the 6G era, Huawei’s Agentic Core also supports ubiquitous AI agent access, building an agent-based communication network that spans devices and ecosystems. The Cloud Core Network is designed as an evolving communication infrastructure that will act as an interchange for AI agent networks.
And these, said Eric Zhao, vice-president and CMO of Huawei’s wireless solution, would operate in the Agent Verse: “Mobile AI is sparking a paradigm shift across the communications industry. With a trillion-scale surge in Agent Verse connections on the horizon, mobile networks need an urgent upgrade.
“To unlock the full potential of 5G-Advanced, the industry should accelerate end-to-end upgrades and innovation, building multidimensional network capabilities that can meet the demands ahead.”
At MWC, Huawei argued that agents were reshaping mobile network demands – for example, by evolving into engines of industrial automation and broad societal change. It offered the example of productivity agents making fully automated manufacturing possible through autonomous learning and the precise coordination of thousands of robots. It calculated that, by 2030, the global market will reach trillions of intelligent connections.
Zhao added: “AI’s development has gone wide and far beyond our imagination, and it is now becoming clear that the application of AI will be [through] agents. We believe that in the future, every industry, terminal, organisation and individual will be served by agents – and this is why we propose the Agent Verse. In the last year alone, 30 million agents were applied in different industries, significantly improving the productivity of verticals; the adoption pace of agents is incredibly fast.
“It is estimated that by 2030, the amount of work handled by agents will grow by 10,000 times. Agent adoption means changes in communication methods and communication objects. In the future, agents will introduce new interactions: agents will interact with people, and agents will interact with agents. This is why we think the times have changed and the wireless industry needs to be prepared to welcome new services.”
Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World
Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.
LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said in an interview with WIRED.
The financing, which values the startup at $3.5 billion, was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel.
AMI (pronounced like the French word for friend) aims to build “a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,” the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025.
LeCun’s startup represents a bet against many of the world’s biggest AI labs like OpenAI, Anthropic, and even his former workplace, Meta, which believe that scaling up LLMs will eventually deliver AI systems with human-level intelligence or even superintelligence. LLMs have powered viral products such as ChatGPT and Claude Code, but LeCun has been one of the AI industry’s most prominent researchers speaking out about the limitations of these AI models. LeCun is well known for being outspoken, but as a pioneer of modern AI who won the Turing Award in 2018, his skepticism carries weight.
LeCun says AMI aims to work with companies in manufacturing, biomedicine, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability.
AMI was cofounded by LeCun and several leaders he worked with at Meta, including the company’s former director of research science, Michael Rabbat; former vice president of Europe, Laurent Solly; and former senior director of AI research, Pascale Fung. Other cofounders include Alexandre LeBrun, former CEO of the AI health care startup Nabla, who will serve as AMI’s CEO, and Saining Xie, a former Google DeepMind researcher who will be the startup’s chief science officer.
The Case for World Models
LeCun does not dismiss the overall utility of LLMs. Rather, in his view, these AI models are simply the tech industry’s latest promising trend, and their success has created a “kind of delusion” among the people who build them. “It’s true that [LLMs] are becoming really good at generating code, and it’s true that they are probably going to become even more useful in a wide area of applications where code generation can help,” says LeCun. “That’s a lot of applications, but it’s not going to lead to human-level intelligence at all.”
LeCun has been working on world models for years inside of Meta, where he founded the company’s Fundamental AI Research lab, FAIR. But he’s now convinced his research is best done outside the social media giant. He says it’s become clear to him that the strongest applications of world models will be selling them to other enterprises, which doesn’t fit neatly into Meta’s core consumer business.
As AI world models like Meta’s Joint-Embedding Predictive Architecture (JEPA) became more sophisticated, “there was a reorientation of Meta’s strategy where it had to basically catch up with the industry on LLMs and kind of do the same thing that other LLM companies are doing, which is not my interest,” says LeCun. “So sometime in November, I went to see Mark Zuckerberg and told him. He’s always been very supportive of [world model research], but I told him I can do this faster, cheaper, and better outside of Meta. I can share the cost of development with other companies … His answer was, OK, we can work together.”