Nscale founding director exits AI infrastructure provider in wake of $1.1bn investment round | Computer Weekly

One of the founding directors of artificial intelligence (AI) infrastructure provider Nscale has exited the company, in the wake of it securing $1.1bn in Series B funding, Computer Weekly has learned.
According to a Companies House filing dated 8 October 2025, Nathan Townsend has stepped down as a director of Nscale, which he co-founded with company CEO Josh Payne in May 2024.
The company was created through a spin-off from Arkon Energy, a cryptocurrency mining and renewably powered infrastructure provider that Townsend originally founded with Payne. It is unclear whether Arkon Energy still exists, as the company’s website is no longer accessible, but Townsend’s LinkedIn page states that he remains its chief operating officer.
Companies House also confirmed that another director, investment banker Barry Kupferberg, departed the company on 8 October 2025.
Meanwhile, the names of two other directors have now been added to the company’s roster, according to Companies House, including Øyvind Eriksen, who is the president and CEO of Aker ASA, the Norwegian industrial investment company that led Nscale’s recent funding round.
In a statement to Computer Weekly, a spokesperson for Nscale said the recent reshuffle of the company’s board of directors is connected to the Series B funding the company closed in late September 2025. As reported by Computer Weekly at the time, Nscale claimed the $1.1bn investment it received through its Series B funding round is the largest ever secured in the UK and Europe.
“In connection with this closing, Nscale implemented some governance changes, including modifications in the directors appointed to the board,” a company spokesperson told Computer Weekly. “We believe our new board composition will set the group up for success as Nscale continues to grow.”
Within days of the boardroom reshuffle, the company announced – on 15 October 2025 – that it had signed an “expanded deal” with software giant Microsoft to supply 200,000 NVIDIA GB300 graphics processing units (GPUs) to stand up hyperscale AI infrastructure across Europe and the US.
This is in addition to another deal announced in September 2025 that committed the two companies to work together to create the UK’s largest AI supercomputer at Nscale’s site in Loughton, Essex, which will also play host to Microsoft Azure services.
Nscale confirmed that delivery on the October 2025 deal, billed by the company as one of the largest AI infrastructure contracts ever signed, will be carried out in collaboration with Dell Technologies.
“This agreement confirms Nscale’s place as a partner of choice for the world’s most important technology leaders,” said Payne in a statement announcing the deal. “Few companies are equipped to deliver GPU deployments at this scale, but we have the experience and have built the global pipeline to do so.
“The pace with which we have expanded our capacity demonstrates both our readiness and our commitment to efficiency, sustainability and providing our customers with the most advanced technology available. It’s a clear signal that Nscale is setting a new standard for how the next wave of AI infrastructure will be delivered.”
WIRED Roundup: Satellites Data Leak, Cybertrucks, Politicized Federal Workers

Zoë Schiffer: Yeah, I mean, I was talking to someone before these recent layoffs who’d worked at the CDC previously and had been pretty involved in efforts to study the impact of certain diseases or pandemics specifically on pregnant populations, and this person had told me a while ago, that entire team was gone. They didn’t have many people in place anymore who could look at particularly vulnerable populations from a health perspective, which I found pretty sad and disturbing, but now, I mean, it’s just getting so much worse. It’s getting so much worse.
Jake Lahut: And Russell Vought seems to be quite happy about each additional version of this that keeps coming down the pike, so.
Zoë Schiffer: Right. Okay. We’ll talk more about these federal layoffs and how they’ve affected other agencies too in our next segment. But before we go to break, I’ve got a fun and very tech bro scoop for you: Cybertrucks.
Jake Lahut: Yeah. Honestly, I should be paying you to be on the show today, Zoë, so tell me more about it.
Zoë Schiffer: Okay. Well, I found this story so charming because essentially our Features Director Reyhan had said, “Let’s do a photo essay of Cybertruck owners.” And I was like, “I volunteer as tribute. I really want to do this.” So I contacted a bunch of people. I was actually going around, and when I saw Cybertrucks, I would leave little notes on their cars. Not a single person ever responded to me, I was like.
Jake Lahut: Stalker behavior.
Zoë Schiffer: “Okay, all right.” But eventually I got in contact with this guy who runs Cybertrucks Owners Only, which is this 50,000-person Facebook group that’s really, really active. And he, while very suspicious of the media, like many Cybertruck owners, was like, “I’m game. If you come to Palm Springs on this weekend, we can have a Cybertruck meetup and you can go meet people, you can take photos and interview them.” I love reporting where your original thesis is completely disproven in the course of the reporting, and the Cybertruck owners really see themselves as the victims of this campaign. They’re being spit at, they’re being targeted, people yell that they’re Nazis. And a lot of the people who I talked to don’t see their purchase of this car as at all political. They’re like, “I just like the car. It’s a cool car, it’s fun and all of these crazy liberal people are screaming at me all day. I have my kids in the car and they’re chasing after me calling me a Nazi.” The article came out today, there’s some really cool photos. I’m curious to hear what you thought.
AI model could boost robot intelligence via object recognition

Stanford researchers have developed an innovative computer vision model that recognizes the real-world functions of objects, potentially allowing autonomous robots to select and use tools more effectively.
In the field of AI known as computer vision, researchers have successfully trained models that can identify objects in two-dimensional images. It is a skill critical to a future of robots able to navigate the world autonomously. But object recognition is only a first step. AI also must understand the function of the parts of an object—to know a spout from a handle, or the blade of a bread knife from that of a butter knife.
Computer vision experts call such utility overlaps “functional correspondence.” It is one of the most difficult challenges in computer vision. But now, in a paper to be presented at the International Conference on Computer Vision (ICCV 2025), Stanford scholars will debut a new AI model that can not only recognize the various parts of an object and discern their real-world purposes, but also map those parts between objects at pixel-level granularity.
A future robot might be able to distinguish, say, a meat cleaver from a bread knife or a trowel from a shovel and select the right tool for the job. Potentially, the researchers suggest, a robot might one day transfer the skills of using a trowel to a shovel—or of a bottle to a kettle—to complete a job with different tools.
“Our model can look at images of a glass bottle and a tea kettle and recognize the spout on each, but also it comprehends that the spout is used to pour,” explains co-first author Stefan Stojanov, a Stanford postdoctoral researcher advised by senior authors Jiajun Wu and Daniel Yamins. “We want to build a vision system that will support that kind of generalization—to analogize, to transfer a skill from one object to another to achieve the same function.”
Establishing correspondence is the art of figuring out which pixels in two images refer to the same point in the world, even if the photographs are taken from different angles or show different objects. This is hard enough when the two images show the same object but, as the bottle-versus-tea-kettle example shows, the real world is rarely so cut-and-dried. Autonomous robots will need to generalize across object categories and decide which object to use for a given task.
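To make the idea concrete, here is a minimal sketch of nearest-neighbor pixel correspondence. It assumes dense per-pixel descriptors from some pretrained vision backbone (simulated below with random arrays) and is an illustrative baseline, not the Stanford team’s actual method.

```python
# Minimal sketch of nearest-neighbor pixel correspondence. The per-pixel
# feature extractor is assumed (a pretrained vision backbone would supply
# it); random arrays stand in for real features here.
import numpy as np

def dense_correspondence(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """For each pixel of image A, return the index of the most similar
    pixel of image B under cosine similarity.

    feats_a: (Ha*Wa, D) descriptors for image A
    feats_b: (Hb*Wb, D) descriptors for image B
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                # (Ha*Wa, Hb*Wb) cosine similarities
    return sim.argmax(axis=1)    # best match in B for every pixel of A

# Toy usage with random stand-in features
rng = np.random.default_rng(0)
matches = dense_correspondence(rng.normal(size=(64 * 64, 128)),
                               rng.normal(size=(64 * 64, 128)))
print(matches.shape)  # (4096,) -- one matched pixel index per source pixel
```

Real systems refine raw matches of this kind with smoothness and consistency checks, but the core operation is this sort of similarity search.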
One day, the researchers hope, a robot in a kitchen will be able to select a tea kettle to make a cup of tea, know to pick it up by the handle, and to use the kettle to pour hot water from its spout.
Autonomy rules
True functional correspondence would make robots far more adaptable than they are currently. A household robot would not need training on every tool at its disposal but could reason by analogy to understand that while a bread knife and a butter knife may both cut, they each serve a specific purpose.
In their work, the researchers say, they have achieved “dense” functional correspondence, where earlier efforts managed only sparse correspondence, defining a handful of key points on each object. The challenge so far has been a paucity of training data, which typically had to be amassed through human annotation.
“Unlike traditional supervised learning where you have input images and corresponding labels written by humans, it’s not feasible to humanly annotate thousands of pixels individually aligning across two different objects,” says co-first author Linan “Frank” Zhao, who recently earned his master’s in computer science at Stanford. “So, we asked AI to help.”
The team was able to achieve a solution with what is known as weak supervision—using vision-language models to generate labels to identify functional parts and using human experts only to quality-control the data pipeline. It is a far more efficient and cost-effective approach to training.
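In rough outline, that pipeline might look like the sketch below; `query_vlm` and the audit step are hypothetical stand-ins for whatever vision-language model and review process the team used, not their published code.

```python
# Hedged sketch of weak supervision: a vision-language model proposes
# functional-part labels, and human experts only spot-check a sample.
# query_vlm is a hypothetical placeholder, not a real API.
import random

def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder for a vision-language model call (hypothetical)."""
    return "spout"  # e.g. the model names the part used for pouring

def build_weak_labels(image_paths, part_prompt, audit_rate=0.05):
    # Machine-generated labels for every image
    labels = {path: query_vlm(path, part_prompt) for path in image_paths}
    # Humans review only a small random sample for quality control,
    # instead of hand-annotating thousands of pixels per image
    audit_set = random.sample(sorted(labels),
                              max(1, int(audit_rate * len(labels))))
    return labels, audit_set
```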
“Something that would have been very hard to learn through supervised learning a few years ago now can be done with much less human effort,” Zhao adds.
In the kettle and bottle example, for instance, each pixel in the spout of the kettle is aligned with a pixel in the mouth of the bottle, providing dense functional mapping between the two objects. The new vision system can spot function in structure across disparate objects—a valuable fusion of functional definition and spatial consistency.
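A hedged sketch of that alignment step, reusing the nearest-neighbor matcher idea from above: if weak labels supply a mask for the kettle’s spout and one for the bottle’s mouth, matching can be confined to just those pixels. The masks and features are assumed inputs here, not the paper’s implementation.

```python
# Sketch: dense functional correspondence restricted to part masks,
# so spout pixels only match mouth pixels. Inputs are assumptions.
import numpy as np

def functional_correspondence(feats_a, feats_b, mask_a, mask_b):
    """Match only pixels inside the functional-part masks.

    feats_*: (N, D) per-pixel descriptors; mask_*: (N,) boolean part masks.
    Returns (i, j) pairs: pixel i in object A's part -> pixel j in B's part.
    """
    idx_a, idx_b = np.flatnonzero(mask_a), np.flatnonzero(mask_b)
    a = feats_a[idx_a] / np.linalg.norm(feats_a[idx_a], axis=1, keepdims=True)
    b = feats_b[idx_b] / np.linalg.norm(feats_b[idx_b], axis=1, keepdims=True)
    best = (a @ b.T).argmax(axis=1)  # nearest mouth pixel per spout pixel
    return list(zip(idx_a.tolist(), idx_b[best].tolist()))
```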
Seeing the future
For now, the system has been tested only on images and not in real-world experiments with robots, but the team believes the model is a promising advance for robotics and computer vision. Dense functional correspondence is part of a larger trend in AI in which models are shifting from mere pattern recognition toward reasoning about objects. Where earlier models saw only patterns of pixels, newer systems can infer intent.
“This is a lesson in form following function,” says Yunzhi Zhang, a Stanford doctoral student in computer science. “Object parts that fulfill a specific function tend to remain consistent across objects, even if other parts vary greatly.”
Looking ahead, the researchers want to integrate their model into embodied agents and build richer datasets.
“If we can come up with a way to get more precise functional correspondences, then this should prove to be an important step forward,” Stojanov says. “Ultimately, teaching machines to see the world through the lens of function could change the trajectory of computer vision—making it less about patterns and more about utility.”
More information:
Weakly-Supervised Learning of Dense Functional Correspondences, arXiv (2025). DOI: 10.48550/arxiv.2509.03893. Project page: dense-functional-correspondence.github.io/
Citation: AI model could boost robot intelligence via object recognition (2025, October 20), retrieved 20 October 2025 from https://techxplore.com/news/2025-10-ai-boost-robot-intelligence-recognition.html
What the Huge AWS Outage Reveals About the Internet

A massive cloud outage stemming from Amazon Web Services’ key US-EAST-1 region, its hub in northern Virginia near the US capital, caused widespread disruptions of websites and platforms around the world on Monday morning. Amazon’s main e-commerce platform and other properties, including Ring doorbells and the Alexa smart assistant, suffered interruptions and outages throughout the morning, as did Meta’s communication platform WhatsApp, OpenAI’s ChatGPT, PayPal’s Venmo payment platform, multiple web services from Epic Games, multiple British government sites, and many others.
The outages stemmed from Amazon’s “DynamoDB” database application programming interfaces in US-EAST-1, and AWS said in status updates that the problem was specifically related to DNS resolution issues. The “Domain Name System” is a foundational internet service that essentially acts as an automatic phonebook lookup to translate web URLs like “www.wired.com” into numeric server IP addresses so web browsers show users the right content. DNS “resolution” issues occur when DNS servers aren’t accurately connecting these dots and, to keep with the phonebook analogy, are providing the wrong numbers for a given name, or vice versa.
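DNS resolution is easy to observe directly. The short Python sketch below asks the local resolver for the addresses behind DynamoDB’s standard regional endpoint name; during the outage, this lookup step was what failed.

```python
# Observe DNS resolution: ask the configured resolver for the IPv4
# addresses behind a hostname. dynamodb.us-east-1.amazonaws.com is the
# standard regional endpoint name for the affected service.
import socket

def resolve(hostname: str) -> list[str]:
    infos = socket.getaddrinfo(hostname, 443,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})  # unique IPs from sockaddrs

print(resolve("dynamodb.us-east-1.amazonaws.com"))
```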
“Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1,” AWS wrote in status updates on Monday. Shortly after the company added: “If you are still experiencing an issue resolving the DynamoDB service endpoints in US-EAST-1, we recommend flushing your DNS caches.”
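Flushing a cache discards stale answers on the client side. More broadly (a general resilience pattern, not AWS’s specific advice above), service clients are often configured to retry with backoff so that transient resolution failures do not surface as hard errors; here is a sketch using boto3’s documented retry configuration:

```python
# Client-side resilience sketch: boto3's adaptive retry mode backs off
# and rate-limits retries when calls to an endpoint start failing.
import boto3
from botocore.config import Config

retry_config = Config(
    region_name="us-east-1",
    retries={"max_attempts": 10, "mode": "adaptive"},
)
dynamodb = boto3.client("dynamodb", config=retry_config)
# Calls such as dynamodb.list_tables() are now retried with backoff
# when the endpoint is intermittently unreachable.
```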
An AWS spokesperson did not immediately respond when asked for details about the nature of the failure. DNS resolution issues can be malicious—known as DNS hijacking—but there is no indication that Monday’s AWS outages were nefarious.
“When the system couldn’t correctly resolve which server to connect to, cascading failures took down services across the internet,” says Davi Ottenheimer, a longtime security operations and compliance manager and a vice president at the data infrastructure company Inrupt. “Today’s AWS outage is a classic availability problem, and we need to start seeing it more as data integrity failure.”
Problems began around 3 am ET. By 5:22 am ET, AWS had applied “initial mitigations” that were starting to take effect. At 6:35 am ET, Amazon said that it had fully addressed the underlying technical issues but that “some services will have a backlog of work to work through, which may take additional time to fully process.”