Tech
SuperMicro takes on server leaders as AMD pushes on-premise AI | Computer Weekly
Market data from analyst IDC has shown that SuperMicro has leapfrogged established server makers Lenovo and HPE as the second-largest PC server maker behind Dell.
SuperMicro experienced growth of almost 134% in the fourth quarter of 2025, with revenue of $11.7bn, which means it accounts for over 9% of the global server market. Dell was ahead with a 10% market share and revenue of $12.6bn, while Chinese manufacturer IEIT Systems took third spot with revenue of $5.2bn and a 4% market share, ahead of Lenovo, which posted revenue of $5.1bn, and HPE ($3.9bn).
“The race for AI [artificial intelligence] adoption is settling the market pace, and with companies starving for infrastructure looking not only at GPUs [graphics processing units], but also consuming more CPUs [central processing units] among other components in order to feed their needs, we are going to see more price pressures, and that may impact on market dynamics with less units but higher average selling prices going forward,” said Juan Seminara, research director of Worldwide Enterprise Infrastructure Trackers at IDC.
IDC noted that volatile, rising prices for certain components, such as GPUs, dynamic random access memory (DRAM) and solid state drives (SSDs), have prompted some companies to try to lock in prices in advance while the industry adjusts to the new reality. It predicted that the impact of this price volatility could hit harder during 2026 as demand keeps outpacing service capacity in the near term.
Dell aside, the established server makers appear to be losing ground in the server market. But they are eyeing a new market opportunity, pushed by chipmaker AMD: the deployment of on-premise PC servers optimised to run agentic AI.
In a bid to entice IT buyers away from cloud-based AI hardware, AMD has unveiled what it sees as a new category of PC called Agent Computers. In a post on the AMD website, the company described how to run OpenClaw, the open source AI agent, locally on AMD Ryzen AI Max+ processors and Radeon GPUs using a Windows 11 PC with the Windows Subsystem for Linux (WSL).
AMD said the PC system, configured with 128GB of unified memory, is capable of running “cloud-quality AI agent workloads efficiently” using OpenClaw. According to its own benchmark data, with the Qwen 3.5 35B A3B model, the system delivers around 45 tokens per second and processes 10,000 input tokens in about 19.5 seconds. AMD said the configuration supports a maximum context window of 260,000 tokens and can run up to six agents concurrently, which it said delivers scalable local AI experimentation while maintaining strong responsiveness on consumer hardware.
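As a quick sanity check of those figures, the Python sketch below works out what they imply in practice (the 1,000-token reply length is a hypothetical example, not an AMD number): prompt ingestion runs at roughly 513 tokens per second, and a 1,000-token answer at the quoted decode rate would take about 22 seconds.

```python
# Back-of-envelope check of the quoted AMD benchmark figures.
# The input figures come from the article; the 1,000-token reply
# length below is an illustrative assumption.

def prefill_rate(input_tokens: float, seconds: float) -> float:
    """Tokens processed per second while ingesting the prompt."""
    return input_tokens / seconds

def decode_time(output_tokens: float, tokens_per_second: float) -> float:
    """Seconds to generate a response at a steady decode rate."""
    return output_tokens / tokens_per_second

# Quoted: 10,000 input tokens in ~19.5s; ~45 tokens/s generation.
print(round(prefill_rate(10_000, 19.5)))   # ~513 tokens/s prompt ingestion
print(round(decode_time(1_000, 45), 1))    # ~22.2s for a 1,000-token reply
```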
AMD sees such a system running autonomously rather like the pre-cloud era branch office servers, handling tasks sent by users through a browser user interface on another Windows PC, or via Slack or WhatsApp.
PC makers that have “agent-ready” PCs include HP, Lenovo and Asus. The IDC figures show that revenue for servers with an embedded GPU in the fourth quarter of 2025 grew 59.1% year-over-year, representing more than half of the total server market revenue.
The AMD Ryzen AI Max+ has an integrated GPU and is currently one of the processor options for PCs certified as Copilot+ devices. While those devices are laptops or desktop PCs with monitors, AMD’s Agent Computer appears to be positioned more as a traditional desktop Windows PC running as a server, without a screen or keyboard. The setup AMD provides is optimised to run LM Studio, which uses Ubuntu on WSL to provide access to large language models; these then work with an OpenClaw server running locally on the same hardware.
Tech
This Indigenous Language Survived Russian Occupation. Can It Survive YouTube?
When anthropology researcher Ashley McDermott was doing fieldwork in Kyrgyzstan a few years ago, she says many people voiced the same concern: Children were losing touch with their indigenous language. The Central Asian country of 7 million people was under Russian control for a century until 1991, but Kyrgyz (pronounced kur-giz) survived and remains widely spoken among adults.
McDermott, a doctoral student at the University of Michigan, says she also heard that some kids in rural villages where Kyrgyz dominated had spontaneously learned to speak Russian. The adults largely blamed a singular force: YouTube.
McDermott and a team of five researchers across four universities in the US and Kyrgyzstan have released new research they believe proves the fears about YouTube’s influence are valid. The group simulated user behavior on YouTube and collected nearly 11,000 unique search results and video recommendations.
What they found is that Kyrgyz-language searches for popular kid interests such as cartoons, fairy tales, and mermaids often did not yield content in Kyrgyz. Even after watching 10 children’s videos featuring Kyrgyz speech to demonstrate a strong desire for it, the simulated users received fewer Kyrgyz-language recommendations for what to watch next than, surprisingly, bots showing no language preference at all. The findings show YouTube prioritizes Russian-language content over Kyrgyz-language videos, especially when searching or browsing children’s topics, according to the researchers.
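As a rough illustration of how an audit like this can be automated, the sketch below queries the YouTube Data API v3 search endpoint with a language hint and measures what share of the returned videos declare a given audio language. This is an assumption about method, not the researchers’ actual pipeline; `API_KEY` is a placeholder, and while `relevanceLanguage` and `defaultAudioLanguage` are real fields in the public API, many uploads leave the language untagged.

```python
# Hypothetical sketch of a YouTube search-result language audit.
# API_KEY is a placeholder; a real YouTube Data API v3 key is required.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_KEY = "YT_API_KEY"  # placeholder, not a real credential
BASE = "https://www.googleapis.com/youtube/v3"

def get_json(endpoint: str, **params) -> dict:
    """GET a YouTube Data API v3 endpoint and decode the JSON body."""
    with urlopen(f"{BASE}/{endpoint}?{urlencode(params)}") as resp:
        return json.load(resp)

def search_video_ids(query: str, prefer_lang: str = "ky") -> list[str]:
    """One page of video IDs for a query, hinting a preferred language."""
    data = get_json("search", part="snippet", q=query, type="video",
                    relevanceLanguage=prefer_lang, maxResults=50, key=API_KEY)
    return [item["id"]["videoId"] for item in data.get("items", [])]

def audio_languages(video_ids: list[str]) -> list[str]:
    """Declared audio language per video ('' when the uploader set none)."""
    data = get_json("videos", part="snippet", id=",".join(video_ids), key=API_KEY)
    return [item["snippet"].get("defaultAudioLanguage", "")
            for item in data.get("items", [])]

def share(langs: list[str], lang: str) -> float:
    """Fraction of results declared to be in `lang`."""
    return sum(l.startswith(lang) for l in langs) / len(langs) if langs else 0.0

# With a real key, share(audio_languages(search_video_ids("жомок")), "ky")
# would estimate the Kyrgyz share of results for a fairy-tale query.
```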
“Kyrgyz children are algorithmically constructed as audiences for Russian content,” Nel Escher, a coauthor who is a postdoctoral scholar at UC Berkeley, said during a presentation at the school last week. “There is no good way to be a Kyrgyz-speaking kid on YouTube.”
McDermott recalls one frustrated Kyrgyzstani mother in 2023 explaining that she paid the internet bill a day late each month to regularly have one day without internet and, thus, YouTube at home.
YouTube, which has “committed to amplifying indigenous voices,” did not respond to WIRED’s requests for comment. The researchers are attempting to meet with YouTube’s parental controls team to discuss the potential for language filters, according to Escher.
The researchers say their work is the latest to show how online platforms can reinforce colonial culture and influence offline behavior. Under Soviet control, people in Kyrgyzstan had to learn Russian to succeed. Today, many adults are fluent in both Russian and Kyrgyz, with Russian remaining important for commerce. Kids are required to learn at least some Kyrgyz in school. But many spend several hours a day online, and watching YouTube is the leading activity, McDermott says. Quoting from Russian-language videos is common, whether creators’ refrains like “Let’s do a challenge,” adaptations of American words such as “cringe,” or parroted accents and syntax.
In one of the researchers’ experiments, they searched for several subjects that are spelled the same in Russian and Kyrgyz, including Harry Potter and Minecraft. The results were predominantly Russian. Overall, just 2.7 percent of the videos the research team analyzed appeared to include ethnically Kyrgyz people at all.
YouTube “socializes youth to view Russian as the default language of entertainment and technology and to view Kyrgyz as uninteresting,” the researchers wrote in a self-published paper accepted to a social computing conference scheduled for October.
The researchers say there is ample Kyrgyz-language children’s content for YouTube to promote. In 2024, the 35th-most viewed channel on YouTube across the world was D Billions, a Kyrgyzstan-based children-focused content studio with a dedicated Kyrgyz-language channel that has nearly 1 million subscribers.
Tech
Cyber experts take an optimistic view of AI-powered hacking | Computer Weekly
The annual showcase at the Centre for Emerging Technology and Security (CETaS) kicked off with a discussion on the implications of Claude Mythos.
Opening the conference, Alexander (Sacha) Babuta, director of CETaS at the Alan Turing Institute, said that Anthropic’s latest frontier model, Claude Mythos Preview, demonstrates major improvements in mathematics, cyber security, software engineering and automated vulnerability detection.
While the model can identify and autonomously exploit previously undiscovered vulnerabilities in real-world systems, he offered an optimistic outlook on how Claude Mythos Preview could be used to secure enterprise IT. “Companies can use models like Anthropic Mythos to rapidly discover vulnerabilities in their own systems and patch them to strengthen digital security for everyone,” said Babuta.
A study of the cyber crime community between the release of ChatGPT in 2022 and the end of 2025 revealed that cyber crime forums played host to a number of “dark AI” products.
These are claimed by their owners to be homegrown or extensively retrained and jailbroken large language models (LLMs) customised and tailored for cyber crime. But despite generating some early enthusiasm on the forums, these have made little impact to date, Ben Collier, senior lecturer at the University of Edinburgh, said in a presentation discussing the findings.
When the researchers looked at enterprise-grade, legitimate products designed explicitly to turn a novice developer into a competent coder, they found many aspiring cyber criminals experimenting with tools like ChatGPT and Claude who, the researchers said, “excitedly report back on their discoveries”. However, Collier noted that a deeper exploration of these discussions found that, in most cases, forum members lacked the basic technical skills needed to use AI tools effectively for committing cyber crime.
“They’re using vibe coding tools for hobby projects, but particularly for the basic logistics of cyber crime operations,” he said. “Most of the coding involved in cyber crime isn’t hacking. It’s the same administration and basic engineering works that you’d need for any small startup, which means a lot of them don’t actually need to jailbreak Claude to get real utility out of it.”
The pessimistic view is that, as these tools evolve, they could be used for sophisticated cyber attacks. Adam Beaumont, interim director at the AI Security Institute (ASI), set out that view. Beaumont, the former chief AI officer at GCHQ, said the ASI recently demonstrated how a frontier AI model executed a 32-step cyber attack against a simulated corporate environment, from initial reconnaissance through to full network takeover.
“We estimate it would take a skilled human professional 20 hours’ worth of work, and this was the first time any model had done it, and weeks later, we tested a second model,” he said.
Beaumont pointed out that the attack he described was not a model answering a question about hacking. “It was a system that hacked,” he said. “We still don’t fully know how to ensure these systems act as we intend, or how to guarantee they remain under meaningful human control as they grow more capable.”
Beaumont called the ASI demonstration an “honest starting point”. “The uncertainty is real and the discomfort is appropriate,” he said.
For Beaumont, the demonstration represents something that can be built on, enabling government, industry and the research community to make decisions grounded in evidence of what these systems can actually do.
Tech
How Shivon Zilis Operated as Elon Musk’s OpenAI Insider
As the first week of trial in Musk v. Altman comes to a close, one person has emerged as a critical behind-the-scenes manager of communications and egos in OpenAI’s early years: Shivon Zilis.
A longtime employee of Musk and the mother to four of his children, Zilis first joined OpenAI as an advisor in 2016. She later served as a director of its nonprofit board from 2020 until 2023 and has also worked as an executive at Musk’s other companies, Neuralink and Tesla.
When asked about the nature of his relationship with Zilis in court, Musk offered several answers. At one point, he called her a “chief of staff.” Later, a “close advisor.” At another point, he said “we live together and she’s the mother of four of my children,” though Zilis said in a deposition that Musk is more of a regular guest and maintains his own residence. Last September, Zilis told OpenAI’s attorneys that her relationship with Musk became romantic around 2016, after she had become an informal advisor to OpenAI. They had their first two children in 2021, she said.
But OpenAI’s lawyers have made the case in witness testimonies and evidence that her most important role, as it pertains to this lawsuit, is being a covert liaison between OpenAI and Musk, even years after he left the nonprofit’s board in February 2018.
“Do you prefer I stay close and friendly to OpenAI to keep info flowing or begin to disassociate? Trust game is about to get tricky so any guidance for how to do right by you is appreciated,” Zilis wrote in a text message to Musk on February 16, 2018, days before OpenAI announced he was leaving the board. Musk responded, “Close and friendly, but we are going to actively try to move three or four people from OpenAI to Tesla. More than that will join over time, but we won’t actively recruit them.”
When asked about this exchange on the witness stand, Musk said he “wanted to know what’s going on.”
In the same text thread, Musk said “there is little chance of OpenAI being a serious force if I focus on Tesla AI.” Zilis reaffirmed him, saying: “There is very low probability of a good future if someone doesn’t slow Demis down,” referring to the leader of Google DeepMind, who Musk has said he didn’t trust to control a superintelligent AI system. “You don’t realize how much you have an ability to influence him directly or otherwise slow him down. I think you know I’m not a malicious person but in this case it feels fundamentally irresponsible to not find a way to slow or alter his path.”
Roughly two months later, in an email from April 23, 2018, Zilis updated Musk on OpenAI’s fundraising efforts and progress on a project to develop an AI that could play video games. In the same message, she said she had reallocated most of her time away from OpenAI to his other companies, Neuralink and Tesla, but told him, “if you’d prefer I pull more hours back to OpenAI oversight please let me know.”
Almost a year earlier, in the summer of 2017, OpenAI’s cofounders had started negotiating changes to the organization’s corporate structure—Musk wanted control of the new company from the outset. In an email from August 28, 2017, Zilis wrote to Musk that she had met with Greg Brockman and Ilya Sutskever to discuss how equity would be divided up in the new company. She summarized points from the meeting, including that Brockman and Sutskever thought one person shouldn’t have unilateral power over AGI, should they develop it. Musk wrote back to Zilis, “This is very annoying. Please encourage them to go start a company. I’ve had enough.”
