Tech
Everpure’s Evergreen One for AI brings Exa flash and GPU-based service-level agreements | Computer Weekly
Everpure has announced Evergreen One for AI, a performance-backed consumption model for artificial intelligence (AI) that extends to use of its FlashBlade//Exa high-performance storage. Meanwhile, the company – known as Pure Storage until recently – has announced the beta release of its Datastream automated AI pipeline appliance.
Evergreen One for AI differs from existing flexible capacity offers in the Everpure range by providing use of FlashBlade//Exa and service-level agreements (SLA) based on graphics processing unit (GPU) count. The aim here is to ensure that the storage environment provides the throughput to keep GPU resources fully utilised.
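The sizing logic behind a GPU-count-based SLA can be illustrated with a rough back-of-envelope calculation. The per-GPU bandwidth figures below are hypothetical placeholders for illustration, not numbers published by Everpure:

```python
# Back-of-envelope storage sizing for an AI training cluster.
# All per-GPU figures are illustrative assumptions, not vendor specs.

def required_storage_throughput(gpu_count: int,
                                read_gbps_per_gpu: float = 1.0,
                                checkpoint_burst_gbps_per_gpu: float = 2.0) -> dict:
    """Estimate the aggregate storage bandwidth needed to keep GPUs busy."""
    steady_read = gpu_count * read_gbps_per_gpu                    # data loading
    checkpoint_burst = gpu_count * checkpoint_burst_gbps_per_gpu   # periodic state saves
    return {
        "steady_read_gbps": steady_read,
        "checkpoint_burst_gbps": checkpoint_burst,
    }

# A hypothetical 512-GPU cluster under these assumptions:
print(required_storage_throughput(512))
# {'steady_read_gbps': 512.0, 'checkpoint_burst_gbps': 1024.0}
```

The point of tying the SLA to GPU count rather than raw capacity is that the required bandwidth scales with the number of accelerators to be fed, not with how much data happens to be stored.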
FlashBlade//Exa, Everpure’s highest-performance platform, was previously excluded from the Evergreen One consumption model.
Exa targets AI and high-performance computing (HPC) workloads that demand extremely high throughput, likely among customers that sit between large enterprise AI users and the hyperscalers.
At its launch, FlashBlade//Exa introduced an architecture to the Pure product line in which metadata and bulk storage are disaggregated with different hardware and protocols in use.
Kaycee Lai, vice-president for AI with Everpure, said Evergreen One for AI shifted the financial and operational risk away from the customer. “Specifically, we have an offering which we call Evergreen One for AI,” he said. “The big difference for AI is that we set the performance level of the offering based on the number of GPUs that you have … it is an SLA-backed performance guarantee.”
Evergreen One and Flex are Everpure’s pay-as-you-go procurement models, while Forever involves upfront purchase with built-in upgrades.
Automating the RAG pipeline
Everpure also announced the beta availability of Datastream. First previewed in late 2024, Datastream is a “single SKU” appliance that integrates Nvidia GPUs with Everpure storage. It is designed to tackle the “data readiness” challenge, said Lai. This refers to the oft-cited statistic that data teams spend 80% of their time preparing unstructured data for use.
The appliance automates the retrieval-augmented generation (RAG) pipeline, which includes ingest, curation and vectorisation of data. By providing an integrated hardware and software stack, Everpure aims to provide an “easy button” for enterprises building chatbots or autonomous agents, he said.
The software capability behind Datastream was built in-house, though it can connect to third-party data sources including Dell, HP and NetApp environments, as well as cloud-resident data. This flexibility allows the appliance to act as a central hub for AI readiness regardless of where the data lives.
“Today, people run RAG pipelines … they do the chunking, the embedding, the indexing to make sure that the data is going to be accurate and relevant so that chatbot agents can consume them in a specific format,” said Lai. “That takes up about 80% of most data teams’ time because there’s no standard tool.”
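The chunking, embedding and indexing steps Lai describes can be sketched in miniature. The hash-based "embedding" below is a deliberately toy stand-in for a real embedding model, used only to keep the example self-contained; a production pipeline like Datastream's would call a trained model and a vector database here:

```python
import hashlib
import math

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size character chunks. Real pipelines
    usually chunk on sentence or token boundaries instead."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text: str, dims: int = 8) -> list[float]:
    """Toy embedding: hash bytes into a fixed-length unit vector.
    A real RAG pipeline would call an embedding model here."""
    digest = hashlib.sha256(chunk_text.encode()).digest()
    vec = [digest[i] / 255.0 for i in range(dims)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def build_index(docs: list[str]) -> list[tuple[str, list[float]]]:
    """Index = list of (chunk, vector) pairs; real systems use a vector DB."""
    return [(c, embed(c)) for d in docs for c in chunk(d)]

def search(index, query: str, top_k: int = 2) -> list[str]:
    """Rank chunks by dot-product similarity to the query vector."""
    qv = embed(query)
    scored = [(sum(a * b for a, b in zip(qv, v)), c) for c, v in index]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

index = build_index(["Chatbots need relevant context.",
                     "Agents consume indexed data."])
print(search(index, "relevant context"))
```

Even in this toy form, the stages map onto the 80% of data-team time Lai cites: splitting raw documents, converting chunks to vectors, and building a searchable index that a chatbot or agent can query at answer time.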
Underpinning performance
To support these launches, Everpure revealed new benchmarks intended to validate its hardware under AI stress. In MLPerf Storage v2.0 testing, the company claimed the top spot for checkpointing – a critical function for saving the state of a model during long training runs – reporting results up to two times better than competitors such as Huawei and Vast.
The company also cited SPECstorage AI image benchmarks, in which it outperformed NetApp’s AFX platform by approximately 20%, said Lai.
This Indigenous Language Survived Russian Occupation. Can It Survive YouTube?
When anthropology researcher Ashley McDermott was doing fieldwork in Kyrgyzstan a few years ago, she says many people voiced the same concern: Children were losing touch with their indigenous language. The Central Asian country of 7 million people was under Russian control for a century until 1991, but Kyrgyz (pronounced kur-giz) survived and remains widely spoken among adults.
McDermott, a doctoral student at the University of Michigan, says she also heard that some kids in rural villages where Kyrgyz dominated had spontaneously learned to speak Russian. The adults largely blamed a singular force: YouTube.
McDermott and a team of five researchers across four universities in the US and Kyrgyzstan have released new research they believe proves the fears about YouTube’s influence are valid. The group simulated user behavior on YouTube and collected nearly 11,000 unique search results and video recommendations.
What they found is that Kyrgyz-language searches for popular kid interests such as cartoons, fairy tales, and mermaids often did not yield content in Kyrgyz. Even after watching 10 children’s videos featuring Kyrgyz speech to signal a strong preference for it, the simulated users received fewer Kyrgyz-language recommendations for what to watch next than, surprisingly, bots showing no language preference at all. The findings show YouTube prioritizes Russian-language content over Kyrgyz-language videos, especially when searching or browsing children’s topics, according to the researchers.
“Kyrgyz children are algorithmically constructed as audiences for Russian content,” Nel Escher, a coauthor who is a postdoctoral scholar at UC Berkeley, said during a presentation at the school last week. “There is no good way to be a Kyrgyz-speaking kid on YouTube.”
McDermott recalls one frustrated Kyrgyzstani mother explaining in 2023 that she paid the internet bill a day late each month so her household would regularly have one day without internet, and thus without YouTube.
YouTube, which has “committed to amplifying indigenous voices,” did not respond to WIRED’s requests for comment. The researchers are attempting to meet with YouTube’s parental controls team to discuss the potential for language filters, according to Escher.
The researchers say their work is the latest to show how online platforms can reinforce colonial culture and influence offline behavior. Under Soviet control, people in Kyrgyzstan had to learn Russian to succeed. Today, many adults are fluent in both Russian and Kyrgyz, with Russian remaining important for commerce. Kids are required to learn at least some Kyrgyz in school. But many spend several hours a day online, and watching YouTube is the leading activity, McDermott says. Quoting from Russian-language videos is common, whether it’s creators’ refrains like “Let’s do a challenge,” adaptations of American words such as “cringe,” or parroted accents and syntax.
In one of the researchers’ experiments, they searched for several subjects that are spelled the same in Russian and Kyrgyz, including Harry Potter and Minecraft. The results were predominantly Russian. Overall, just 2.7 percent of the videos the research team analyzed even appeared to include ethnically Kyrgyz people.
YouTube “socializes youth to view Russian as the default language of entertainment and technology and to view Kyrgyz as uninteresting,” the researchers wrote in a self-published paper accepted to a social computing conference scheduled for October.
The researchers say there is ample Kyrgyz-language children’s content for YouTube to promote. In 2024, the 35th-most viewed channel on YouTube across the world was D Billions, a Kyrgyzstan-based children-focused content studio with a dedicated Kyrgyz-language channel that has nearly 1 million subscribers.
Cyber experts take an optimistic view of AI-powered hacking | Computer Weekly
The annual showcase at the Centre for Emerging Technology and Security (CETaS) kicked off with a discussion on the implications of Claude Mythos.
Opening the conference, Alexander (Sacha) Babuta, director of CETaS at the Alan Turing Institute, said that Anthropic’s latest frontier model, Claude Mythos Preview, demonstrates major improvements in mathematics, cyber security, software engineering and automated vulnerability detection.
While the model can identify and autonomously exploit previously undiscovered vulnerabilities in real-world systems, he offered an optimistic view of how Claude Mythos Preview could be used to secure enterprise IT. “Companies can use models like Anthropic Mythos to rapidly discover vulnerabilities in their own systems and patch them to strengthen digital security for everyone,” said Babuta.
A study of the cyber crime community between the release of ChatGPT in 2022 and the end of 2025 revealed that cyber crime forums played host to a number of “dark AI” products.
These are claimed by their owners to be homegrown or extensively retrained and jailbroken large language models (LLMs) customised and tailored for cyber crime. But despite generating some early enthusiasm on the forums, these have made little impact to date, Ben Collier, senior lecturer at the University of Edinburgh, said in a presentation discussing the findings.
When the researchers looked at enterprise-grade, legitimate products designed explicitly to turn a novice developer into a competent coder, they found many aspiring cyber criminals experimenting with tools like ChatGPT and Claude, who, the researchers said, “excitedly report back on their discoveries”. However, Collier noted that a deeper exploration of these discussions found that, in most cases, forum members lacked the basic technical skills needed to use AI tools effectively for committing cyber crime.
“They’re using vibe coding tools for hobby projects, but particularly for the basic logistics of cyber crime operations,” he said. “Most of the coding involved in cyber crime isn’t hacking. It’s the same administration and basic engineering works that you’d need for any small startup, which means a lot of them don’t actually need to jailbreak Claude to get real utility out of it.”
The pessimistic view is that, as these tools evolve, they could be used for sophisticated cyber attacks. Adam Beaumont, interim director at the AI Security Institute (ASI), put that case. Beaumont, the former chief AI officer at GCHQ, said the ASI recently demonstrated how a frontier AI model executed a 32-step cyber attack against a simulated corporate environment, from initial reconnaissance through to full network takeover.
“We estimate it would take a skilled human professional 20 hours’ worth of work, and this was the first time any model had done it, and weeks later, we tested a second model,” he said.
Beaumont pointed out that the attack he described was not a model answering a question about hacking. “It was a system that hacked,” he said. “We still don’t fully know how to ensure these systems act as we intend, or how to guarantee they remain under meaningful human control as they grow more capable.”
Beaumont called the ASI demonstration an “honest starting point”. “The uncertainty is real and the discomfort is appropriate,” he said.
For Beaumont, the demonstration represents a foundation on which government, industry and the research community can build, enabling decisions grounded in evidence of what these systems can actually do.
How Shivon Zilis Operated as Elon Musk’s OpenAI Insider
As the first week of trial in Musk v. Altman comes to a close, one person has emerged as a critical behind-the-scenes manager of communications and egos in OpenAI’s early years: Shivon Zilis.
A longtime employee of Musk and the mother to four of his children, Zilis first joined OpenAI as an advisor in 2016. She later served as a director of its nonprofit board from 2020 until 2023 and has also worked as an executive at Musk’s other companies, Neuralink and Tesla.
When asked about the nature of his relationship with Zilis in court, Musk offered several answers. At one point, he called her a “chief of staff.” Later, a “close advisor.” At another point, he said “we live together and she’s the mother of four of my children,” though Zilis said in a deposition that Musk is more of a regular guest and maintains his own residence. Last September, Zilis told OpenAI’s attorneys that she became romantic with Musk around 2016 after she had become an informal advisor to OpenAI. They had their first two children in 2021, she said.
But OpenAI’s lawyers have made the case in witness testimonies and evidence that her most important role, as it pertains to this lawsuit, is being a covert liaison between OpenAI and Musk, even years after he left the nonprofit’s board in February 2018.
“Do you prefer I stay close and friendly to OpenAI to keep info flowing or begin to disassociate? Trust game is about to get tricky so any guidance for how to do right by you is appreciated,” Zilis wrote in a text message to Musk on February 16, 2018, days before OpenAI announced he was leaving the board. Musk responded, “Close and friendly, but we are going to actively try to move three or four people from OpenAI to Tesla. More than that will join over time, but we won’t actively recruit them.”
When asked about this exchange on the witness stand, Musk said he “wanted to know what’s going on.”
In the same text thread, Musk said “there is little chance of OpenAI being a serious force if I focus on Tesla AI.” Zilis agreed, saying: “There is very low probability of a good future if someone doesn’t slow Demis down,” referring to the leader of Google DeepMind, whom Musk has said he didn’t trust to control a superintelligent AI system. “You don’t realize how much you have an ability to influence him directly or otherwise slow him down. I think you know I’m not a malicious person but in this case it feels fundamentally irresponsible to not find a way to slow or alter his path.”
Roughly two months later, in an email from April 23, 2018, Zilis updated Musk on OpenAI’s fundraising efforts and progress on a project to develop an AI that could play video games. In the same message, she said she had reallocated most of her time away from OpenAI to his other companies, Neuralink and Tesla, but told him, “if you’d prefer I pull more hours back to OpenAI oversight please let me know.”
Almost a year earlier, in the summer of 2017, OpenAI’s cofounders had started negotiating changes to the organization’s corporate structure—Musk wanted control of the company from the outset. In an email from August 28, 2017, Zilis wrote to Musk that she had met with Greg Brockman and Ilya Sutskever to discuss how equity would be divided up in the new company. She summarized points from the meeting, including that Brockman and Sutskever thought one person shouldn’t have unilateral power over AGI, should they develop it. Musk wrote back to Zilis, “This is very annoying. Please encourage them to go start a company. I’ve had enough.”
