Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything



Following leaked revelations at the end of March that Anthropic had developed a powerful new Claude model, the company formally announced Mythos Preview on Tuesday along with news of an industry consortium it has convened, known as Project Glasswing, to grapple with the cybersecurity implications of the new model and advancing capabilities more generally across the AI field.

The group includes Microsoft, Apple, and Google, as well as Amazon Web Services, the Linux Foundation, Cisco, Nvidia, Broadcom, and more than 40 other tech, cybersecurity, critical infrastructure, and financial organizations that will have private access to the model, which is not yet being generally released. The idea, in part, is simply to give the developers of the world’s foundational tech platforms time to turn Mythos Preview on their own systems so they can mitigate the vulnerabilities and exploit chains the model develops in simulated attacks. More broadly, Anthropic emphasizes that it convened the effort to kickstart urgent exploration of how advancing AI capabilities across the industry are, the company says, on the precipice of upending current software security and digital defense practices around the world.

“The real message is that this is not about the model or Anthropic,” Logan Graham, the company’s frontier red team lead, tells WIRED. “We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months. Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”

Models developed and trained by multiple companies have increasingly been able to find vulnerabilities in code and propose mitigations—or strategies for exploitation. This sets up the next generation of security’s classic cat-and-mouse game, in which a tool that aids defenders can also fuel bad actors, making attacks practical that were once too expensive or complex to attempt.

“Claude Mythos Preview is a particularly big jump,” Anthropic CEO Dario Amodei said on Tuesday in a Project Glasswing launch video. “We haven’t trained it specifically to be good at cyber. We trained it to be good at code, but as a side effect of being good at code, it’s also good at cyber.” He adds in the video that “more powerful models are going to come from us and from others. And so we do need a plan to respond to this.”

Anthropic’s Graham notes that in addition to vulnerability discovery—including producing potential attack chains and proofs of concept—Mythos Preview is capable of more advanced exploit development, penetration testing, endpoint security assessment, hunting for system misconfigurations, and evaluating software binaries without access to their source code.

In carrying out a staggered release of Mythos Preview, beginning with an industry collaboration phase, Graham says that Anthropic sought to draw on tenets of coordinated vulnerability disclosure, the process of giving developers time to patch a bug before it is publicly discussed.

“We’ve seen Mythos Preview accomplish things that a senior security researcher would be able to accomplish,” Graham says. “This has very big implications, then, for how capabilities like this should be released. Done not carefully, this could be a meaningful accelerant for attackers.”

Project Glasswing partners, including some of Anthropic’s competitors, struck a collaborative tone in statements as part of the launch.

“Google is pleased to see this cross-industry cybersecurity initiative coming together,” Heather Adkins, Google’s vice president of security engineering, says in a statement. “We have long believed that AI poses new challenges and opens new opportunities in cyber defense.”




Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo



Anthropic “has not satisfied the stringent requirements” for a stay that would temporarily lift the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, ruled on Wednesday. The decision is at odds with one issued last month by a lower-court judge in San Francisco, and it wasn’t immediately clear how the conflicting preliminary judgments would be resolved.

The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC, courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.

“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.

The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.

Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC, court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”

The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted a statement on X. “Today’s DC Circuit stay allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military readiness,” he wrote. “Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”

The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision.

Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but courts are sometimes reluctant to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon’s action against Anthropic “chills professional debate” about the performance of AI systems.

Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company’s Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government.

Final decisions in the company’s two lawsuits could be months away. The Washington court is scheduled to hear oral arguments on May 19.

The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can’t purposely try to sabotage its AI tools during the transition.

Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.




As the Strait of Hormuz Reopens, Global Shipping Will Take Months to Recover



As the world held its breath on Tuesday night, news of a ceasefire and the potential reopening of the Strait of Hormuz brought a collective sigh of relief. But with shipments stalled in the strait for over a month, the disruption to global shipping will not resolve immediately.

“Traffic through Hormuz dropped by about 95 percent [during this conflict]. As a result, prices surged, and not just for crude oil but also for refined products like jet fuel, diesel, and gas oil,” says Carsten Ladekjær, CEO at Glander International Bunkering, which specializes in supplying fuel and lubricants to the global shipping industry.

The impact has been uneven across regions. Countries heavily dependent on Middle Eastern energy—particularly in Asia—have been most affected. India sources around 55 percent of its energy imports from the region, China about 50 percent, Japan 93 percent, South Korea 67 percent, and Singapore 70 percent, according to Ladekjær.

While the ceasefire signals a possible reopening, key details remain unclear. “Even with a ceasefire, reopening won’t be immediate,” Ladekjær says. “There’s a backlog, with ships waiting to leave, and likely a controlled process for who gets out first. Iran still appears to be managing that.”

Energy markets reacted quickly. Brent crude fell to around $94 from $110 earlier in the week—a drop of roughly 15 percent.

“Refined products like diesel and jet fuel have dropped even more, because markets are forward-looking—they price in expectations,” says Arne Lohmann Rasmussen, chief analyst and head of research at Global Risk Management. “But we’re still well above prewar levels, which were around $60 to $70.”

A System Under Backlog

As of this writing, more than 800 cargo ships and tankers are stuck inside the Persian Gulf, with over 1,000 additional vessels waiting on both sides of the Strait of Hormuz.

Under normal conditions, roughly 150 vessels pass through the strait daily. Experts say clearing the backlog will take time, as ships must be sequenced through, refueled, and repositioned.

Ships began passing through the Strait of Hormuz after the ceasefire announcement. Photograph: Elif Acar/Getty Images

“That’s a logistical nightmare. We don’t yet know what the current capacity will be, especially from a security standpoint,” says Lohmann Rasmussen. “It’s not something that can be solved overnight. There are logistical issues, security issues, and even communication challenges.”

Though the market has already seen a correction, that doesn’t mean prices at the pump or in storage will drop immediately.




Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kids’ Table



Meta on Wednesday announced its first major model since CEO Mark Zuckerberg rebooted the company’s AI efforts last year under a new division called Meta Intelligence Labs. The model, called Muse Spark, is a step toward Zuckerberg’s vision of “personal superintelligence,” the company says, and for now, it will remain closed source.

Zuckerberg said in a social media post that Meta’s goal is to build AI products that “don’t just answer your questions but act as agents that do things for you.” The billionaire added that he is “optimistic that this will support a wave of creativity, entrepreneurship, growth, and health.”

Muse Spark certainly appears to be a major upgrade over Meta’s last big release, Llama 4, which came out in April 2025 and was viewed in the tech industry as a disappointment with middling performance.

Meta is making Muse Spark available via meta.ai and through the Meta AI app. Unlike Llama, Muse Spark is not being released for others to download, though the company says it hopes to open-source future versions. Meta was previously seen as a leader in open source AI and made its Llama models available for researchers, startups, and hobbyists to download and customize.

“Looking ahead, we plan to release increasingly advanced models that push the frontier of intelligence and capabilities, including new open source models,” Zuckerberg wrote.

Meta’s self-reported benchmark scores for Muse Spark suggest the model is better at some tasks than the latest models from OpenAI, Anthropic, Google, and xAI. “Muse Spark is the first step on our scaling ladder,” Meta said in a blog post, referring to its goal of building AI that far outstrips human abilities.

Artificial Analysis, an AI benchmarking company that got early access to Muse Spark, said on social media that the new model is one of the best it has tested. “Muse Spark scores 52 on the Artificial Analysis Intelligence Index, placing it within the top 5 models we have benchmarked,” the company said in its post, citing its own rubric for scoring models that combines various third-party benchmarks.

Meta says the new model is natively multimodal, meaning that it has been trained to handle images, audio, and video as well as text. Muse Spark also features advanced reasoning capabilities, a key feature of the best AI models available today, and it was built from scratch to have strong coding capabilities. Meta described these features as the foundation for building ever-more capable models using modern machine-learning methods.

Meta says that it built Muse Spark to be especially good at providing medical advice. “To improve Muse Spark’s health reasoning capabilities, we collaborated with over 1,000 physicians to curate training data that enables more factual and comprehensive responses,” the company said in its blog post.

Zuckerberg has spent a small fortune overhauling Meta’s artificial intelligence efforts since Llama 4 came out. The tech giant poached top AI engineers from competing firms with compensation packages worth hundreds of millions. It also spent billions to acquire or make major investments in a number of AI startups. Meta recruited Alexandr Wang, the CEO of Scale, an AI training company, to lead its AI efforts after investing $14.3 billion in the company.

Meta also published a document outlining its vision for safely scaling AI models to superhuman levels of performance. The company’s Advanced AI Scaling Framework outlines safety checks that the company will perform as its models become increasingly advanced.


