Tech

Apple asks EU to scrap landmark digital competition law

Apple asked the European Union to scrap its landmark digital competition law on Thursday, arguing that it poses security risks and creates a “worse experience” for consumers.

The US tech giant and the EU have repeatedly locked horns over the bloc’s Digital Markets Act (DMA), which Brussels says seeks to make the digital sector in the 27-nation bloc fairer and more open.

“The DMA should be repealed while a more appropriate fit for purpose legislative instrument is put in place,” Apple said in a formal submission to the European Commission as part of a consultation on the law.

The latest clash came as President Donald Trump sought to pressure the EU over decisions and laws affecting US Big Tech—with key industry figures including Apple chief Tim Cook moving closer to the White House since Trump’s return to power.

“It’s become clear that the DMA is leading to a worse experience for Apple users in the EU,” the tech giant said in a blog post accompanying its submission. “It’s exposing them to new risks, and disrupting the simple, seamless way their Apple products work together.”

Pushing for wholesale reform of the law if it is not repealed, Apple suggested enforcement “should be undertaken by an independent European agency” rather than the commission, the EU’s executive arm and digital watchdog.

The DMA challenges Apple’s closed ecosystem, but Brussels argues that it is necessary to do so to level the playing field for Apple’s rivals and avoid unfair market domination.

The law tells Big Tech firms what they can and cannot do on their platforms. For example, companies must offer choice screens for web browsers and search engines to give users more options.

Violations of the DMA can lead to hefty fines.

Brussels in April slapped a 500-million-euro ($590-million) fine on Apple under the DMA, which the company has appealed.

Delays for EU users

Apple says dangers are posed when Europeans can download app marketplaces that rival its App Store.

The giant also cites an increasing number of complaints from users about DMA-related changes but has not provided exact figures.

It argued in its 25-page submission that the EU’s law had forced it to delay new features in the bloc.

For example, Apple has not yet rolled out “live translation”—which lets consumers hear speech translated into a language of their choice through their AirPods.

The technology was launched this month in the United States but Apple says it must undertake further engineering work to ensure users’ privacy in the EU.

Under the DMA, companies including Apple must make sure their products can work seamlessly with third-party devices such as earphones.

The commission said it was “normal” companies sometimes needed more time to make sure their products were in line with the new law and that it was helping them comply.

DMA enforcement began in March 2024 and the EU’s consultation on the first review of the law ended just before midnight on Wednesday.

Independently from the digital rules, Apple has faced the heat under different EU competition rules. Brussels slapped it with a 1.8-billion-euro fine in March 2024.

© 2025 AFP

Citation:
Apple asks EU to scrap landmark digital competition law (2025, September 25)
retrieved 25 September 2025
from https://techxplore.com/news/2025-09-apple-eu-scrap-landmark-digital.html


Tech

Anthropic’s Claude Takes Control of a Robot Dog

As more robots start showing up in warehouses, offices, and even people’s homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot—in this case, a robot dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings show the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software—and physical objects as well.

“We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”


Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic—even dangerous—as it advances. Today’s models are not smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people leverage LLMs to program robots could help the industry prepare for the idea of “models eventually self-embodying,” referring to the idea that AI may someday operate physical systems.

It is still unclear why an AI model would decide to take control of a robot—let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.

In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group was using Claude’s coding model—the other was writing code without AI assistance. The group using Claude was able to complete some—though not all—tasks faster than the human-only programming group. For example, it was able to get the robot to walk around and find a beach ball, something that the human-only group could not figure out.

Anthropic also studied the collaboration dynamics in both teams by recording and analyzing their interactions. They found that the group without access to Claude exhibited more negative sentiments and confusion. This might be because Claude made it quicker to connect to the robot and coded an easier-to-use interface.


The Go2 robot used in Anthropic’s experiments costs $16,900—relatively cheap, by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot can walk autonomously but generally relies on high-level software commands or a person operating a controller. Go2 is made by Unitree, a company based in Hangzhou, China, whose robots are currently the most popular on the market, according to a recent report by SemiAnalysis.

The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software—turning them into agents rather than just text generators.




Tech

The AI Boom Is Fueling a Need for Speed in Chip Networking



The new era of Silicon Valley runs on networking—and not the kind you find on LinkedIn.

As the tech industry funnels billions into AI data centers, chip makers both big and small are ramping up innovation around the technology that connects chips to other chips, and server racks to other server racks.

Networking technology has been around since the dawn of the computer, critically connecting mainframes so they can share data. In the world of semiconductors, networking plays a part at almost every level of the stack—from the interconnect between transistors on the chip itself, to the external connections made between boxes or racks of chips.

Chip giants like Nvidia, Broadcom, and Marvell already have well-established networking bona fides. But in the AI boom, some companies are seeking new networking approaches that help them speed up the massive amounts of digital information flowing through data centers. This is where deep-tech startups like Lightmatter, Celestial AI, and PsiQuantum, which use optical technology to accelerate high-speed computing, come in.

Optical technology, or photonics, is having a coming-of-age moment. The technology was considered “lame, expensive, and marginally useful” for 25 years, until the AI boom reignited interest in it, according to PsiQuantum cofounder and chief scientific officer Pete Shadbolt. (Shadbolt appeared on a panel last week that WIRED cohosted.)

Some venture capitalists and institutional investors, hoping to catch the next wave of chip innovation or at least find a suitable acquisition target, are funneling billions into startups like these that have found new ways to speed up data throughput. They believe that traditional interconnect technology, which relies on electrons, simply can’t keep pace with the growing need for high-bandwidth AI workloads.

“If you look back historically, networking was really boring to cover, because it was switching packets of bits,” says Ben Bajarin, a longtime tech analyst who serves as CEO of the research firm Creative Strategies. “Now, because of AI, it’s having to move fairly robust workloads, and that’s why you’re seeing innovation around speed.”

Big Chip Energy

Bajarin and others credit Nvidia for being prescient about the importance of networking when it made two key acquisitions in the technology years ago. In 2020, Nvidia spent nearly $7 billion to acquire the Israeli firm Mellanox Technologies, which makes high-speed networking solutions for servers and data centers. Shortly after, Nvidia purchased Cumulus Networks to power its Linux-based software system for computer networking. This was a turning point for Nvidia, which rightly wagered that the GPU and its parallel-computing capabilities would become much more powerful when clustered with other GPUs and deployed in data centers.

While Nvidia dominates in vertically-integrated GPU stacks, Broadcom has become a key player in custom chip accelerators and high-speed networking technology. The $1.7 trillion company works closely with Google, Meta, and more recently, OpenAI, on chips for data centers. It’s also at the forefront of silicon photonics. And last month, Reuters reported that Broadcom is readying a new networking chip called Thor Ultra, designed to provide a “critical link between an AI system and the rest of the data center.”

On its earnings call last week, semiconductor design giant ARM announced plans to acquire the networking company DreamBig for $265 million. DreamBig makes AI chiplets—small, modular circuits designed to be packaged together in larger chip systems—in partnership with Samsung. The startup has “interesting intellectual property … which [is] very key for scale-up and scale-out networking,” said ARM CEO Rene Haas on the earnings call. (This means connecting components and sending data up and down a single chip cluster, as well as connecting racks of chips with other racks.)

Light On

Lightmatter CEO Nick Harris has pointed out that the amount of computing power that AI requires now doubles every three months—much faster than Moore’s Law dictates. Computer chips are getting bigger and bigger. “Whenever you’re at the state of the art of the biggest chips you can build, all performance after that comes from linking the chips together,” Harris says.
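The gap Harris describes compounds quickly. A rough sanity check of the arithmetic, assuming the three-month doubling period quoted above and the conventional roughly two-year Moore's Law cadence:

```python
# Compare compute growth under two doubling cadences over the same horizon.
# Figures assumed: AI compute demand doubling every 3 months (quoted above),
# versus the conventional Moore's Law pace of roughly every 24 months.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months`, doubling once per period."""
    return 2 ** (months / doubling_period_months)

horizon = 24  # two years
ai_demand = growth_factor(horizon, 3)    # eight doublings: 2**8 = 256x
moores_law = growth_factor(horizon, 24)  # one doubling: 2x

print(f"AI compute demand over {horizon} months: {ai_demand:.0f}x")
print(f"Moore's Law over {horizon} months:       {moores_law:.0f}x")
```

Over just two years, a three-month doubling cadence implies a 256-fold increase against Moore's Law's twofold, which is why chipmakers look to interconnect rather than transistor density for the remaining performance.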

His company’s approach is cutting-edge and doesn’t rely on traditional networking technology. Lightmatter builds silicon photonics that link chips together. It claims to make the world’s fastest photonic engine for AI chips, essentially a 3D stack of silicon connected by light-based interconnect technology. The startup has raised more than $500 million over the past two years from investors like GV and T. Rowe Price. Last year, its valuation reached $4.4 billion.




Tech

Waymo’s Robotaxis Can Now Use the Highway, Speeding Up Longer Trips



When Google’s self-driving car project began testing in the Bay Area back in 2009, its engineers focused on highways by sending its sensor-laden vehicles cruising down Interstate 280, which runs the length of Silicon Valley’s peninsula.

More than 15 years later, the cars are back on the freeway—this time without drivers. On Tuesday, the project, now an Alphabet subsidiary we all know as Waymo, announced that its robotaxi service would now drive on freeways in the San Francisco Bay Area, Los Angeles, and Phoenix.

The new service marks another technical leap for Waymo, whose robotaxis currently serve five US metros: Atlanta, Austin, Los Angeles, Phoenix, and the San Francisco Bay Area. The company says it will launch in several other US and international cities next year, including Dallas, Miami, Nashville, Las Vegas, Detroit, and London.

Waymo also announced Wednesday that it would begin curbside pickup and drop-off service at San Jose Mineta International Airport, allowing passengers to, theoretically, travel autonomously all the way from San Francisco to San Jose—a service area of some 260 square miles. Waymo has been offering its autonomous taxi service on area surface streets since the summer of 2023, but the new freeway service could cut in half the time it takes for a robotaxi to travel from San Francisco to Mountain View, Waymo user experience researcher Naomi Guthrie says.

“Freeway driving is one of those things that’s very easy to learn, but very hard to master,” Waymo co-CEO Dmitri Dolgov told reporters last week. Highways are predictable, with (mostly) clear signs and lane lines, and a limited set of vehicles and players (trucks, cars, motorcycles, trailers) that a vehicle’s software must learn to recognize and predict. But Waymo executives said that, despite a year of employee- and guest-only highway testing, safety emergencies on highways are relatively rare, so the team was unable to collect as much real-world data as it needed to train its vehicles to operate safely there. Complicating the project was the fact that highway crashes, at high speeds, are subject to the laws of physics—and so more likely to maim or kill.

To get ready for highways, Waymo executives say, engineers supplemented real-world driving data and training with data collected on private, closed courses, and data created in simulations. Two onboard computers create system “redundancies,” meaning the vehicles have a backup computer if something goes wrong. The vehicles have been trained to exit highways in emergencies, but will also be able to pull over. Waymo execs also say they have worked, and will continue to work, with law enforcement and first responders, including highway patrols, to create procedures for vehicles and riders stranded on highway shoulders, where hundreds of Americans are killed every year.


