Google avoids being dismantled after US court battle—and it’s down to the rise of AI



Credit: Unsplash/CC0 Public Domain

A year ago, Google faced the prospect of being dismantled. Today, artificial intelligence (AI) and a new court judgment have helped it avoid this fate. Part of the reason is that AI poses a grave threat to Google’s advertising revenues.

“Google will not be required to divest Chrome; nor will the court include a contingent divestiture of the Android operating system in the final judgment,” according to the decision.

Google must share certain data with “qualified competitors” as deemed by the court. This will include parts of its search index, Google’s inventory of web content. Judge Mehta will allow Google to continue paying companies like Apple and Samsung to distribute its search engine on devices and browsers. But he will bar Google from maintaining exclusive contracts.

The history of this decision goes back to a 2024 ruling by federal judge Amit Mehta, which found that Google maintained a monopoly in the online search market, notably by paying billions to companies including Apple and Samsung to set Google as the default search engine on their devices.

Almost a year later, the same US judge issued his final ruling, and the tone could not be more different. Google will not be broken up. There will be no choice screen on new phones.

The nature of the search engine market, where more users generate more data, and more data improves search quality, made it impossible for competitors to challenge Google, the court found in 2024.

The 2024 ruling itself was controversial. While high-quality data enables a dominant firm to extract more profit from consumers, it also allows it to provide a better service. Decades of research in economics have shown that determining which effect dominates is not straightforward.

At the time, the US Department of Justice deemed the issue so serious that it considered breaking up Google as the only viable solution. For instance, it suggested forcing the company to sell its web browser, Google Chrome.

The government also proposed forcing device manufacturers to offer users a choice of search engines during setup, and compelling Google to share most of its data on user behavior and ad bidding, where advertisers compete in auctions to have their ads shown to users for a specific search query or audience. These so-called “remedies,” measures Google would be required to implement to end its monopoly, aimed to restore competition.

Limited sharing

So what has changed in a year to so radically alter the perception of Google’s market dominance? The main answer is AI, and specifically large language models (LLMs) such as ChatGPT, Claude, and Google’s own Gemini. As users increasingly turn to LLMs for web searches, Google has responded by placing AI-generated summaries at the top of its search results.

The way people navigate the internet is quickly evolving, with one trend reshaping the business models of online companies: the zero-click search. According to a Bain & Company survey, consumers now default to accepting AI-generated answers without further interaction. The data is striking: 80% of users report being satisfied with AI responses for at least 40% of their searches, often stopping at the summary page.

Threat to ad revenue

This AI-driven shift in consumer behavior threatens not only Google’s business model but also that of most internet-based companies. Advertising accounts for roughly 80% of Google’s revenue, earned by charging companies for prominent placement in search results and by leveraging its vast amount of user data to sell ad space across the web. If users stop clicking links, this revenue stream evaporates.

More importantly for this ruling, the market Google once monopolized may no longer be the relevant one. Today, Google’s primary competition in search comes not from Microsoft’s Bing but from AI models such as ChatGPT, Claude, and Perplexity. In the global race for AI dominance, the outcome is far from certain.

From an antitrust standpoint, there is little justification for penalizing Google now or forcing it to cede advantages to competitors. What would be the benefit for consumers of forcing Google to accept the £24.6 billion offer from the Jeff Bezos-backed Perplexity AI to buy the Chrome browser?

In essence, the judge acknowledges that Google monopolized the search engine market for a decade but concludes that the issue may resolve itself in the years ahead.

This situation echoes the first major tech monopolization case: Internet Explorer. For years, European and US regulators battled Microsoft over the dominance of its web browser, which was bundled with the then-dominant Windows 95 operating system.

By the time all appeals were exhausted, however, the monopoly had vanished. Internet Explorer was partly a victim of the rise of smartphones, which did not rely on Windows. The new king in town was a newcomer: a certain Google Chrome.

How you view the economic and political power of tech giants will shape which lesson you draw from this story. An optimistic view I suggested (with the economist Jana Friedrichsen) is that winner-takes-all markets can intensify competition through innovation. In such markets, incremental investment is not enough; to challenge Google, a competitor must offer a vastly superior product to capture the entire market.

Precisely because they ruthlessly defend their monopoly positions, tech giants show competitors that the potential gains from radical innovations are massive. The pessimistic view, however, is that years of dominance have left these firms largely unaccountable, which could embolden them in the future.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Google avoids being dismantled after US court battle—and it’s down to the rise of AI (2025, September 6)
retrieved 6 September 2025
from https://techxplore.com/news/2025-09-google-dismantled-court-ai.html


‘Orbs,’ ‘Saucers,’ and ‘Flashes’ on the Moon: Pentagon Drops New UFO Files



Trump first teased the release in February in a Truth Social post. The Pentagon coordinated the release in partnership with the White House, Director of National Intelligence Tulsi Gabbard, the Energy Department, NASA, and the FBI. Many of the files in this new drop contain documents that are already publicly available. However, some versions of these known documents in the new files contain more pages, or fewer redactions, than previously released versions.

More than 60 percent of Americans believe that the government is concealing information about UAP, according to YouGov, while 40 percent think UAP are likely alien in origin, according to Gallup. Congress has held hearings into whether there’s been a decades-long program to recover “non-human” technologies, yet evidence remains elusive.

Courtesy of the US Department of Defense

“If it’s just more blobby photos or redacted documents that don’t have any details in them, it’s more of the same,” Adam Frank, an astrophysicist at the University of Rochester who studies the search for alien life, says of the new files. “What we need are actual scientific results from the investigations that should have been done if the most extraordinary claims being made are true.”

The document drop follows a week of high-profile discussions of aliens, including Stephen Colbert’s interview with former President Barack Obama, released on Wednesday. Obama cast doubt on government cover-ups about aliens by joking that “some guy guarding the installation would have taken a selfie with the alien and sent it to his girlfriend.”


Members of the Artemis II crew also second-guessed the idea of a vast government-wide conspiracy to hide the discovery of extraterrestrial life in a discussion with The Daily this week.

“Do you realize that if we found alien life out there, and we came back and reported on it, NASA would never have a budget issue for the rest of eternity?” said Reid Wiseman, the commander of Artemis II. “So trust me.”

Victor Glover, the astronaut who piloted the mission, added: “Why would we hide that from you?”




Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’



Philosopher Nick Bostrom recently posted a paper in which he postulates that a small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of “its universal death sentence.” That upbeat gamble is quite a leap from his previous dark musings on AI, which made him a godfather of the doomers. His 2014 book Superintelligence was an early examination of AI’s existential risk. One memorable thought experiment: an AI tasked with making paper clips winds up destroying humanity because all those resource-hungry people are an impediment to paper clip production. His more recent book, Deep Utopia, reflects a shift in his focus. Bostrom, who led Oxford’s Future of Humanity Institute until its closure in 2024, dwells on the “solved world” that comes if we get AI right.

STEVEN LEVY: Deep Utopia is more optimistic than your previous book. What changed for you?

NICK BOSTROM: I call myself a fretful optimist. I am very excited about the potential for radically improving human life and unlocking possibilities for our civilization. That’s consistent with the real possibility of things going wrong.

You wrote a paper with a striking argument: Since we’re all going to die anyway, the worst that can happen with AI is that we die sooner. But if AI works out, it might extend our lives, maybe indefinitely.

That paper explicitly looks at only one aspect of this. In any given academic paper, you can’t address life, the universe, and the meaning of everything. So let’s just look at this little issue and try to nail that down.

That isn’t a little issue.

I guess I’ve been irked by some of the arguments made by doomers who say that if you build AI, you’re going to kill me and my children, and how dare you. Like the recent book If Anyone Builds It, Everyone Dies. Even more probable is that if nobody builds it, everyone dies! That’s been the experience for the last several hundred thousand years.

But in the doomer scenario everybody dies and there’s no more people being born. Big difference.

I have obviously been very concerned with that. But in this paper, I’m looking at a different question, which is, what would be best for the currently existing human population like you and me and our families and the people in Bangladesh? It does seem like our life expectancy would go up if we develop AI, even if it is quite risky.

In Deep Utopia you speculate that AI could create incredible abundance, so much that humanity might have a huge problem finding purpose. I live in the United States. We’re a very rich country, but our government, ostensibly with the support of the people, has policies that deny services to the poor and distribute rewards to the rich. I think that even if AI were able to provide abundance for everyone, we would not supply it to everyone.

You might be right. Deep Utopia takes as its starting point the postulation that everything goes extremely well. If we do a reasonably good job on governance, everybody gets a share. There is quite a deep philosophical question of what a good human life would look like under these ideal circumstances.

The meaning of life is something you hear a lot about in Woody Allen movies and maybe in the philosophy community. I’m more worried about the wherewithal to support oneself and get a stake in this abundance.

The book is not only about meaning. That’s one out of a bunch of different values that it considers. This could be a wonderful emancipation from the drudgery that humans have been subjected to. If you have to give up, say, half of your waking hours as an adult just to make ends meet, doing some work you don’t enjoy and that you don’t believe in, that’s a sad condition. Society is so used to it that we’ve invented all kinds of rationalizations around it. It’s like a partial form of slavery.




There’s a Long-Shot Proposal to Protect California Workers From AI



Billionaire California gubernatorial candidate Tom Steyer is rolling out a new proposal that would guarantee jobs with benefits for workers displaced by artificial intelligence. He’s the first state-wide candidate to make such a pledge.

The plan, which builds on a broader AI policy framework Steyer released in March, promises to make California “the first major economy in the world” to ensure “good-paying” jobs to workers impacted by AI. To do so, Steyer tells WIRED he plans to build on a previous proposal to introduce a “token tax,” which would charge big tech companies “a fraction of a cent for every unit of data processed” for AI. The funding generated by that tax would go to what Steyer has called the Golden State Sovereign Wealth Fund, with some of that money earmarked for jobs building housing, expanding health care, and modernizing California’s energy infrastructure.
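The proposal specifies no actual rate or volumes, but the mechanism is simple per-unit arithmetic. A minimal sketch, with entirely hypothetical figures (the rate and token count below are illustrative assumptions, not numbers from the campaign):

```python
def token_tax_revenue(tokens_processed: int, rate_per_token: float) -> float:
    """Tax owed under a flat per-token levy: units processed times the rate."""
    return tokens_processed * rate_per_token

# Hypothetical example: a provider processes 1 trillion tokens in a year,
# taxed at a hundredth of a cent ($0.0001) per token -- "a fraction of a cent."
annual_tokens = 1_000_000_000_000
rate = 0.0001  # dollars per token (assumed for illustration)
print(f"${token_tax_revenue(annual_tokens, rate):,.0f}")  # $100,000,000
```

Even at such tiny per-unit rates, the scale of AI inference is what would make the fund meaningful; the real policy question is where the rate is set and which "units of data" count.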

“The aim of the initiative will be to strengthen the foundation of the state’s economy, invest in our communities, and create beautiful, vibrant public spaces,” states a campaign memo viewed by WIRED. “To support these efforts, Tom will also invest heavily in training and apprenticeship programs across the state.”

The new plan also intends to expand unemployment insurance and establish a new agency, the AI Worker Protection Administration, made up of union leaders, academics, and technologists, that would adopt rules to protect workers’ rights, the memo says.

“People all over this state are terrified that AI is going to hollow out this whole economy and they’re going to lose their jobs. Young people are worried they’ll never get a job,” Steyer tells WIRED. “We believe this can be an amazing transformational technology in many ways, but we’re not in the business of leaving people in California behind.”

Steyer’s job guarantee comes as lawmakers at the state and federal levels—and even some AI executives—scramble to address the ramifications of widespread AI adoption across the US workforce. In New Jersey, state senator Troy Singleton recently introduced a bill that would require companies that replace workers with AI to contribute to a fund that would pay to retrain those workers. In Congress, there are a handful of proposals for grants and tax credits for companies to provide AI training to existing employees.

Dario Amodei, CEO of Anthropic, has previously suggested the kind of token tax now being proposed by Steyer. “Obviously, that’s not in my economic interest,” Amodei told Axios last year. “But I think that would be a reasonable solution to the problem.” In April, OpenAI proposed a public wealth fund similar to the one Steyer has rolled out.

Steyer’s announcement comes days after Democratic primary opponent Xavier Becerra—former Health and Human Services secretary under President Joe Biden—offered his own AI plan. In that proposal, Becerra calls for “workforce investment and transition support” but doesn’t provide a specific funding mechanism.

“Displacement without support is abandonment,” Becerra said in a Monday memo outlining his plan. “I will work with the Legislature, the California public education system and industry partners to build accessible, stackable workforce programs that prepare Californians for the AI economy and support workers navigating role changes.”

Over the past few months, the White House has threatened to go after states that choose to regulate AI. In December, President Donald Trump signed an executive order that could revoke federal broadband funding from states that approve “onerous” AI laws. This is happening in local races as well: In New York, a super PAC backed by a number of Silicon Valley powerhouses, including OpenAI cofounder Greg Brockman, has targeted Alex Bores, a Manhattan congressional candidate who has made AI regulation the centerpiece of his campaign.

“Not regulating AI doesn’t seem remotely reasonable,” Steyer says. “But if California wants to lead, we’ve got to have a vision for the future that includes something that is not just about letting entrepreneurs get rich at the expense of everybody else.”


