

Google Injects Gemini Into Chrome as AI Browsers Go Mainstream



Google is adding multiple new AI features to Chrome, the most popular browser in the world. The most visible change is a new button in Chrome that launches the Gemini chatbot, but there are also new tools for searching, researching, and answering questions with AI. Google has additional cursor-controlling “agentic” tools in the pipeline for Chrome as well.

Gemini in Chrome uses generative AI to answer questions about content on a page and to synthesize information across multiple open tabs. The feature first rolled out to Google’s paying subscribers in May. It is now available to all desktop users in the US browsing in English, and will arrive via a browser update.

On mobile devices, Android users can already use aspects of Gemini within the Chrome app, and Google is expected to launch an update for iOS users of Chrome in the near future.

When I wrote about web browsers starting to add generative AI tools back in 2023, those browsers were mostly positioned as alternatives to the norm. The software was built by misfits and change-makers who were experimenting with new tools, or hunting for a breakout feature to grow their small user bases. All of that activity was dwarfed by the commanding number of users who preferred Chrome.

Two years later, while Google’s browser remains the market leader, the internet overall is thoroughly steeped in AI tools, many of them also made by Google. Still, today marks the moment the concept of an “AI browser” truly went mainstream, with Gemini woven so closely into the Chrome browser.

Google’s Gemini strategy has long been to weave the assistant into as many of its in-house products as possible, from Gmail to Google Docs. So the decision to AI-ify the Chrome browser for a wider set of users comes as no shock.

Even so, the larger rollout will likely be met with ire by some users who are either exhausted by the onslaught of AI-focused features in 2025 or want to abstain from using generative AI, whether for environmental reasons or because they don’t want their activity to be used to train an algorithm. Users who don’t want to see the Gemini option will be able to click on the Gemini sparkle icon and unpin it from the top right corner of the Chrome browser.

The new button at the top of the browser launches Gemini. Users in the US will see these changes first. (Video: Google)





Brendan Carr Isn’t Going to Stop Until Someone Makes Him



To Genevieve Lakier, a professor of law at the University of Chicago whose research focuses on free speech, Carr’s threats against ABC appear to be “a pretty clear cut case of jawboning.” Jawboning refers to a type of informal coercion where government officials try to pressure private entities into suppressing or changing speech without using any actual formal legal action. Since jawboning is typically done in letters and private meetings, it rarely leaves a paper trail, making it notoriously difficult to challenge in court.

This Kimmel suspension is a little different, Lakier says. During the podcast appearance, Carr explicitly named his target, threatened regulatory action, and within a matter of hours the companies complied.

“The Supreme Court has made clear that that’s unconstitutional in all circumstances,” says Lakier. “You’re just not allowed to do that. There’s no balancing. There’s no justification. Absolutely no, no way may the government do that.”

Even if Carr’s threats amount to unconstitutional jawboning, though, stopping him could still prove difficult. If ABC sued, it would need to prove coercion—and however a suit went, filing one could risk additional regulatory retaliation down the line. If Kimmel were to sue, there’s no promise that he would get anything out of the suit even if he won, says Lakier, making it less likely for him to pursue legal action in the first place.

“There’s not much there for him except to establish that his rights were violated. But there is a lot of benefit for everyone else,” says Lakier. “This has received so much attention that it would be good if there could be, from now on, some mechanism for more oversight from the courts over what Carr is doing.”

Organizations like the FPF have sought novel means of limiting Carr’s power. In July, the FPF submitted a formal disciplinary complaint to the DC Bar’s Office of Disciplinary Counsel arguing that Carr violated the Bar’s ethics rules by misrepresenting the law and suggesting the FCC has the authority to regulate editorial viewpoints. Without formal rulings, companies affected by Carr’s threats are among the only organizations with standing to sue. At the same time, they have proven to be some of the least likely groups to pursue legal action over the last eight months.

In a statement on Thursday, House Democratic leadership wrote that Carr had “disgraced the office he holds by bullying ABC” and called on him to resign. They said they plan to “make sure the American people learn the truth, even if that requires the relentless unleashing of congressional subpoena power,” but did not outline any tangible ways to rein in Carr’s power.

“People need to get creative,” says Stern. “The old playbook is not built for this moment and the law only exists on paper when you’ve got someone like Brendan Carr in charge of enforcing it.”

This vacuum has left Carr free to push as far as he likes, and it has spooked experts over how far this precedent will travel. Established in the 1930s, the FCC was designed to operate as a neutral referee, but years of media consolidation have dramatically limited the number of companies controlling programming across broadcast, cable, and now streaming networks. Spectrum is a limited resource the FCC controls, giving the agency more direct leverage over the broadcast companies that rely on it than it has over cable or streaming services. That concentration makes broadcasters far easier to pressure, benefiting not only the Trump administration and Carr but whoever might come next.

“If political tides turn, I don’t have confidence that the Democrats won’t also use them in an unconstitutional and improper manner,” says Stern. “[The Trump administration is] really setting up this world where every election cycle, assuming we still have elections in this country, the content of broadcast news might drastically shift depending on which political party controls the censorship office.”





OpenAI launches teen-safe ChatGPT with parental controls



by I. Edwards


Teenagers chatting with ChatGPT will soon see a very different version of the tool—one built with stricter ways to keep them safe online, OpenAI announced.

The new safeguards come as regulators increase scrutiny of chatbots and their impact on young people’s mental health.

Under the change, anyone identified as under 18 will automatically be directed to a different version of ChatGPT designed with “age-appropriate” content rules, the company said in a statement.

The teen version blocks sexual content and, in rare cases where a user is in acute distress, may involve law enforcement.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company explained.

OpenAI also plans to roll out parental controls by the end of September. Parents will be able to link accounts, view chat history and even set blackout hours to limit use.

The announcement follows the Federal Trade Commission’s (FTC) investigation into the potential risks of AI chatbots for children and teens.

In April, 16-year-old Adam Raine of California died by suicide; his family has sued OpenAI, claiming ChatGPT played a role in his death, CBS News reported.

While OpenAI says it is prioritizing safety, questions still remain about how the system will verify a user’s age. If the platform cannot confirm a user’s age, it will default to the teen version, the company said.

Other tech giants have announced similar steps. YouTube, for example, has introduced new age-estimation technology that factors in account history and viewing habits, CBS News said.

Parents remain concerned.

A Pew Research Center report released earlier this year found that 44% of parents who worry about teen mental health believe social media has the biggest negative impact.

More information:
HealthyChildren.org has more on how AI chatbots can affect kids.

© 2025 HealthDay. All rights reserved.

Citation:
OpenAI launches teen-safe ChatGPT with parental controls (2025, September 18)
retrieved 18 September 2025
from https://techxplore.com/news/2025-09-openai-teen-safe-chatgpt-parental.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.







Scams and frauds: Here are tactics criminals use on you in the age of AI and cryptocurrencies



Credit: Pixabay/CC0 Public Domain

Scams are nothing new—fraud has existed as long as human greed, but what changes are the tools.

Scammers thrive on exploiting vulnerable, uninformed users, and they adapt to whatever technologies or trends dominate the moment. In 2025, that means AI, cryptocurrencies and stolen personal data are their weapons of choice.

And, as always, the duty, fear and hope of their targets provide openings. Today, duty often means following instructions from bosses or co-workers, who scammers can impersonate. Fear is that a loved one, who scammers can also impersonate, is in danger. And hope is often for an investment scheme or job opportunity to pay off.

AI-powered scams and deepfakes

Artificial intelligence is no longer niche—it’s cheap, accessible and effective. While businesses use AI for advertising and other legitimate tasks, scammers exploit the same tools to mimic reality with disturbing precision.

Criminals are using AI-generated audio or video to impersonate CEOs, managers or even family members in distress. Employees have been tricked into transferring money or leaking sensitive information. Over 105,000 such deepfake attacks were recorded in the U.S. in 2024, costing more than US$200 million in the first quarter of 2025 alone. Victims often cannot distinguish synthetic voices or faces from real ones.

Fraudsters are also using emotional manipulation, making AI-voiced calls or sending convincing AI-written texts posing as relatives or friends in distress. Elderly victims in particular fall prey when they believe a grandchild or other family member is in urgent trouble. The Federal Trade Commission has outlined how scammers use fake emergencies to pose as relatives.

Cryptocurrency scams

Crypto remains the Wild West of finance—fast, unregulated and ripe for exploitation.

Pump-and-dump scammers artificially inflate the price of a cryptocurrency through hype on social media to lure investors with promises of huge returns—the pump—and then sell off their holdings—the dump—leaving victims with worthless tokens.

Pig butchering is a hybrid of romance scams and crypto fraud. Scammers build trust over weeks or months before persuading victims to invest in fake crypto platforms. Once the scammers have extracted enough money from the victim, they vanish.

Scammers also use cryptocurrencies as a means of extracting money from people in impersonation scams and other forms of fraud. For example, scammers direct victims to bitcoin ATMs to deposit large sums of cash and convert it to the untraceable cryptocurrency as payment for fictitious fines.

Phishing, smishing, tech support and jobs

Old scams don’t die; they evolve.

Phishing and smishing have been around for years. Victims are tricked into clicking links in emails or text messages, leading to malware downloads, credential theft or ransomware attacks. AI has made these lures eerily realistic, mimicking corporate tone, grammar and even video content.

Tech support scams often start with pop-ups on computer screens that warn of viruses or identity theft, urging users to call a number. Sometimes they begin with a direct cold call to the victim. Once the victim is on a call with the fake technician, the scammers convince them to grant remote access to their supposedly compromised computers. Once inside, the scammers install malware, steal data, demand payment or all three.

Fake websites and listings are another current type of scam. Fraudulent sites impersonating universities or ticket sellers trick victims into paying for fake admissions, concerts or goods.

One example is when a website for “Southeastern Michigan University” came online and started offering details about admission. There is no such university. Eastern Michigan University filed a complaint that Southeastern Michigan University was copying its website and defrauding unsuspecting victims.

The rise of remote and gig work has opened new fraud avenues.

Victims are offered fake jobs with promises of high pay and flexible hours. In reality, scammers extract “placement fees” or harvest sensitive personal data such as Social Security numbers and bank details, which are later used for identity theft.

How you can protect yourself

Technology has changed, but the basic principles remain the same: Never click on suspicious links or download attachments from unknown senders, and enter personal information only if you are sure that the website is legitimate. Avoid using third-party apps or links. Legitimate businesses have apps or real websites of their own.

Enable two-factor authentication wherever possible. It provides security against stolen passwords. Keep software updated to patch security holes. Most software allows for automatic updates or warns about applying a patch.

Remember that a legitimate business will never demand payment through unusual channels such as gift cards or a money transfer. Such requests are a red flag.

Relationships are a trickier matter. The state of California provides details on how people can avoid being victims of pig butchering.

Technology has supercharged age-old fraud. AI makes deception virtually indistinguishable from reality, crypto enables anonymous theft, and the remote-work era expands opportunities to trick people. The constant: Scammers prey on trust, urgency and ignorance. Awareness and skepticism remain your best defense.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Scams and frauds: Here are tactics criminals use on you in the age of AI and cryptocurrencies (2025, September 18)
retrieved 18 September 2025
from https://techxplore.com/news/2025-09-scams-frauds-tactics-criminals-age.html





