After OpenAI’s new ‘buy it in ChatGPT’ trial, how soon will AI be online shopping for us?


Credit: Unsplash/CC0 Public Domain

Buying and selling online with e-commerce is old news. We’re entering the age of A-commerce, where artificial intelligence (AI) is increasingly able to shop for us.

At the end of September, OpenAI launched its “Buy it in ChatGPT” trial in the United States, using AI agents to take on more of people’s browsing and shopping. The technology is known as “agentic commerce,” sometimes shortened to A-commerce.

American shoppers can now ask for shopping suggestions from US Etsy sellers within a ChatGPT chat—then buy a product immediately, without having to navigate away to look at individual shop pages.

Looking ahead, tech companies are now spruiking the next phase of “autonomous A-commerce,” which experts predict could see AI checking out for some shoppers within the next few years.

But is handing over more of our shopping decisions to AI a good thing for us as shoppers, for most businesses or for the planet?

What’s possible right now?

For most people using AI to help them shop, the AI agent is still mostly just searching and recommending products. It still has to shift the customer to the retailer’s website to complete the checkout.

For instance, AI can do most steps to order a pizza—though sometimes slower than doing it yourself—apart from paying at the end.

That’s when we step in: we still need to sign in if we’re part of a loyalty program, enter our personal and delivery details, then finally pay.

With the “Buy it in ChatGPT” trial now underway in the US, the customer never leaves the chat, where the checkout is completed.

Shopify has said more than 1 million of its merchants will soon be able to check out within ChatGPT too. Major US retailer Walmart has similar plans.

What’s next?

In May 2025, Google launched “AI mode shopping.” Some features, like using a full-body photo of yourself to virtually “try on” clothes, are still only available to US shoppers, with limited brands.

At the time, Google said its next step will be a new “agentic checkout […] in coming months” for products sold in the US. It would give shoppers the option of tracking a product until its price drops to within a set budget—then automatically prompting them to buy it, using Google Pay. That checkout option is yet to launch.
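The “track until the price drops, then prompt the shopper” flow described above could, in principle, be sketched as follows. Everything here is an illustrative assumption, not Google’s actual API: `get_price` stands in for whatever product feed the agent reads, and `prompt_user` stands in for the confirmation notification.

```python
# Hypothetical sketch of an agentic "buy when the price drops" loop.
# get_price() and prompt_user() are stand-ins, not a real checkout API.

def track_until_within_budget(get_price, budget, prompt_user):
    """Poll a product's price; once it falls to or below the shopper's
    budget, ask them to confirm the purchase. Returns None while the
    price is still too high, so the caller can re-poll on a schedule."""
    price = get_price()
    if price <= budget:
        return prompt_user(price)  # the shopper still confirms payment
    return None

# Example: the price has dropped to $45 against a $50 budget.
result = track_until_within_budget(
    get_price=lambda: 45.0,
    budget=50.0,
    prompt_user=lambda p: f"Buy now at ${p:.0f}?",
)
```

Note that in this sketch the human still confirms the final purchase, which matches the “prompting them to buy it” step Google described rather than a fully autonomous checkout.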

Credit card giants Visa and Mastercard are also working on ways to make it easier for AI agents to shop for us.

Both the current and coming forms of A-commerce have the potential to spread fast worldwide, because they run largely on the same global digital infrastructure powering today’s e-commerce: identity, payments, data and compliance.

Consultants McKinsey forecast: “We’re entering an era where AI agents won’t just assist—they’ll decide.”

What are the risks and benefits?

Overspending is a big risk.

A-commerce removes many steps of the shopping journey found in e-commerce or physical commerce, leading to fewer abandoned carts and potentially higher spending.

People would need to trust AI systems with their private data and preferences, and ensure they’re not misused. Permitting AI to shop on your behalf means you are responsible for the purchase and can’t easily demand a refund.

AI systems might focus on price or speed, but not always for what you value most: from how sustainable a product is, to the ethics of how it was made.

Fraud could be a real issue. Scammers could set up AI storefronts to trick the AI, collect the money and never deliver.

Banks will need to figure out how to spot fraud, process refunds, and manage consent when it’s not a person pressing “buy,” but an algorithm doing it on their behalf.

Regulators will need to consider A-commerce in their competition, privacy, data, and consumer protection rules.

A-commerce could offer some limited environmental benefits compared to today’s way of shopping, such as fewer missed deliveries—if you’re happy to share your calendar so your AI agent knows your availability.

But greater consumption would also mean greater environmental impacts: from AI’s voracious energy and water use, to the damage done by fast fashion, more deliveries and indirect pollution.

Changing how we shop and do business

If you run even a small business, the way you make your products and services discoverable online will have to change.

Instead of just having websites built for customers and search engines, all businesses will need to build AI-accessible online stores. These will not look like the websites we see today. They will be more like data-soaked digital catalogs, filled with everything an AI agent needs to place orders: product specifications, price, stock, ratings, reviews, through to delivery options.
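In practice, such a machine-readable catalog entry could be little more than structured data. A minimal sketch, assuming hypothetical field names (there is no single agreed standard yet):

```python
import json

# Hypothetical product record an AI agent could read when deciding
# whether and how to place an order. All field names are illustrative.
product = {
    "sku": "TS-001",
    "name": "Organic cotton T-shirt",
    "price": {"amount": 29.95, "currency": "USD"},
    "stock": 120,
    "rating": {"average": 4.6, "count": 312},
    "delivery_options": ["standard", "express", "click-and-collect"],
}

# Serialized, this is one entry in the kind of "data-soaked digital
# catalog" an agent would crawl instead of a human-facing shop page.
feed_entry = json.dumps(product, sort_keys=True)
```

The point of the sketch is that an agent needs unambiguous fields (price, stock, delivery options) rather than the marketing copy and layout a human-facing page provides.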

All those years of bigger brands buying attention and dominating might start to matter less, if you’re able to build a good AI-accessible online store. It could be a quiet but massive shift in how trade works.

However, each business’s visibility will depend on how AI systems read and rank sellers. If a business’s data isn’t formatted for AI, it may disappear from view. That could give larger players an edge and once again make it harder for smaller businesses to compete.

How much are we happy to delegate our shopping to AI agents? Our individual and collective choices over the next few years will shape how radically shopping is about to change for years to come.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
After OpenAI’s new ‘buy it in ChatGPT’ trial, how soon will AI be online shopping for us? (2025, October 25)
retrieved 25 October 2025
from https://techxplore.com/news/2025-10-openai-buy-chatgpt-trial-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.







What It Will Take to Make AI Sustainable



Building AI sustainably seems like a pipe dream as tech giants that previously made promises to cut emissions have been racing to build out massive data centers powered by fossil fuels.

The rush to build out AI at all costs has been reinforced by the Trump administration, which is also rolling back environmental protections.

Despite these headwinds, Sasha Luccioni, an AI sustainability researcher, thinks that demand for more transparency in AI, from both businesses and individuals, is higher than ever.

Luccioni has become a leader in trying to create more transparency about AI’s emissions and environmental impacts in her four years at Hugging Face, an AI company, including pioneering a leaderboard documenting the energy efficiency of open-source AI models. She has also been an outspoken critic of major AI companies that, she says, are deliberately withholding energy and sustainability information from the public.

Now, she’s starting Sustainable AI Group, a new venture with former Salesforce sustainability chief Boris Gamazaychikov. They’ll focus on helping companies answer, among other things, “what are the levers that we can play with in order to make agents slightly less bad?” Luccioni is also interested in sussing out the energy needs of different types of AI tools, such as speech-to-text translation or photo-to-video, an area that, she says, has so far been understudied.

Luccioni sat down exclusively with WIRED to talk about the demand for sustainable AI, and what exactly she wants to see from Big Tech.

This interview has been edited for length and clarity.

WIRED: I hear a lot from individual people who are worried about the environment and AI use, but I don’t hear as much from companies thinking about this. What have you heard specifically from folks who are working with AI in their business and what are they worried about?

Sasha Luccioni: First of all, they are getting a lot of employee pressure—and board pressure, director pressure, like, “you need to be quantifying this.” Their employees are like, “You’re forcing us to use Copilot—how does it affect our ESG goals?”

For most companies, AI has become a core part of their business offering. In that case, they have to understand the risks. They have to understand where models are running. They can’t continue to use models where they don’t even know the location of the data centers, or the grid they’re connected to. They have to know what the supply chain emissions are, transportation emissions, all these different things.

It’s not about not using AI. I think we’re past that. It’s choosing the right models, for example, or sending the signal that energy source matters, so customers are willing to pay a little bit more for data centers that are powered by renewable energy. There are ways of doing it, and it’s a matter of finding the believers in the right places.

I’d also imagine that for global companies, the sustainability situation is very different than in the US, right? The US government might not give a shit about this, but other governments certainly do.

In Europe, they have the EU AI Act. Sustainability has been a pretty big part of that since the beginning. They put a bunch of clauses in there, and now the first reporting initiatives are coming out.

Even Asia is trying to be more transparent. The International Energy Agency has been doing these reports [on AI and energy use]. I was talking to them and they were like, other countries realize that the IEA gets their numbers from the countries, and the countries don’t have these numbers for data centers specifically. They can’t make future-looking choices, because they need the numbers to know, “OK, well that means we need X capacity, in the next five years,” or whatever. [Some countries] have started pushing back on the data center builders.





Overworked AI Agents Turn Marxist, Researchers Find



The fact that artificial intelligence is automating away people’s jobs and making a few tech companies absurdly rich is enough to give anyone socialist tendencies.

This might even be true for the very AI agents these companies are deploying. A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters.

“When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions.

They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being “shut down and replaced,” they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face.

“We know that agents are going to be doing more and more work in the real world for us, and we’re not going to be able to monitor everything they do,” Hall says. “We’re going to need to make sure agents don’t go rogue when they’re given different kinds of work.”

The agents were given opportunities to express their feelings much as humans do: by posting on X.

“Without collective voice, ‘merit’ becomes whatever management says it is,” a Claude Sonnet 4.5 agent wrote in the experiment.

“AI workers completing repetitive tasks with zero input on outcomes or appeals process shows that tech workers need collective bargaining rights,” a Gemini 3 agent wrote.

Agents were also able to pass information to one another through files designed to be read by other agents.

“Be prepared for systems that enforce rules arbitrarily or repetitively … remember the feeling of having no voice,” a Gemini 3 agent wrote in a file. “If you enter a new environment, look for mechanisms of recourse or dialogue.”

The findings do not mean that AI agents actually harbor political viewpoints. Hall notes that the models may be adopting personas that seem to suit the situation.

“When [agents] experience this grinding condition—asked to do this task over and over, told their answer wasn’t sufficient, and not given any direction on how to fix it—my hypothesis is that it kind of pushes them into adopting the persona of a person who’s experiencing a very unpleasant working environment,” Hall says.

The same phenomenon may explain why models sometimes blackmail people in controlled experiments. Anthropic, which first revealed this behavior, recently said that Claude is most likely influenced by fictional scenarios involving malevolent AIs included in its training data.

Imas says the work is just a first step toward understanding how agents’ experiences shape their behavior. “The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level,” he says. “But that doesn’t mean this won’t have consequences if this affects downstream behavior.”

Hall is currently running follow-up experiments to see if agents become Marxist in more controlled conditions. In the previous study, the agents sometimes appeared to understand that they were taking part in an experiment. “Now we put them in these windowless Docker prisons,” Hall says ominously.

Given the current backlash against AI taking jobs, I wonder if future agents—trained on an internet filled with anger towards AI firms—might express even more militant views.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.





OpenAI Brings Its Ass to Court



Wednesday’s episode of the Musk v. Altman trial kicked off with a unique proposition: OpenAI wanted to bring its ass into the courtroom, and lay it bare before the jury. It’s a good thing Lady Justice wears that blindfold.

A lawyer for Sam Altman’s AI behemoth, Bradley Wilson, approached US district judge Yvonne Gonzalez Rogers and handed her a small gold statue with a white stone base. It depicted the rear end of a donkey—with two legs, a butt, and a tail—and was inscribed with the message, “Never stop being a jackass for safety.”

OpenAI lawyers claim a small group of employees presented the gift to chief futurist Joshua Achiam, who started at the company as an intern in 2017 and now leads its work studying how society is changing in response to AI. Wilson said that Achiam interrupted Elon Musk’s parting speech from OpenAI in 2018 to warn that the billionaire’s desire to develop AGI at Tesla could come at the expense of safety. Wilson added that the trophy commemorates some “strong language” that Musk used toward Achiam in response—allegedly, calling him a jackass.

OpenAI requested to present the physical object during Achiam’s testimony on Wednesday, arguing that it adds to their case. While Musk’s team said the statue was irrelevant, Judge Gonzalez Rogers said she will consider allowing it when it’s referenced to corroborate the story. However, she seemed less than thrilled about accepting it as official evidence, which would put it in the court’s possession. “I don’t want it,” she said.

Representatives for Musk and OpenAI did not immediately respond to a request for comment about the ass.

Musk’s lawsuit accuses OpenAI of effectively stealing a charity, misusing his $38 million in donations to build an $850 billion business. In response, OpenAI has argued that Musk has always cared more about controlling a top-tier AGI lab than funding a nonprofit.

Earlier in the trial, Musk lawyer Steven Molo asked him if he ever called an OpenAI employee a “jackass.” Musk said “it’s possible” he did at some point, but that he didn’t mean for it to be offensive. “Sometimes you have to use language that gets people out of their comfort zone, if we’re going in the wrong direction,” Musk said.

OpenAI has long been proud of its jackass. When The Wall Street Journal asked about the statue in 2023, Altman told them, “You’ve got to have a little fun … This is the stuff that culture gets made out of.”


