“Increasingly, I’m coming back to running product and working with the vice president of tech on some artificial intelligence (AI) projects and getting very hands-on myself,” says Wolf & Badger CEO and co-founder George Graham.
“It’s intellectually challenging, stimulating and intriguing – and I want to learn more about it. I’m trying to get as much info as I can on what I consider to be the most interesting tech advancement of my professional work lifetime.”
Not the words of a head of engineering, CIO or technology executive, but those of the CEO of the online marketplace, whose business continues to be energised by the opportunity to use AI across multiple operations.
And why not? In January 2026, Wolf & Badger released a performance update to mark 15 years of trading, reporting it had now surpassed $500m in cumulative sales since inception and achieved almost 40 million website visits in 2025 alone, while reinforcing its reputation as an ethical platform by securing B-Corp recertification.
Wolf & Badger partners with independent brands promising strong ethics, and effectively becomes their tech stack and online operations provider. It is the conduit for these brands to achieve the scale that few organisations of their size can achieve alone.
The business achieved annual sales of $100m (£75m) in 2024, with more than 2,000 brand partners now in place, helping the London-headquartered operation grow globally. And ongoing investment in “AI-driven discovery and on-site personalisation” is delivering a measurable impact, with the company talking about £3.2m of directly attributable incremental sales from recent AI initiatives.
“There’s tremendous opportunity to improve the efficiency and discovery on Wolf & Badger by better understanding our shoppers and our brands and the products they sell,” he says.
“There’s lots around AI on image recognition and product tagging to build out better information related to style or what event a product would be suitable for, and using that to surface more relevant products to the user at the right time – all with the end point of making life more exciting and creating inspirational shopping experiences.”
Are we all product managers now?
While using AI to power the online experience is not uncommon in e-commerce today, Graham’s attitude as CEO of the marketplace is. He is a chief executive getting his hands dirty with the tech – a rarity in retail.
“I have personally spent many hundreds of hours over the past three months getting my head around AI and the future of commerce – with agentic commerce in mind,” he explains.
“Claude Code has become my go-to app. I have built a fairly bespoke AI agent ‘chief of staff’ that is connected to my tools via MCPs [Model Context Protocol servers] or APIs [application programming interfaces], with a bunch of bespoke skills and scripts that I have ended up building into that.”
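Graham does not share the internals of his agent, but the pattern he describes – an assistant with a registry of named “skills”, each wired to an external tool, and a dispatcher that routes requests to them – can be sketched roughly as below. Everything here (the class, the skill names, the stubbed logic) is hypothetical and illustrative, not Wolf & Badger’s actual setup:

```python
# Hypothetical sketch of an agent "chief of staff" with pluggable skills.
# In a real deployment each skill would call out to a tool via an MCP
# server or API, and an LLM would choose which skill to invoke; here the
# skills are stubbed so only the registration/dispatch pattern is shown.

from typing import Callable, Dict


class ChiefOfStaff:
    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[str], str]] = {}

    def skill(self, name: str):
        """Decorator that registers a function as a named skill."""
        def register(fn: Callable[[str], str]) -> Callable[[str], str]:
            self._skills[name] = fn
            return fn
        return register

    def run(self, skill_name: str, request: str) -> str:
        """Dispatch a request to the named skill."""
        if skill_name not in self._skills:
            return f"No skill named {skill_name!r}"
        return self._skills[skill_name](request)


agent = ChiefOfStaff()


@agent.skill("summarise_inbox")
def summarise_inbox(request: str) -> str:
    # Stub: a real skill would fetch mail through an API and summarise it.
    return f"Summary for: {request}"


print(agent.run("summarise_inbox", "this morning's brand enquiries"))
```

The appeal of this shape is that new skills bolt on without touching the dispatch loop – consistent with Graham’s description of “a bunch of bespoke skills and scripts” accumulating over time.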
Graham says the collective memory stack is getting more powerful by the day, and using it has improved his own working practices.
“I feel twice as efficient as I was six months ago. I have taken that time and – in the short term – continued reinvesting it in understanding AI.”
The AI assistant Graham has developed is being made accessible via Stack internally, and the wider team is getting set up on Claude Code themselves with access to their own version.
The CEO acknowledges he isn’t an engineer or coder, but as a teenager, he would make games in Basic (Beginner’s All-purpose Symbolic Instruction Code) and design websites with HTML. After studying business at university, he joined PwC as a strategy consultant. By the age of 23, he was starting Wolf & Badger and “had to figure out how to build a marketplace as there wasn’t really marketplace software out there”.
When development at Wolf & Badger was brought in-house – today, it has a vice-president of technology, around a dozen people in engineering and others in product management and design – Graham continued to play a part in building out features to support brand partners and customers.
“I have always found all that fascinating,” he explains.
“Over the years, I stepped away as we brought the experts in – but, increasingly, I’m coming back to running product and working with our vice-president of tech on some of the AI projects and getting very hands-on myself.”
Graham, who founded Wolf & Badger with his brother Henry in 2010, admits he doesn’t fully understand the finer nuances of coding and doesn’t have the experienced engineer’s eye. But with the new tech available, he suggests “anyone can be a product manager or software developer” now.
“I have been able to create prototypes – I have built things that assess brands coming on the platform and help the sustainability team with vetting,” he says.
The software his internal teams are now using is set up on GitHub, and built on an Eversell MSL front-end, Supabase and other apps. “Everything is hooked up in what I think is reasonably robust for internal use,” he adds.
He urges other leaders in retail and wider business not to be afraid, and to experiment with the tools now available. There’s a lot that can be built on just a small monthly tech subscription outlay, he notes.
The wider tech team at Wolf & Badger initially experimented with solutions such as Microsoft Copilot and then Cursor.
“Only in the past few months have our engineers found the quality is at a point where they can lean on it more to start actually writing code. We’re keeping clear of the vibe coding in key sensitive areas, of course, but we can experiment in lots of spaces.”
The tech exploration work Graham has taken on – and there is much more to come, he says – is to “ready ourselves for agentic commerce and make sure we’re ahead of the pack”.
Since the turn of the year, there have already been some noteworthy developments in agentic commerce that further underline it as a future direction of travel for e-commerce and, therefore, something e-commerce and retail leaders must get a better grasp of.
Agentic commerce and UCP
The new year started with JD Sports announcing it is enabling consumers to use AI platforms to search for and purchase products – all in a single click, without leaving the apps.
JD customers in the US can purchase directly through Copilot, and – in due course – this will be followed by the ability to do so via Google Gemini and ChatGPT. JD is leveraging the agentic commerce suite from tech players Commercetools and Stripe.
Jetan Chowk, JD’s chief technology and transformation officer, said the move was about meeting customers “where they are”. It came after OpenAI announced in September that US shoppers could buy from Etsy directly through ChatGPT.
It started with an “instant checkout” to support single-item purchases, but multi-item carts are now on their way to being a reality.
Then, at the January 2026 NRF Big Show in New York, where many from the retail technology community congregate every year, Google launched the Universal Commerce Protocol (UCP), an open standard for agentic commerce that aims to establish a common language for AI agents and systems to operate together across consumer surfaces, businesses and payment providers.
This is a fast-moving space, but UCP was co-developed with prominent retail industry players such as Shopify, Etsy, Wayfair, Target and Walmart, and endorsed by more than 20 others across the ecosystem, including payments companies Adyen, American Express, Mastercard and Visa.
Graham is in close conversations with Stripe and Google, attending their events and regularly tuning into their updates.
“The work Stripe is doing with agentic commerce protocol and standardising the mechanism by which people can shop directly via the agents is super interesting,” he says. “Google and Shopify UCP is a further move towards a standardisation of how this is going to work.”
Graham is confident there will be more consumer discovery conducted on Google’s AI-powered platforms, ChatGPT, Perplexity, and other similar spaces.
“We need to ensure we’re supporting the 2,000 brands we’re working with to appear in the right way on those channels and facilitate the tech that can support one- or zero-click checkout, where an agent has the ability to buy on a consumer’s behalf.”
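What “appearing in the right way” on agentic channels might involve can be pictured with a toy example: a product record enriched with the kind of structured attributes an AI agent could filter on, plus a flag telling the agent whether a programmatic checkout exists. This is purely illustrative – the field names and the check below are hypothetical, not the actual UCP schema or Wolf & Badger’s data model:

```python
# Hypothetical, agent-readable product metadata entry. Illustrative only:
# this is NOT the real Universal Commerce Protocol schema.

product = {
    "id": "wb-12345",
    "brand": "Example Independent Brand",   # hypothetical brand
    "title": "Organic cotton shirt",
    "price": {"amount": 95.00, "currency": "GBP"},
    "attributes": {
        "style": "smart casual",
        "occasion": ["work", "dinner"],     # the event-tagging Graham mentions
        "sustainability": ["organic"],
    },
    "availability": "in_stock",
    "checkout": {"one_click": True},        # agent-completable purchase?
}


def agent_can_buy(item: dict) -> bool:
    """A shopping agent can only complete a purchase if the item is in
    stock and the merchant exposes a one-/zero-click checkout path."""
    return item["availability"] == "in_stock" and item["checkout"].get("one_click", False)


print(agent_can_buy(product))  # True for this example entry
```

The point of the sketch is the division of labour Graham describes: the marketplace maintains this metadata once, on behalf of all 2,000 brands, rather than each small brand building it alone.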
He is confident that a platform such as Wolf & Badger can play a key role in the agentic space: individual brands will typically struggle to build out the right metadata and set up UCP so they are recognised, whether by a human in the loop or by an AI agent.
Graham says: “If we can wrap together the best independent brands and collectively go to a shopping agent to ensure those brands appear in the right places, we’re well placed to capture some of that demand and drive it towards the individual brands we work with, rather than the resulting purchases ending up with the bigger homogenised brands in our space.”
He adds that Wolf & Badger’s presence harks back to the pre-digital days of boutique shopping in-store, but with the right technology investment and focus now, it can deliver this in a “scaled way” online and through its showrooms.
“Our editorial and marketing team still make the creative calls, but we’re able to drive it forward with some of these new bits of tech,” he says, adding that as Wolf & Badger extends its technological nous, it can enable its brands to focus on “the difficult part” of commerce – meaning the design and manufacturing of compelling garments and consumer products.
Rapidly evolving space
As for the immediate future at Wolf & Badger, the US expansion is a key focus – as are ventures across Europe and into the Middle East. An expanded brand partnerships function within the business is expected to support the onboarding of new designers from around the world.
But AI continues to be an area of significant exploration, with Graham confident that his experimentation and use of cost-effective tools are improving how the business operates.
“It’s a rapidly evolving space – everything is changing these days,” he says, adding that it’s getting increasingly difficult to understand what will come next due to the acceleration of technological capability.
“You just have to try to stay ahead,” he says. “We’re repositioning ourselves in making sure we are embracing AI in the way I think any forward-thinking growth company should, and recognising the power it can bring to enable us to do much more for our brands and shoppers.”
Initially, Gorham used his brain-computer interface for single clicks, Oxley says. Then he moved on to multi-clicks and eventually sliding control, which is akin to turning up a volume knob. Now he can move a computer cursor, an example of 2D control—horizontal and vertical movements within a two-dimensional plane.
Over the years, Gorham has gotten to try out different devices using his implant. Zafar Faraz, a field clinical engineer for Synchron, says Gorham directly contributed to the development of Switch Control, a new accessibility feature Apple announced last year that allows brain-computer interface users the ability to control iPhones, iPads, and the Vision Pro with their thoughts.
In a video demonstration shown at an Nvidia conference last year in San Jose, California, Gorham uses his implant to play music from a smart speaker, turn on a fan, adjust his lights, activate an automatic pet feeder, and run a robotic vacuum in his home in Melbourne, Australia.
“Rodney has been pushing the boundaries of what is possible,” Faraz says.
As a field clinical engineer, Faraz visits Gorham in his home twice a week to lead sessions on his brain-computer interface. It’s Faraz’s job to monitor the performance of the device, troubleshoot problems, and also learn the range of things that Gorham can and can’t do with it. Synchron relies on this data to improve the reliability and user-friendliness of its system.
In the years he’s been working with Gorham, the two have done a lot of experimenting to see what’s possible with the implant. Once, Faraz says, he had Gorham using two iPads side by side, switching between playing a game on one and listening to music on the other. Another time, Gorham played a computer game in which he had to grab blocks on a shelf. The game was tied to an actual robotic arm at the University of Melbourne, about six miles from Gorham’s home, that remotely moved real blocks in a lab.
Gorham, who was an IBM software salesman before he was diagnosed with ALS in 2016, has relished being such a key part of the development of the technology, his wife Caroline says.
“It fits Rodney’s set of life skills,” she says. “He spent 30 years in IT, talking to customers, finding out what they needed from their software, and then going back to the techos to actually develop what the customer needed. Now it’s sort of flipped around the other way.” After a session with Faraz, Gorham will often be smiling ear to ear.
Through field visits, the Synchron team realized it needed to change the setup of its system. Currently, a wire cable with a paddle on one end needs to sit on top of the user’s chest. The paddle collects the brain signals that are beamed through the chest and transmits them via the wire to an external unit that translates those signals into commands. In its second generation system, Synchron is removing that wire.
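The chain described above – neural signals collected, relayed to an external unit, and translated into discrete commands – can be pictured, very loosely, with a toy decoder that turns a decoded “intent” signal into switch presses. This is an illustrative abstraction only: real BCI decoding runs trained classifiers over multichannel neural data, not a fixed threshold on a single trace.

```python
# Toy illustration of signal-to-command translation. Hypothetical and
# deliberately simplified: not Synchron's actual decoding algorithm.

def decode_switch_presses(signal, threshold=0.8):
    """Emit one 'press' each time the decoded intent signal crosses the
    threshold from below (a rising edge), so a sustained burst of intent
    counts as a single command rather than many."""
    presses = []
    above = False
    for t, value in enumerate(signal):
        if value >= threshold and not above:
            presses.append(t)   # rising edge -> one switch press
        above = value >= threshold
    return presses


# A quiet baseline with two bursts of intent yields two presses.
print(decode_switch_presses([0.1, 0.2, 0.9, 0.95, 0.3, 0.85, 0.2]))  # [2, 5]
```

The progression Oxley describes – single clicks, then multi-clicks, then sliding control and 2D cursor movement – corresponds to decoding progressively richer signals than this binary example.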
“If you have a wearable component where there’s a delicate communication layer, we learned that that’s a problem,” Oxley says. “With a paralyzed population, you have to depend on someone to come and modify the wearable components and make sure the link is working. That was a huge learning piece for us.”
As my fellow pet parents will know, it’s amazing how quickly even the tiniest of dogs can demolish their toys and treat stash. We love and spoil them nonetheless. When you subscribe to BarkBox, a fresh batch of cleverly themed treats and toys arrives at your doorstep. The costs of pet ownership can stack up quickly, especially if you’re buying your pooch a random gift box that goes well beyond the essentials. That’s why we have BarkBox promo codes and discount options ready to go for you.
Barkbox Promo: Enjoy a Free Toy for a Year at Barkbox
When your monthly BarkBox arrives, it’s like Christmas morning for your dogs. I watch as my two dogs, Rosi and Randy, shake their little Chihuahua mix bodies with barely restrained excitement. They’re never gentle on their toys, but the stimulation that comes from textures and chewing is good for their little brains. With BarkBox you get a steady supply of two unique toys and two bags of all-natural treats every month. If you want to see how your dogs react, this BarkBox coupon is good for new BarkBox subscription customers and adds an additional toy to your box every month for a year.
Save 50% on Your First Barkbox Food Subscription With a Barkbox Coupon Code
Another reason why BarkBox is the best dog subscription box is how easy the company makes it to keep your pantry stocked with your dog’s food. Use this BarkBox coupon to get 50% off your first BarkBox food subscription, so you won’t end up running to the grocery store in the middle of the night when your scooper scrapes across the bottom of an empty kibble bin.
Fly Travel Stress-Free With Your Dog and Get $300 Off BARK Air Flights
If you live in a BARK Air flight hub destination, please know I am insanely jealous of you. It’s no secret that flying is stressful and can be very dangerous for pets, especially if they have to ride in a cargo hold. BARK Air makes your dog the VIP, letting them ride in the cabin with you and get doted on, so things are a lot less scary. This is another perk of having a BarkBox subscription, with the opportunity to save $300 on BARK Air flights.
Support Your Dog’s Dental Health and Get $10 Off With a Barkbox Coupon
Dental health is crucial for dogs, as it can prevent disease not just in their mouths, but their vital organs. Don’t forget to schedule your yearly cleaning with your vet, but in the meantime, use this BarkBox discount code to get $10 off a special BarkBox Dental kit.
Get an Extra Premium Toy in Every BarkBox With the Extra Toy Club
For having such tiny mouths, my dogs can gnaw through toys with surprising speed. If you’re also buried in a pile of shredded fluff and squeakers from disemboweled toys, the Extra Toy Club can help. This subscription includes dog toys for aggressive chewers of all ages, breeds, and sizes, offering extra durable toys meant to last longer. So far, so good at my house. To upgrade to this subscription box, it’s an extra $9 per month.
Get Exclusive BarkBox Discounts: Join the Email List
If you assume that the punchy branding and witty lingo extend to BarkBox’s email subscribers and not just the box subscription, you’d be correct. As a bonus, you can get exclusive BarkBox discount codes when you sign up to receive these emails. And who doesn’t love a furry face and a reminder of their pet in between work subject lines and bill payment reminders?
The transnational nature of artificial intelligence (AI) means international regulation is essential to tackle the safety issues associated with advanced AI, according to tech chiefs.
In the final evidence session of the Joint Committee on Human Rights inquiry into human rights and the regulation of AI on 25 February, MPs and Lords pressed the AI minister and senior executives from Meta and Microsoft on the adequacy of current safeguards in protecting fundamental rights.
Lawmakers questioned the panel on misinformation, accountability, child safety, existential risk and Britain’s AI sovereignty, probing whether current safeguards are strong enough to protect democratic rights and freedoms as AI systems become embedded across society.
The session came just weeks after the committee warned that the UK’s existing regulatory framework is struggling to keep pace with AI harms – with several regulators telling MPs that a lack of resources, rather than statutory powers, is the greatest hurdle to effective oversight.
Ginny Badanes, general manager of tech for society at Microsoft, and Rob Sherman, deputy chief privacy officer for policy at Meta, welcomed greater harmonisation in regulatory standards at a global level.
Speaking on AI governance, Badanes told MPs the current issue is not a lack of activity, but the bigger challenge of fragmentation.
“I worry at times when we have this variety of approaches that we’re not actually addressing the broader safety or human rights risks that are at the centre of what everyone is trying collectively to solve,” she said.
Transnational by design
Badanes added that “everything about advanced AI is transnational by design – the systems are developed, tested and deployed in a variety of places across borders and within multiple supply chains, and then integrated into products that are used at a global scale”.
She argued that an alignment in international standards could lead to a base layer of agreement, “creating a strong place to get out of fragmented models”.
Sherman mirrored this, noting that Meta operates in most countries worldwide, and that its human rights policy applies globally.
He added that Meta does not build separate AI models for different countries, despite the regional variation in AI governance.
Asked whether the UK’s AI Opportunities Action Plan strikes the right balance between innovation and human rights, both companies were broadly supportive.
Badanes said the UK had made “a sensible start”, building on its “strong foundation of human rights” law and taking a risk-based approach.
Public trust, she argued, is “absolutely critical” to AI adoption. “People will not embrace and use a technology that they do not trust,” said Badanes, adding that strong but proportionate regulation would help secure that trust.
Sherman described the UK’s strategy as “a really thoughtful and sensible approach”, and, in some respects, “a global model”. He also praised the UK’s AI Security Institute as “a global thought leader” in technical AI governance.
Misinformation and democracy
The committee asked if Meta was doing enough to challenge the use of AI by foreign actors on social media, raising concerns about how AI and social media are being used to undermine democratic rights and freedoms.
The committee noted that anonymous posting is increasingly the main way people post on Facebook groups.
Sherman stressed that Facebook is a “real identity platform”, meaning identity is verified using government-issued photo IDs, and that these groups were intended to allow people to share sensitive information without attaching their identity to it. He did not address the platform’s own role in spreading misinformation, but said: “I would encourage people to be thoughtful about the sources of the information that they consume.”
However, Sherman said the company would “certainly never suggest that the work to do that is done”, noting that adversaries “continue to evolve their tactics” and “behave adversarially”.
On the reliability of large language models, executives admitted AI systems can generate false information – so-called “hallucinations”. While models are “designed to tell you the truth”, Sherman conceded they are not 100% accurate.
Badanes added: “I think it’s incredibly difficult to ask a large language model to consistently provide you with the truth, in part because of the inherent flaws of the way the systems are designed. I do expect they will continue to get better, but also because truth is at times subjective, and it is a challenging environment to guarantee or ensure anything.”
The committee asked about situations when chatbots provide incorrect or manipulative outputs. Badanes noted the importance of public trust in AI, saying it is lost when the system does not answer a question.
The witnesses said Facebook and Microsoft are working to improve factual alignment, provide citations and, in some cases, indicate levels of confidence in responses. They also emphasised the importance of AI literacy and managing expectations of what services chatbots should provide.
The most difficult questions centred on accountability. When asked who should be responsible if someone suffers harm after relying on incorrect or manipulative AI outputs, such as bad legal advice or encouragement of self-harm, executives stopped short of proposing a specific legal framework.
Microsoft’s Badanes said accountability should attach “where there’s meaningful control”, suggesting responsibility may vary depending on whether harm stems from the model itself, its deployment, or a malicious user. Meta’s Sherman agreed courts would likely need to examine “multiple players” in any given case.
Parental controls
Sherman noted that age verification often varies from app to app, and said standardised, platform-level verification does not exist in the current ecosystem, but would be valuable.
Badanes emphasised the variation in experiences of AI across platforms. “A chatbot where a child can form relationships is going to be a higher-risk scenario than potentially a tutoring app,” she said, encouraging a risk-based approach to AI governance rather than attempting to apply a single age-based threshold across AI tools.
“It’s not just about restricting access, we also need to build these age-appropriate designs and safety guardrails – it’s about adding clear boundaries into the system from the very beginning,” said Badanes.
Existential risks from AI
Asked if individuals should be able to opt out of AI entirely, Sherman said AI has been embedded in services such as Facebook and Instagram “since the beginning”, from news feed ranking to spam filtering. “I don’t think that opting out of AI as a technology is probably realistic,” he said, warning against the idea that it would be possible to “wall off AI from the rest of technology”.
Sherman and Badanes pushed back against binary artificial general intelligence narratives, such as the 2023 statement from the Center for AI Safety, signed by many leaders in the tech industry, which warned of possible risks of extinction from AI.
Sherman said: “I think the reality is maybe a little bit less exciting and a little bit more mundane, which is that the technology will continue to improve iteratively. I don’t think we’re in a situation where we’re going to wake up one day, and the world is vastly different.”
Badanes described existential harm as “low-probability, high-impact”, stressing that companies are focused on managing both long-term and immediate dangers. “We have to address the risks in the here and now,” said Sherman, adding that firms should continue to plan for more extreme scenarios.
Both firms pointed to internal governance structures, including red-teaming exercises, external expert consultation and frontier risk frameworks. Sherman told MPs that through the Frontier programme, Meta evaluates models for “chemical, biological, cyber security and autonomy risks” before and after deployment.
They also emphasised the importance of collaboration with governments, noting that states hold intelligence and national security information unavailable to the private sector.
Speaking to the committee in a separate session, AI minister Kanishka Narayan praised the UK’s AI Security Institute, saying it provides “unparalleled pre-deployment access” to advanced models and plays a key role in developing international evaluation standards.
Badanes likened AI to nuclear regulation. “There are a lot of really complicated challenges that we as a big, large society, have been able to resolve that have had similar roots,” she said.
However, MPs raised concerns about AI researchers who have left major companies over safety disagreements. Asked whether voluntary corporate safeguards were sufficient, Sherman responded that firms have “clear internal reporting mechanisms” and “encourage dissent”, but stopped short of calling for binding global treaties.
Industry leaders urged policymakers to prioritise “interoperable, risk-based global standards” for the most capable systems and invest in content provenance tools, including watermarking, to counter misinformation.
Narayan noted that compared with the first AI summit at Bletchley Park, the India AI Impact Summit was much more focused on the day-to-day experience of people rather than the more abstract questions of how AI might fundamentally transform the economy, or the longer-term risks it may pose.