“Increasingly, I’m coming back to running product and working with the vice president of tech on some artificial intelligence (AI) projects and getting very hands-on myself,” says Wolf & Badger CEO and co-founder George Graham.
“It’s intellectually challenging, stimulating and intriguing – and I want to learn more about it. I’m trying to get as much info as I can on what I consider to be the most interesting tech advancement of my professional work lifetime.”
Not the words of a head of engineering, CIO, or technology executive, but those of the CEO of the online marketplace, whose business world continues to be lit up by the opportunity to use AI across multiple operations.
And why not? In January 2026, Wolf & Badger released a performance update to mark 15 years of trading, reporting it had now surpassed $500m in cumulative sales since inception and achieved almost 40 million website visits in 2025 alone, while reinforcing its reputation as an ethical platform by securing B-Corp recertification.
Wolf & Badger partners with independent brands promising strong ethics, and effectively becomes their tech stack and online operations provider. It is the conduit for these brands to achieve the scale that few organisations of their size can achieve alone.
The business achieved annual sales of $100m (£75m) in 2024, with more than 2,000 brand partners now in place, helping the London-headquartered operation grow globally. And ongoing investment in “AI-driven discovery and on-site personalisation” is delivering a measurable impact, with the company talking about £3.2m of directly attributable incremental sales from recent AI initiatives.
“There’s tremendous opportunity to improve the efficiency and discovery on Wolf & Badger by better understanding our shoppers and our brands and the products they sell,” he says.
“There’s lots around AI on image recognition and product tagging to build out better information related to style or what event a product would be suitable for, and using that to surface more relevant products to the user at the right time – all with the end point of making life more exciting and creating inspirational shopping experiences.”
Are we all product managers now?
While the work on using AI to power the online experience is not uncommon in e-commerce today, Graham’s attitude as CEO of the marketplace is. He is a CEO getting his hands dirty with the tech, which is rare in retail.
“I have personally spent many hundreds of hours over the past three months getting my head around AI and the future of commerce – with agentic commerce in mind,” he explains.
“Claude Code has become my go-to app. I have built a fairly bespoke AI agent ‘chief of staff’ that is connected to my tools via MCPs [model context protocols] or APIs [application programming interfaces], with a bunch of bespoke skills and scripts that I have ended up building into that.”
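The "chief of staff" agent Graham describes is essentially a model wired to a set of named tools. A minimal sketch of that pattern is below; the tool names and handlers are invented for illustration, and this is plain Python rather than the actual MCP SDK or Graham's setup, which would expose tools over MCP or vendor APIs.

```python
# Hypothetical sketch of the "chief of staff" pattern: an agent layer that
# routes requests to registered tools. Names and handlers are invented;
# a real setup would expose these over MCP or call vendor APIs.
from typing import Callable, Dict

class ToolRegistry:
    """Maps tool names to handler functions, like a minimal tool server."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def tool(self, name: str):
        """Decorator that registers a handler under a tool name."""
        def register(fn: Callable[..., str]) -> Callable[..., str]:
            self._tools[name] = fn
            return fn
        return register

    def call(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.tool("summarise_sales")
def summarise_sales(period: str) -> str:
    # In a real agent this would query an analytics API.
    return f"Sales summary for {period}"

print(registry.call("summarise_sales", period="Q1"))
```

The "bespoke skills and scripts" Graham mentions would each slot in as another registered tool in a registry like this one.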
Graham says the collective memory stack is getting more powerful by the day, and using it has improved his own working practices.
“I feel twice as efficient as I was six months ago. I have taken that time and – in the short term – continued reinvesting it in understanding AI.”
The AI assistant Graham has developed is being made accessible via Stack internally, and the wider team is getting set up on Claude Code themselves with access to their own version.
The CEO acknowledges he isn’t an engineer or coder, but as a teenager, he would make games in Basic (Beginner’s All-purpose Symbolic Instruction Code) and design websites with HTML. After studying business at university, he joined PwC as a strategy consultant. By the age of 23, he was starting Wolf & Badger and “had to figure out how to build a marketplace as there wasn’t really marketplace software out there”.
When development at Wolf & Badger was brought in-house – today, it has a vice-president of technology, around a dozen people in engineering and others in product management and design – Graham continued to play a part in building out features to support brand partners and customers.
“I have always found all that fascinating,” he explains.
“Over the years, I stepped away as we brought the experts in – but, increasingly, I’m coming back to running product and working with our vice-president of tech on some of the AI projects and getting very hands-on myself.”
Graham, who founded Wolf & Badger with his brother Henry in 2010, admits he doesn’t fully understand the finer nuances of coding and doesn’t have the experienced engineer’s eye. But with the new tech available, he suggests “anyone can be a product manager or software developer” now.
“I have been able to create prototypes – I have built things that assess brands coming on the platform and help the sustainability team with vetting,” he says.
The software his internal teams are now using is set up on GitHub, and built on Eversell MSL front-end, Supabase, and other apps. “Everything is hooked up in what I think is reasonably robust for internal use,” he adds.
He urges other leaders in retail and wider business not to be afraid, and to experiment with the tools now available. There’s a lot that can be built on just a small monthly tech subscription outlay, he notes.
The wider tech team at Wolf & Badger initially experimented with solutions such as Microsoft Copilot and then Cursor.
“Only in the past few months have our engineers found the quality is at a point where they can lean on it more to start actually writing code. We’re keeping clear of the vibe coding in key sensitive areas, of course, but we can experiment in lots of spaces.”
The tech exploration work Graham has taken on – and there is much more to come, he says – is to “ready ourselves for agentic commerce and make sure we’re ahead of the pack”.
Since the turn of the year, there have already been some noteworthy developments in agentic commerce that further underline it as a future direction of travel for e-commerce and, therefore, something e-commerce and retail leaders must get to grips with.
Agentic commerce and UCP
The new year started with JD Sports announcing it is enabling consumers to use AI platforms to search for and purchase products – all in a single click, without leaving the apps.
JD customers in the US can purchase directly through Copilot, and – in due course – this will be followed by the ability to do so via Google Gemini and ChatGPT. JD is leveraging the agentic commerce suite of tech players Commercetools and Stripe.
Jetan Chowk, JD’s chief technology and transformation officer, said the move was about meeting customers “where they are”. It came after OpenAI announced in September that US shoppers could buy from Etsy directly through ChatGPT.
It started with an “instant checkout” to support single-item purchases, but multi-item carts are now on their way to being a reality.
Then, at the January 2026 NRF Big Show in New York, where many from the retail technology community congregate every year, Google launched the Universal Commerce Protocol (UCP), an open standard for agentic commerce that aims to establish a common language for AI agents and systems to operate together across consumer surfaces, businesses and payment providers.
This is a fast-moving space, but UCP was co-developed with prominent retail industry players such as Shopify, Etsy, Wayfair, Target and Walmart, and endorsed by more than 20 others across the ecosystem, including payments companies Adyen, American Express, Mastercard and Visa.
Graham is in close conversations with Stripe and Google, attending their events and regularly tuning into their updates.
“The work Stripe is doing with agentic commerce protocol and standardising the mechanism by which people can shop directly via the agents is super interesting,” he says. “Google and Shopify UCP is a further move towards a standardisation of how this is going to work.”
Graham is confident more consumer discovery will be conducted on Google’s AI-powered platforms, on ChatGPT, Perplexity and in other similar spaces.
“We need to ensure we’re supporting the 2,000 brands we’re working with to appear in the right way on those channels and facilitate the tech that can support one- or zero-click checkout, where an agent has the ability to buy on a consumer’s behalf.”
He is confident that a platform such as Wolf & Badger can play a key role in the agentic space. Individual brands will typically struggle to build out the right metadata and set up UCP in a way that is recognised by a human in the loop or an AI agent.
Graham says: “If we can wrap together the best independent brands and collectively go to a shopping agent to ensure those brands appear in the right places, we’re well placed to capture some of that demand and drive it towards the individual brands we work with, rather than the resulting purchases ending up with the bigger homogenised brands in our space.”
He adds that Wolf & Badger’s presence harks back to the pre-digital days of boutique shopping in-store, but with the right technology investment and focus now, it can deliver this in a “scaled way” online and through its showrooms.
“Our editorial and marketing team still make the creative calls, but we’re able to drive it forward with some of these new bits of tech,” he says, adding that as Wolf & Badger extends its technological nous, it can enable its brands to focus on “the difficult part” of commerce – meaning the design and manufacturing of compelling garments and consumer products.
Rapidly evolving space
As for the immediate future at Wolf & Badger, the US expansion is a key focus – as are ventures across Europe and into the Middle East. An expanded brand partnerships function within the business is expected to support the onboarding of new designers from around the world.
But AI continues to be an area of significant exploration, with Graham confident that his experimentation and use of cost-effective tools are improving how the business operates.
“It’s a rapidly evolving space – everything is changing these days,” he says, adding that it’s getting increasingly difficult to understand what will come next due to the acceleration of technological capability.
“You just have to try to stay ahead,” he says. “We’re repositioning ourselves in making sure we are embracing AI in the way I think any forward-thinking growth company should, and recognising the power it can bring to enable us to do much more for our brands and shoppers.”
Neuroscientists know that there is a link between loneliness and cognitive decline in older adults, although it is still difficult to understand the exact magnitude of the link. A new longitudinal study provides evidence that a proportion of people who feel lonely end up having more memory impairment, though this doesn’t necessarily mean that their brains age faster.
The report, published in Aging & Mental Health, shows that older adults with higher levels of loneliness scored lower on tests of immediate and delayed recall. Even so, the rate at which their memory declined over six years was virtually identical to those who were not lonely.
“It suggests that loneliness may play a more prominent role in the initial state of memory than in its progressive decline,” said Luis Carlos Venegas-Sanabria of the School of Medicine and Health Sciences at Universidad del Rosario, who led the research. “The study underscores the importance of addressing loneliness as a significant factor in the context of cognitive performance in older adults.”
Six-Year Study of Thousands of Older Adults
The team analyzed data from the Survey of Health, Ageing and Retirement in Europe (SHARE), one of the most robust longitudinal databases for studying aging. For six years, the researchers followed 10,217 adults, aged 65 to 94, from 12 European countries. They assessed their level of loneliness and their performance on memory tests.
The results show that age was the most important determinant of memory level and speed of decline. From the age of 75 onwards, scores began to fall more rapidly. After 85 the decline became more pronounced. Depression and chronic diseases such as diabetes also reduced the initial score. Loneliness, while influencing the starting point, did not accelerate the slope of cognitive decline.
The study also found that physical activity was associated with better initial memory scores. People who engaged in moderate or vigorous physical activity at least once a month recalled more words on immediate and delayed recall tests. This effect did not change the speed of decline, but it did raise the baseline level, which functions as a kind of “cognitive buffer.”
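The study's central distinction is statistical: loneliness shifts the *intercept* (the starting memory score) without changing the *slope* (the rate of decline). A toy illustration of that pattern is below; the numbers are invented for the sketch and are not SHARE data.

```python
# Toy illustration (invented numbers, not SHARE data) of the study's key
# finding: loneliness lowers the baseline memory score, but the rate of
# decline over six years is the same for both groups.

def recall_score(years: float, lonely: bool) -> float:
    baseline = 9.0 - (1.5 if lonely else 0.0)  # lonely group starts lower
    slope = -0.4                               # identical rate of decline
    return baseline + slope * years

lonely = [recall_score(t, lonely=True) for t in range(7)]
not_lonely = [recall_score(t, lonely=False) for t in range(7)]

# The gap between groups stays constant over all six years:
gap = [b - a for a, b in zip(lonely, not_lonely)]
print(gap)  # same slope, different intercept
```

In this picture, the "cognitive buffer" from physical activity works the same way: it raises the baseline term rather than flattening the slope.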
Although the study does not explore the causes of the link between loneliness and cognition, previous research has proposed plausible mechanisms. Loneliness is often associated with less social interaction, a factor that influences cognitive performance. It is also associated with increased risk of depression, which does directly affect memory tests. In addition, lonely people tend to have more health problems, such as hypertension or diabetes, which also affect cognitive function.
By 2050, according to United Nations projections, one in six people in the world will be over the age of 65. Societies are entering a stage where old age will no longer be the exception but will become the norm. Dementia, as well as other neurodegenerative diseases that appear with age, will be a major challenge for health care institutions.
Chances are, you’ve already used a satellite today. Satellites make it possible for us to stream our favorite shows, call and text a friend, check weather and navigation apps, and make an online purchase. Satellites also monitor the Earth’s climate, the extent of agricultural crops, wildlife habitats, and impacts from natural disasters.
As we’ve found more uses for them, satellites have exploded in number. Today, there are more than 10,000 satellites operating in low-Earth orbit. Another 5,000 decommissioned satellites drift through this region, along with over 100 million pieces of debris comprising everything from spent rocket stages to flecks of spacecraft paint.
For MIT’s Richard Linares, the rapid ballooning of satellites raises pressing questions: How can we safely manage traffic and growing congestion in space? And at what point will we reach orbital capacity, where adding more satellites is not sustainable, and may in fact compromise spacecraft and the services that we rely on?
“It is a judgement that society has to make, of what value do we derive from launching more satellites,” says Linares, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the things we try to do is approach these questions of traffic management and orbital capacity as engineering problems.”
Linares leads the MIT Astrodynamics, Space Robotics, and Controls Lab (ARCLab), a research group that applies astrodynamics (the motion and trajectory of orbiting objects) to help track and manage the millions of objects in orbit around the Earth. The group also develops tools to predict how space traffic and debris will change as operators launch large satellite “mega-constellations” into space.
He is also exploring the effects of space weather on satellites, as well as how climate change on Earth may limit the number of satellites that can safely orbit in space. And, anticipating that satellites will have to be smarter and faster to navigate a more cluttered environment, Linares is looking into artificial intelligence to help satellites autonomously learn and reason to adapt to changing conditions and fix issues onboard.
“Our research is pretty diverse,” Linares says. “But overall, we want to enable all these economic opportunities that satellites give us. And we are figuring out engineering solutions to make that possible.”
Grounding practical problems
Linares was born and raised in Yonkers, New York. His parents both worked as school bus drivers to support their children, Linares being the youngest of six. He was an active kid and loved sports, playing football throughout high school.
“Sports was a way to stay focused and organized, and to develop a work ethic,” Linares says. “It taught me to work hard.”
When applying for colleges, rather than aim for Division I schools like some of his teammates, Linares looked for programs that were strong in science, specifically in aerospace. Growing up, he was fascinated with Carl Sagan’s “Cosmos” docuseries. And being close to Manhattan, he took regular trips to the Hayden Planetarium to take in the center’s immersive projections of space and the technologies used to explore it.
“My interest in science came from the universe and trying to understand our place within it,” Linares recalls.
Choosing to stay close to home, he applied to in-state schools with strong aeronautical engineering departments, and happily landed at the State University of New York at Buffalo (SUNY Buffalo), where he would ultimately earn his bachelor’s, master’s, and doctoral degrees, all in aerospace engineering.
As an undergraduate, Linares took on a research project in astrodynamics, looking to solve the problem of how to determine the relative orientation of satellites flying in formation.
“Formation flying was a big topic in the early 2000s,” Linares says. “I liked the flavor of the math involved, which allowed me to go a layer deeper toward a solution.”
He worked out the math to show that when three satellites fly together, they essentially form a triangle, the angles of which can be calculated to determine where each satellite is in relation to the other two at any moment in time. His work introduced a new controls approach to enable satellites to fly safely together. The research had direct applications for the U.S. Air Force, which helped to sponsor the work.
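The geometric core of that idea is simple to sketch: three satellite positions define a triangle, and the law of cosines recovers the interior angle at each vertex from the inter-satellite distances. The positions below are invented for illustration; the actual formation-flying controls work is far more involved.

```python
# Sketch of the formation geometry (positions invented): three satellites
# form a triangle, and the law of cosines gives the angle at each vertex
# from the three inter-satellite distances.
import math

def angles_of_triangle(p1, p2, p3):
    """Return the interior angles (radians) at p1, p2, p3."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))  # angle at p1
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))  # angle at p2
    C = math.pi - A - B                                   # angles sum to pi
    return A, B, C

# An equilateral formation gives three 60-degree angles.
A, B, C = angles_of_triangle((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
print(round(math.degrees(A)), round(math.degrees(B)), round(math.degrees(C)))
```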
As he expanded the research into a master’s thesis, Linares also took opportunities to work directly with the Air Force on issues of satellite tracking and orientation. He served two internships with the U.S. Air Force Research Lab, one at Kirtland Air Force Base in Albuquerque, New Mexico, and the other in Maui, Hawaii.
“Being able to collaborate with the Air Force back then kind of grounded the research in practical problems,” Linares says.
For his PhD, he turned to another practical problem of “uncorrelated tracks.” At the time, the Air Force operated a network of telescopes to observe more than 20,000 objects in space, which they were working to label and record in a catalog to help them track the objects over time. But while detecting objects was relatively straightforward, the challenge came in correlating a detected object with what was already in the catalog. In other words, is what they were seeing something they had already seen?
Linares developed image analysis techniques to identify key characteristics of objects such as their shape and orientation, which helped the Air Force “fingerprint” satellites and pieces of space debris, and track their activity — and potential for collisions — over time.
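The correlation step can be sketched as a nearest-neighbour match: compare a new detection's feature vector against the catalog and either assign it to the closest known object or flag it as a new, uncorrelated track. The catalog entries, feature values and threshold below are all invented for illustration; the real "fingerprinting" work used richer image-derived characteristics.

```python
# Hedged sketch of the "uncorrelated tracks" idea: match a detection's
# feature vector (shape/orientation descriptors, invented here) to the
# closest catalogued object, or flag it as new if nothing is close enough.
import math

catalog = {
    "SAT-A": (0.9, 0.1, 0.3),
    "SAT-B": (0.2, 0.8, 0.5),
    "DEBRIS-1": (0.4, 0.4, 0.9),
}

def correlate(detection, threshold=0.3):
    """Return the ID of the nearest catalog entry, or None if uncorrelated."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_id, best_d = min(
        ((obj_id, dist(detection, feat)) for obj_id, feat in catalog.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_d <= threshold else None

print(correlate((0.85, 0.15, 0.25)))  # close to SAT-A's fingerprint
print(correlate((0.0, 0.0, 0.0)))     # nothing nearby: an uncorrelated track
```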
After completing his PhD, Linares worked as a postdoc at Los Alamos National Laboratory and the U.S. Naval Observatory. During that time he expanded his aerospace work to other areas including space weather, using satellite measurements to model how Earth’s ionosphere — the upper layer of the atmosphere that is ionized by the sun’s radiation — affects satellite drag.
He then accepted a position as assistant professor of aerospace engineering at the University of Minnesota at Minneapolis. For the next three years, he continued his research in modeling space weather, tracking space objects and coordinating satellites to fly in swarms.
Making space
In 2018, Linares made the move to MIT.
“I had a lot of respect for the people and for the history of the work that was done here,” says Linares, who was especially inspired by the legendary Charles Stark “Doc” Draper, who developed the first inertial guidance systems in the 1940s that would enable the self-navigation of airplanes, submarines, satellites, and spacecraft for decades to come. “This was essentially my field, and I knew MIT was the best place to continue my career.”
As a junior faculty member in AeroAstro, Linares spent his first years focused on an emerging challenge: space sustainability. Around that time, the first satellite constellations were launching into low-Earth orbit with SpaceX’s Starlink, which aimed to provide global internet coverage via a huge network of several thousand coordinating satellites. The launching of so many satellites, into orbits that already held other active and nonactive satellites, along with millions of pieces of space debris, raised questions about how to safely manage the satellite traffic and how much traffic an orbit can sustain.
“At what level do we reach a tipping point, where we have too many satellites in certain orbital regimes?” Linares says. “It was kind of a known problem at the time, but there weren’t many solutions.”
Linares’ group applied an understanding of astrodynamics, and the physics of how objects move in space, to figure out the best way to pack satellites into orbital “shells”, or lanes, that would most likely prevent collisions. They also developed a state-of-the-art model of orbital traffic that can simulate the trajectories of more than 10 million individual objects in space. Previous models were much more limited in the number of objects they could accurately simulate. Linares’ open-source model, called the MIT Orbital Capacity Assessment Tool, or MoCAT, can account for the millions of pieces of space debris, in addition to the many intact satellites in orbit.
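One reason capacity is a hard engineering problem is that the screening workload grows faster than the population: the number of object pairs that could have a close approach in a shell scales quadratically. A toy illustration, which is emphatically not the MoCAT model itself:

```python
# Toy illustration (not the MoCAT model) of why orbital capacity is a
# nonlinear problem: the number of possible close-approach pairs in a
# shell grows quadratically with the number of objects.

def close_approach_pairs(n_objects: int) -> int:
    """Distinct object pairs that could conjunct: n choose 2."""
    return n_objects * (n_objects - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(n, close_approach_pairs(n))
```

Doubling the population roughly quadruples the pairs to screen, which is why a traffic model has to scale to millions of tracked objects to be useful.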
The tools that his group has developed are used today by satellite operators to plan and predict safe spacecraft trajectories. His team is continuing to work on problems of space traffic management and orbital capacity. They are also branching out into space robotics. The team is testing ways to teleoperate a humanoid robot, which could potentially help to build future infrastructure and carry out long-duration tasks in space.
Linares is also exploring artificial intelligence, including ways that a satellite can autonomously “learn” from its experience and safely adapt to uncertain environments.
“Imagine if each satellite had a virtual Doc Draper onboard that could do the debugging that we did from the ground during the Apollo missions,” Linares says. “That way, satellites would become instantaneously more robust. And it’s not taking the human out of the equation. It’s allowing the human to be amplified. I think that’s within reach.”
Every time I’ve written about Meta’s AI-enabled glasses, I invariably get asked these questions: Why do you even want these? Why do you want smart glasses that can play music or misidentify native flora in a weirdly cheery voice? I am a lifelong Ray-Ban Wayfarer wearer, and I’m also WIRED’s resident Meta wearer. I grab a pair of Meta glasses whenever I leave the house because I like being able to use one device instead of two or three on a walk. With Meta glasses, I can wear sunglasses and workout headphones in one!
Meta sold more than 7 million pairs in 2025. Take a look at any major outdoor or sporting event, and you’ll see more than a few people wearing these to record snippets for Instagram or TikTok. Meta’s partnership with EssilorLuxottica has made smart glasses accessible, stylish, and useful and is undoubtedly the reason why Google, and now Apple, are trying to horn in on the market. After the notable flop that is the Apple Vision Pro, Apple is recalibrating its face-wearable strategy, moving away from augmented reality (AR) toward simpler, display-less, and hopefully good-looking glasses.
That’s not to say that you shouldn’t be careful how you use these glasses. Meta doesn’t have the greatest track record on privacy, and the company has continued to push forward with policies that are questionable at best. Even if you’re not concerned that face recognition will allow Meta to target immigrants or enable stalkers to find their victims, at the very least, people really do not like the idea that you could start recording them at any moment.
Probably the biggest hurdle to wearing Meta glasses is that even doing so seems like a gross violation of the social contract. After all, these are Mark Zuckerberg’s “pervert glasses.” When I pop these on my head, I’ve had friends (and my spouse) recoil and say, “I have apps to warn me away from people like you.” The best part, though, is that Oakley and Ray-Ban already make really great sunglasses. Even if the battery runs out or you don’t use Meta AI at all, these are stellar at shading your eyes from the sun.
Last year, Meta upgraded the original Meta Ray-Ban Wayfarers that became a smash hit. These are Meta’s entry-level glasses, and they come in a variety of lens styles. You can order them with clear lenses, prescription lenses, transition lenses, or the OG sunglass lenses, as well as in a variety of fits, including standard, large, or high-bridge frames. Improvements to this generation include an upgrade to a 12-MP camera and up to eight hours of battery life; writer Boone Ashworth’s testing clocked in at five to six hours.