Changpeng Zhao, the multibillionaire founder of crypto exchange Binance, spent four months last year locked in a federal prison. Since US president Donald Trump pardoned Zhao in October, the government has recast him as a martyr.
Zhao, who goes by CZ, pleaded guilty in November 2023 to failing to maintain an effective anti-money-laundering program at Binance. In parallel, Binance admitted to violating US sanctions and settled with financial regulators, which accused the company of failing to report suspicious transactions involving terror groups, child exploitation networks, and cybercriminals, among other violations. In a particularly incriminating exchange detailed in court documents, one Binance employee said to a colleague, “We see the bad, but we close 2 eyes.”
As part of their respective settlement deals, Zhao agreed to step down as Binance CEO, and Binance agreed to leave the US, accept supervision by a US-appointed compliance monitor, and pay a record $4.3 billion penalty.
Less than two years later, the narrative has flipped. On October 23, Trump granted Zhao a full pardon. The Binance founder was a victim of the “Biden administration’s war on crypto,” a White House spokesperson declared.
The decision to pardon Zhao will reverberate throughout the US crypto exchange market, which Binance could seek to reenter, legal experts say. It may also carry long-term political consequences for the crypto industry after Trump’s presidency ends.
Whether Zhao’s pardon was justified has been hotly disputed, particularly in light of connections between Binance and World Liberty Financial, a crypto business founded by Trump and his sons. (Through a corporate entity, the Trump family owns a 38 percent stake in World Liberty Financial’s parent company.) In May, Binance agreed to receive a $2 billion investment denominated in USD1, a coin issued by World Liberty Financial; the arrangement could earn World Liberty Financial tens of millions of dollars. In July, Bloomberg reported that Binance had developed the codebase for USD1.
Remarkably, Trump claims to know very little about Zhao. “OK, are you ready? I don’t know who he is,” Trump told 60 Minutes in an interview that aired on November 2. “I can only tell you this. My sons are into [crypto],” he said later in the interview.
Zhao’s legal representatives and industry allies have defended the pardon as a rightful corrective. “CZ is the first and only known first-time offender in US history to receive a prison sentence for this single, non-fraud-related charge,” wrote Teresa Goody Guillén, partner at law firm Baker & Hostetler, which represents Zhao, in a post on X.
Last month, the UK government announced the AI Skills Boost programme, promising “free AI training for all” and claiming that the courses will give people the skills needed to use artificial intelligence (AI) tools effectively. There are multiple reasons why we don’t agree.
US dependency over UK sovereignty
The “AI Skills Boost” is the free, badged “foundation” element of the government’s AI Skills Hub, which was launched with great fanfare. There are 14 courses, exclusively from big US organisations, promoting and training people on their platforms. The initiative increases dependency on US big tech – the opposite of the government’s recent commitment, in its new AI opportunities action plan, to position the UK “to be an AI maker, not an AI taker”. It is also unclear how increasing UK workers’ reliance on, and use of, US big tech tools and platforms is meant to grow the UK’s homegrown AI talent.
In stark contrast to President Macron’s announcement last week that the French government will phase out its dependency on US-based big tech by using local providers to enhance digital sovereignty and privacy, technology secretary Liz Kendall’s speech was a lesson in contradictions.
Right after affirming that AI is “far too important a technology to depend entirely on other countries, especially in areas like defence, financial services and healthcare”, the secretary of state went on to say that the country’s strategy is to adopt existing technologies developed overseas.
Microsoft, one of the founding partners for this initiative, has already admitted that “US authorities can compel access to data held by American cloud providers, regardless of where that data physically resides”, further acknowledging that the company will honour any data requests from the US state, regardless of where in the world the data is housed. Is this the sovereignty and privacy the UK government is trying to achieve?
Commercial content rather than quality skills provision
The AI Skills Hub indexes hundreds of AI-related courses. That means the hub, which cost £4.1m to build, is simply a bookmark or affiliate list of online courses and resources that are already available, with seemingly no quality control or oversight. The decision to award the contract to a “Big Four” commercial consultancy, PwC, rather than the proven national data, AI and digital skills providers who tendered, needs to be investigated.
The press releases focus on the “free” element of the training, but 60% of the courses are paid for – and even some of those marked as free turn out to charge – creating a misleading funnel towards paid commercial training providers.
The package as launched includes 595 courses, but only 14 have been benchmarked by Skills England, and there has been a critical outcry over the dangerously poor quality of many courses, some of which are 10 years old, don’t exist, or are simply poor-quality AI slop.
An example of why this is so concerning is that many courses are not relevant to the UK. One of the promoted courses has already been shown to misrepresent UK law on intellectual property, and its creators have since denied having any contractual arrangement with the site, admitting that they were “not consulted before our materials were posted and linked from there”.
Warnings on the need for public AI literacy provision ignored
Aside from concerns over the standards, safety, sovereignty and cost of the content offered, there is a much bigger issue, which we have been warning about.
Currently, 84% of the UK public feel disenfranchised and excluded from AI decision-making and mistrust key institutions, while 91% prioritise safe and fair use of AI over economic gain and adoption speed.
In 2021, the UK’s AI Council provided a roadmap for developing the UK’s National AI Strategy. It advised on programmes of public and educational AI literacy that go beyond teaching technical or practical skills. This call has been repeated, especially in the wake of greater public exposure to generative AI since 2023, which requires the public not just to know how to prompt or code, but to use critical thinking to navigate the technology’s wider implications.
In July 2025, we represented a number of specialists, education experts and public representatives in an open letter calling for investment in the UK’s AI capabilities beyond being passive users of US tools. Despite initial agreements from the Department for Education and the Department for Science, Innovation and Technology to meet and discuss the letter, the offer was later rescinded.
Without comprehensive public understanding and sustained engagement, developing AI for public good and maintaining public trust will be a significant challenge. By investing in independent AI literacy initiatives that are accessible to all, and not just aimed at onboarding uncritical users and consumers, the UK can help to ensure that its AI future is shaped with the public’s benefit at its heart.
Wasted opportunity to develop a beneficial UK approach to AI
We need to have greater national ambition than simply providing skills training. That the only skills provision on offer is substandard, and provided – at great public cost – by those with commercial interests in controlling how people think about and use AI, is a further insult.
Indeed, Kendall’s claim that AI has the potential to add £400bn to the economy by 2030 is lifted from a report produced by a sector consultancy that focuses only on the positive impact of Google technologies in the UK. Her announcement leaned heavily on claims such as “AI is now the engine of economic power and of hard power” – language straight from the Silicon Valley playbook.
The focus on practical skills undermines the nation’s AI and tech sovereignty and harms the economy, with money leaving the country to fund big tech. It entrenches political disenfranchisement, with decisions about AI framed as too complex for the general population to meaningfully engage with. And it rests on fictitious narratives about inevitable big tech AI futures, in which public voice and public good are irrelevant.
If you wish to sign a second version of the open letter, which we are currently drafting, or to submit a critical AI literacy resource to We and AI’s resource hub, contact us here.
This article is co-authored by:
Tania Duarte, founder, We and AI
Bruna Martins, director at Tecer Digital
Dr. Elinor Carmi, senior lecturer in data politics and social justice, City St. George’s University of London
Dr. Mark Wong, head of social and urban policy, University of Glasgow
Dr. Susan Oman, senior lecturer in data, AI & society, The University of Sheffield
Ismael Kherroubi Garcia, founder & CEO, Kairoi
Cinzia Pusceddu, senior fellow of the Higher Education Academy, independent researcher
Dylan Orchard, postgraduate researcher, King’s College London
Tim Davies, director of research & practice, Connected by Data
Steph Wright, co-founder & managing director, Our AI Collective
Peter Thiel—the billionaire venture capitalist, PayPal and Palantir cofounder, and outspoken commentator on all matters relating to the “Antichrist”—appears at least 2,200 times in the latest batch of files released by the Department of Justice related to convicted sex offender and disgraced financier Jeffrey Epstein.
The tranche of records demonstrates how Epstein managed to cultivate an extensive network of wealthy and influential figures in Silicon Valley. A number of them, including Thiel, continued to interact with Epstein even after his 2008 guilty plea to solicitation of prostitution and procurement of minors for prostitution.
The new files show that Thiel arranged to meet with Epstein several times between 2014 and 2017. “What are you up to on Friday?” Thiel wrote to Epstein on April 5, 2016. “Should we try for lunch?” The bulk of the communications between the two men in the data dump concern scheduling meals, calls, and meetings with one another. Thiel did not immediately return a request for comment from WIRED.
One piece of correspondence stands out for being particularly bizarre. On February 3, 2016, Thiel’s former chief of staff and senior executive assistant, Alisa Bekins, sent an email with the subject line “Meeting – Feb 4 – 9:30 AM – Peter Thiel dietary restrictions – CONFIDENTIAL.” The initial recipient of the email is redacted, but it was later forwarded directly to Epstein.
The contents of the message are also redacted in at least one version of the email chain uploaded by the Justice Department on Friday. However, two other files from what appears to be the same set of messages have less information redacted.
In one email, Bekins listed some two dozen approved kinds of sushi and animal protein, 14 approved vegetables, and zero approved fruits for Thiel to eat. “Fresh herbs” and “olive oil” were permitted; ketchup, mayonnaise, and soy sauce, however, were to be avoided. Only one actual meal was explicitly outlined: “egg whites or greens/salad with some form of protein,” such as steak, which Bekins included “in the event they eat breakfast.” It’s unclear whether the February 4 meeting ultimately occurred; other emails indicate Thiel got stuck in traffic on his way to meet Epstein that day.
According to a recording of an undated conversation between Epstein and former Israeli Prime Minister Ehud Barak that was also part of the files the DOJ released on Friday, Epstein told Barak that he was hoping to meet Thiel the following week. He added that he was familiar with Thiel’s company Palantir, but proceeded to spell it out loud for Barak as “Pallentier.” Epstein speculated that Thiel may put Barak on the board of Palantir, though there’s no evidence that ever occurred.
“I’ve never met Peter Thiel, and everybody says he sort of jumps around and acts really strange, like he’s on drugs,” Epstein said at one point in the audio recording, referring to Thiel. The former prime minister expressed agreement with Epstein’s assessment.
In 2015 and 2016, Epstein put $40 million into two funds managed by one of Thiel’s investment firms, Valar Ventures, according to The New York Times. Epstein and Thiel continued to communicate and were discussing meeting with one another as recently as January 2019, according to the files released by the DOJ. Epstein died by suicide in his prison cell in August of that year.
Below are Thiel’s dietary restrictions as outlined in the February 2016 email. (The following list has been reformatted slightly for clarity.)
APPROVED SUSHI + APPROVED PROTEIN
Kaki Oysters
Bass
Nigiri
Beef
Octopus
Catfish
Sashimi
Chicken
Scallops
Eggs
Sea Urchin
Lamb
Seabass
Perch
Spicy Tuna w Avocado
Squid
Turkey
Sweet Shrimps
Whitefish
Tobiko
Tuna
Yellowtail
Trout
APPROVED VEGETABLES
Artichoke
Avocado
Beets
Broccoli
Brussels sprouts
Cabbage
Carrots
Cucumber
Garlic
Olives
Onions
Peppers
Salad greens
Spinach
APPROVED NUTS
Anything unsalted and unroasted
Peanuts
Pecans
Pistachios
CONDIMENTS
Most fresh herbs, and olive oil
AVOID
Dairy
Fruits
Gluten
Grains
Ketchup
Mayo
Mushroom
Processed foods
Soy Sauce
Sugar
Tomato
Vinegar
MEAL SUGGESTIONS
Breakfast: Egg whites or greens/salad with some form of protein (steak, etc.)
Elon Musk’s rocket and satellite company SpaceX is acquiring his AI startup xAI, the centibillionaire announced on Monday. In a blog post, Musk said the acquisition was warranted because global electricity demand for AI cannot be met with “terrestrial solutions,” and Silicon Valley will soon need to build data centers in space to power its AI ambitions.
“In the long term, space-based AI is obviously the only way to scale,” Musk wrote. “The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space. I mean, space is called ‘space’ for a reason.”
The deal, which pulls together two of Musk’s largest private ventures, values the combined entity at $1.25 trillion, making it the most valuable private company in the world, according to a report from Bloomberg.
SpaceX had been preparing to go public later this year before the xAI acquisition was announced. The space firm’s plans for an initial public offering are still on, according to Bloomberg.
In December, SpaceX told employees that it would buy insider shares in a deal that would value the rocket company at $800 billion, according to The New York Times. Last month, xAI announced that it had raised $20 billion from investors, bringing the company’s valuation to roughly $230 billion.
This isn’t the first time Musk has sought to consolidate parts of his vast business empire, which is largely privately owned and includes xAI, SpaceX, the brain interface company Neuralink, and the tunnel transportation firm the Boring Company.
Last year, xAI acquired Musk’s social media platform, X, formerly known as Twitter, in a deal that valued the combined entity at more than $110 billion. Since then, xAI’s core product, Grok, has become further integrated into the social media platform. Grok is featured prominently in various X features, and Musk has claimed the app’s content-recommendation algorithm is powered by xAI’s technology.
A decade ago, Musk also used shares of his electric car company Tesla to purchase SolarCity, a renewable energy firm that was run at the time by his cousin Lyndon Rive.
The xAI acquisition demonstrates how Musk can use his expansive network of companies to help power his own often grandiose visions of the future. Musk said in the blog post that SpaceX will immediately focus on launching satellites into space to power AI development on Earth, but eventually, the space-based data centers he envisions could power civilizations on other planets, such as Mars.
“This marks not just the next chapter, but the next book in SpaceX and xAI’s mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars,” Musk said in the blog post.