Companies that pay ransom demands to cyber criminals in the hope of restoring their IT systems may be at risk of greater negative publicity than those that refuse.
An initial analysis of data seized by the National Crime Agency (NCA) in the takedown of the LockBit ransomware group suggests that the best way to avoid bad publicity may be to refuse to pay up.
Max Smeets, author of the book Ransom War, was given supervised access to data on LockBit 3.0 seized by the NCA during Operation Chronos, which took down the LockBit ransomware operation, and examined leaked data from LockBit 4.0.
Smeets compared press reporting of 100 companies that paid ransoms with reporting on 100 companies that refused to pay.
“It turns out that you are more likely to have a story written about you if you have paid than if you have not paid,” he said in an interview with Computer Weekly.
Smeets’ conclusions fly in the face of claims by criminal ransomware gangs that companies that pay up can avoid bad publicity. He likens this to the Streisand effect: in paying a ransom to avoid publicity, companies end up attracting the very publicity they were trying to avoid.
Law enforcement has long argued that companies should not pay ransom fees because it supports the ransomware ecosystem and there is no guarantee that they will get their data back.
“What the data also suggests is that you also shouldn’t pay if you are afraid of public exposure,” said Smeets, speaking to Computer Weekly at the Black Hat security conference in London.
The art of the bad deal
Smeets’ analysis also revealed just how ill-prepared many organisations were when negotiating ransomware payments with LockBit’s criminal affiliates.
Some companies told crime gangs upfront that they were desperate to get their data back as they had no backups, putting them instantly on the back foot in negotiations.
Others tried unsuccessfully to win sympathy from the hackers by claiming that they couldn’t afford to pay the ransom, or that they served the local community.
Smeets also found that some victims had sent ransomware gangs copies of their insurance documents to show how much they could afford to pay.
His findings show that companies need to be better prepared for ransomware negotiations if the worst happens.
“There is a major opportunity, especially for small and medium-sized enterprises, to become better in understanding how to engage with these criminals without making extreme and obvious mistakes,” he said.
LockBit’s criminal affiliates follow a standard playbook for negotiating ransom payments, which typically involves demanding an initial ransom, offering to decrypt two files for free, and threatening to leak data if organisations don’t pay up.
Smeets found that the criminal groups have so many victims that they don’t spend time analysing the data they capture to look for compromising material that could push up the value of a ransom demand – they are more interested in the next victim.
If companies don’t pay up within a few weeks, affiliates may assume that the victim’s lack of urgency means the attack did not cause much damage. They may then be willing to accept smaller payments in return for an agreement not to publish the hacked data.
The trust paradox
Ransomware groups like LockBit deceive and steal, but somehow have to convince victims that they are trustworthy enough to restore their data in return for a ransom payment, so reputation matters.
Operation Chronos not only destroyed the infrastructure of LockBit, but also destroyed its reputation, Smeets’ research shows.
In February 2024, the international police operation seized LockBit’s servers, its administrative hub, its public-facing website and its internal communications.
“The NCA not only went after their technical infrastructure, but also tarnished their reputation by disclosing their lies,” he said.
For example, the group said it would ban the affiliates that hit a children’s hospital in Toronto – it didn’t, said Smeets. LockBit also promised to delete victims’ data from its servers if they agreed to pay, but often didn’t.
When criminal gangs attempted to revive LockBit in December 2024, its reputation had been irretrievably damaged.
Before Operation Chronos, between May 2022 and February 2024, 80 affiliates of LockBit 3.0 received ransom payments.
LockBit 4.0, an attempt to resurrect the ransomware operation after the police takedown, received only eight ransom payments between December 2024 and April 2025, according to Smeets’ research.
“LockBit is so tarnished that even if it can put up its infrastructure again, it’s a shadow of its former self,” he said.
Operation Chronos could form a blueprint for future ransomware takedowns by destroying not just the infrastructure but also the reputations of ransomware gangs.
Smeets hopes to conduct further research into the relationship between paying ransoms and negative press coverage to test his initial findings.
Starting today, eager customers of the US pizza restaurant chain Papa Johns living in one corner of southern North Carolina will have the opportunity to receive their food from the sky, thanks to a new collaboration with Alphabet’s drone company, Wing. But Papa Johns’ signature pizzas won’t be on offer. Instead, drone-loving North Carolinians will have to choose between three kinds of sandwiches, a newer product for the fast-food chain: Philly cheesesteak, chicken bacon ranch, or steak and mushroom varieties.
Drone deliveries are popping up in more communities across the US and the world. Questions about the long-term economics and regulatory picture around unmanned aerial vehicles persist, but Wing boasts partnerships with Walmart, Panera, and DoorDash and is delivering through the sky to customers in four metro areas: Atlanta, Charlotte, Dallas-Fort Worth, and Houston. (In 2019, Wing received the US Federal Aviation Administration’s first certificate allowing a drone delivery company to operate in the country.) Competing drone companies, including Zipline, Amazon Prime Air, and Flytrex, fly packages, medical supplies, and Chipotle burritos in select communities across countries like Ghana, Japan, and the US.
But until very recently, drone operators have struggled to fly full-size pizzas. For companies hoping to break into the food delivery space, this is unfortunate: 11 percent of the US population eats a slice on any given day, according to the US Department of Agriculture. In a fast-diversifying restaurant industry, getting them to customers is still big business. But the realities of physics, engineering, and the restaurant business conspire to make pizzas a challenge for drones.
Flying Pizzas
Traditionally, pizza is the experimental tech delivery of choice. The familiar and cheap cheese-sauce-bread combo has been loaded onto self-driving cars and autonomous sidewalk delivery vehicles and has been assembled by robots. It’s a fast and satisfying option, especially for busy families tight on time. And, in theory, it’s a great fit for automated drones, one of the faster delivery options: people love fresh, piping-hot pizza.
But transporting one by drone requires some extra work, says Wing CEO Adam Woodworth. “Pizza comes in a very different box, with a big, flat surface area,” he says. Pizza boxes are not naturally aerodynamic. Also, “you don’t want a pizza tilted.”
Wing’s relatively lightweight drones are engineered to carry three specific package sizes; right now, pizza boxes aren’t one of them. Woodworth says a new design is on the horizon. “I want to see pizzas coming at me from the sky,” he says.
Flytrex, an Israel-based drone delivery company, announced late last month that it had finally solved the problem. In collaboration with rival pizza chain Little Caesars, the company began delivering via drone up to two large pizzas (16 inches each), plus sodas and bread, in Wylie, Texas, a suburb of Dallas. The leap comes courtesy of a much bigger new drone, capable of carrying up to 8.8 pounds for four miles.
A major oil company is seeking a state tax break in Texas worth hundreds of millions of dollars to build a massive power plant. The energy won’t be going to residential customers, though. Instead, the gas plant will be used to power a data center whose eventual tenant could be Microsoft.
Chevron subsidiary Energy Forge One has filed an application with the State Comptroller’s board to obtain a tax abatement for a power plant it’s building in West Texas. In late January, the comptroller’s office made a recommendation to support the application’s approval—the first such approval under the program for a power plant intended solely for data center use.
In March, following news reports that Microsoft was looking into purchasing power from the Energy Forge project, Chevron said that it had entered into an “exclusivity agreement” with Microsoft and Engine 1, an investment fund involved in the project. In January, Microsoft pledged to be a “good neighbor” in communities where it is building data centers, including promising to pay a “full and fair share of local property taxes.”
The potential tax abatement for the project comes as big tech companies are battling rising public fury about data centers and electricity costs. It also comes as lawmakers start to cast a more critical eye on ballooning incentives for data centers, some of which have cost some states—including Texas—$1 billion or more each year.
Chevron spokesperson Paula Beasley told WIRED in an email that all tax incentives under consideration for the Energy Forge project “apply solely to the power generation facility” to “support new energy infrastructure, and do not extend to any future data center facilities that may be served.” Beasley also said that there is currently “no definitive agreement” with Microsoft for this power plant.
“Microsoft is in discussions with Chevron,” Rima Alaily, Microsoft’s corporate vice president and general counsel for infrastructure, said in a statement to WIRED. “No commercial terms have been finalized, and there is no definitive agreement at this time.”
Chevron is applying for a tax abatement for the project under Texas’ Jobs, Energy, Technology, and Innovation (JETI) Act. Passed in 2023, the program is intended to incentivize businesses to build large infrastructure projects in the state in exchange for guarantees to bring jobs and revenue. Accepted projects get a cap set on the amount of taxable property they can be charged through local school district taxes.
The Pecos-Barstow-Toyah school board approved the project’s application at a meeting in February. The state pays for the tax abatement, so the school district itself does not lose out on any money.
According to documents from the state, the Chevron project could net more than $227 million in savings for the company over a 10-year period, depending on the eventual size of the project and investment. The application says the plant will provide “over 25 permanent, full-time jobs,” though there’s no requirement to do so because it’s considered an electricity generation facility.
The planned gas plant won’t connect to the grid, instead providing “electricity for direct consumption by a data center,” according to its application. So-called behind-the-meter gas plants have become increasingly popular for data center developers facing yearslong waits to connect to the grid. According to data from nonprofit Global Energy Monitor, the US at the start of the year had nearly 100 gigawatts of gas-fired power in the development pipeline solely to power data centers, with several more massive gas projects announced since the data was published.
A WIRED analysis of fewer than a dozen power plants being constructed explicitly to serve data centers, including the Chevron project, found that these power plants are permitted to emit more greenhouse gases than many small- to medium-size countries. The Energy Forge plant alone could emit more than 11.5 million tons of CO2 equivalent annually—more than the country of Jamaica emitted in 2024. Beasley told WIRED that the plant “is being designed to comply with applicable environmental regulations, including all applicable federal and state air quality standards.”
Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.
A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.
The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.
CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.
Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
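The counting argument above can be checked in a few lines. This is a minimal sketch in plain Python (not CUDA; the function names are illustrative, not from any library): the naive version computes all 81 cells of the table, while the commutativity-aware version computes only the 45 cells where i ≤ j and mirrors the rest, producing an identical table.

```python
def full_table():
    """Naively compute every cell of the 9x9 table: 81 multiplications."""
    table, ops = {}, 0
    for i in range(1, 10):
        for j in range(1, 10):
            table[(i, j)] = i * j
            ops += 1
    return table, ops

def commutative_table():
    """Compute only cells with i <= j, then mirror: 45 multiplications."""
    table, ops = {}, 0
    for i in range(1, 10):
        for j in range(i, 10):
            table[(i, j)] = i * j       # one real multiply...
            table[(j, i)] = table[(i, j)]  # ...reused for the mirror cell
            ops += 1
    return table, ops

naive, naive_ops = full_table()
smart, smart_ops = commutative_table()
assert naive == smart          # same table either way
print(naive_ops, smart_ops)    # 81 45
```

The 45 comes from the 36 unordered pairs with i < j plus the 9 squares on the diagonal; on a real GPU, the analogous trick is done per thread block rather than per loop iteration.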
Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.
CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.
A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.
To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”