Tech
Just One Lonely Product Still Uses Apple’s Lightning Connector—Can You Guess Which One?
While the world focuses on Apple’s latest slew of new products, we are taking a moment for the last bastion of Apple’s proprietary past—the one remaining product with a Lightning connector that, somehow, Apple still sells.
We have previously lamented Apple’s drawn-out transition to USB-C. It’s been far from quick and far from straightforward, leaving a mess of dongles and confusion in its wake.
It was last year, at its September 2024 “Glowtime” event, that Apple made the move to change that, transitioning all of its newest products to USB-C. The following month, it—somewhat quietly—moved the remaining current-generation accessories, including the Magic Keyboard, Magic Mouse, and Magic Trackpad, over to USB-C.
By February this year, it had discontinued the remaining Lightning-equipped iPhones—the iPhone SE (3rd Gen) and the iPhone 14—following the EU's mandate that devices sold in the bloc move to a nonproprietary connector by 2025.
But one solitary device is still hanging on as the final Lightning product that Apple sells. That product is the Apple Pencil (1st Gen)—a product that was released 10 years ago, in 2015.
The Apple Pencil strategy has been pretty complicated, with Apple selling no fewer than four different models. Because the newer Pencils aren't backward compatible, the 1st Gen model has stayed in the lineup to serve the older Lightning-equipped iPads—while also working with the 10th- and 11th-Gen USB-C iPads, for anyone who upgraded.
Apple generally supports its hardware with OS updates for five to seven years. Even though it no longer sells these products, Apple has confirmed that iPadOS 26 will be compatible with the iPad Air (3rd Gen) and iPad Mini (5th Gen), both released in 2019, and the iPad (8th and 9th Gen), released in 2020 and 2021, respectively. All of these support only the Apple Pencil (1st Gen), and none of the newer models, making it a hard product for Apple to get rid of—despite its desperately aging connector.
Based on that five-to-seven-year timeline, Lightning could still have as many as three years left in it, unless Apple makes the call to update the original Pencil to USB-C and finally retires Lightning for good.
Tech
Chevron Wants a School District Tax Break for a Data Center Power Plant in Texas
A major oil company is seeking a state tax break in Texas worth hundreds of millions of dollars to build a massive power plant. The energy won’t be going to residential customers, though. Instead, the gas plant will be used to power a data center whose eventual tenant could be Microsoft.
Chevron subsidiary Energy Forge One has filed an application with the State Comptroller’s board to obtain a tax abatement for a power plant it’s building in West Texas. In late January, the comptroller’s office made a recommendation to support the application’s approval—the first such approval under the program for a power plant intended solely for data center use.
In March, following news reports that Microsoft was looking into purchasing power from the Energy Forge project, Chevron said that it had entered into an “exclusivity agreement” with Microsoft and Engine 1, an investment fund involved in the project. In January, Microsoft pledged to be a “good neighbor” in communities where it is building data centers, including promising to pay a “full and fair share of local property taxes.”
The potential tax abatement for the project comes as big tech companies battle rising public fury over data centers and electricity costs. It also comes as lawmakers start to cast a more critical eye on ballooning incentives for data centers, which in some states—including Texas—have cost $1 billion or more each year.
Chevron spokesperson Paula Beasley told WIRED in an email that all tax incentives under consideration for the Energy Forge project “apply solely to the power generation facility” to “support new energy infrastructure, and do not extend to any future data center facilities that may be served.” Beasley also said that there is currently “no definitive agreement” with Microsoft for this power plant.
“Microsoft is in discussions with Chevron,” Rima Alaily, Microsoft’s corporate vice president and general counsel for infrastructure, said in a statement to WIRED. “No commercial terms have been finalized, and there is no definitive agreement at this time.”
Chevron is applying for a tax abatement for the project under Texas’ Jobs, Energy, Technology, and Innovation (JETI) Act. Passed in 2023, the program is intended to incentivize businesses to build large infrastructure projects in the state in exchange for guarantees to bring jobs and revenue. Accepted projects get a cap set on the amount of taxable property they can be charged through local school district taxes.
The Pecos-Barstow-Toyah school board approved the project’s application at a meeting in February. The state pays for the tax abatement, so the school district itself does not lose out on any money.
According to documents from the state, the Chevron project could net more than $227 million in savings for the company over a 10-year period, depending on the eventual size of the project and investment. The application says the plant will provide “over 25 permanent, full-time jobs,” though there’s no requirement to do so because it’s considered an electricity generation facility.
The planned gas plant won’t connect to the grid, instead providing “electricity for direct consumption by a data center,” according to its application. So-called behind-the-meter gas plants have become increasingly popular for data center developers facing yearslong waits to connect to the grid. According to data from nonprofit Global Energy Monitor, the US at the start of the year had nearly 100 gigawatts of gas-fired power in the development pipeline solely to power data centers, with several more massive gas projects announced since the data was published.
A WIRED analysis of fewer than a dozen power plants being constructed to explicitly serve data centers, including the Chevron project, found that these power plants are permitted to emit more greenhouse gases than many small- to medium-size countries. The Energy Forge plant alone could emit more than 11.5 million tons of CO2 equivalent annually—more than the country of Jamaica emitted in 2024. Beasley told WIRED that the plant "is being designed to comply with applicable environmental regulations, including all applicable federal and state air quality standards."
Tech
CUDA Proves Nvidia Is a Software Company
Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.
A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.
The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.
CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.
Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
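That back-of-the-envelope math is easy to check. Here's a minimal sketch in plain Python, a serial stand-in that only counts multiplications rather than actually running them on parallel cores:

```python
# Naive fill: compute every cell of the 9x9 table -- 81 multiplications.
table = {}
ops_naive = 0
for a in range(1, 10):
    for b in range(1, 10):
        table[(a, b)] = a * b
        ops_naive += 1

# Commutativity-aware fill: since a*b == b*a, compute each unordered
# pair once and mirror the result -- 45 multiplications instead of 81.
table_opt = {}
ops_opt = 0
for a in range(1, 10):
    for b in range(a, 10):  # only pairs with b >= a
        table_opt[(a, b)] = a * b
        table_opt[(b, a)] = table_opt[(a, b)]  # mirrored, no extra multiply
        ops_opt += 1

print(ops_naive, ops_opt)  # 81 45
assert table_opt == table  # identical table, nearly half the multiplications
```

On a real GPU, the column-per-core split would mean each iteration of the outer loop runs on its own core simultaneously; the sketch above only confirms the 81-versus-45 arithmetic.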
Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.
CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.
A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.
To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”
Tech
Could Contact-Tracing Apps Help With the Hantavirus? Not Really
After three people died on a cruise ship struck by a hantavirus, authorities are actively tracking down 29 people who had left the ship. They’re trying to trace the spread of the virus. It’s a long, arduous, global process to find and notify people who might be at risk of infection.
Hey, wasn’t there supposed to be an app for that?
Contact-tracing apps were a global effort starting in 2020, during the Covid-19 pandemic. Enabled by platform makers Apple and Google, they used Bluetooth to detect when people had come into contact with someone who had tested, or would later test, positive for Covid, and to notify them. The apps did little to stop the spread of the pandemic, though they at least made tracking the virus more effective. The same approach wouldn't work well for the hantavirus problem.
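The decentralized scheme those apps relied on can be sketched in a few lines of Python. This is a heavy simplification of the Apple/Google design, with illustrative names throughout: phones broadcast rotating random IDs, log the IDs they hear, and later check their own log against a published list of IDs from confirmed cases.

```python
import secrets

def new_beacon_id() -> str:
    # Each phone periodically broadcasts a fresh random identifier,
    # so logs of overheard beacons can't be linked directly to a person.
    return secrets.token_hex(8)

# Two phones near each other each hear the other's beacon.
id_a, id_b = new_beacon_id(), new_beacon_id()
heard_by_a = {id_b}  # phone A's local log of beacons it heard
heard_by_b = {id_a}  # phone B's local log

# Phone B's owner tests positive; B's broadcast IDs are published.
published_positive_ids = {id_b}

# Every phone checks its own log against the published list, on-device.
a_exposed = bool(heard_by_a & published_positive_ids)
b_exposed = bool(heard_by_b & published_positive_ids)
print(a_exposed, b_exposed)  # True False
```

The key design choice is that matching happens on each phone: no central authority ever sees who was near whom, which is also why health officials get population-level signals rather than the named contact lists that a small outbreak demands.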
“There is no use of apps for this hantavirus outbreak,” Emily Gurley, an epidemiologist at Johns Hopkins University, wrote in an email response to WIRED. “The number of cases are small, and it’s important to trace all contacts exactly to stop transmission.”
On a smaller scale of infection like this, officials have to start at the source (an infected individual), then go person by person, confirming where they went and who they might have come into contact with. Data collected by apps from a broad swath of devices would be nowhere near accurate enough to give a good idea of where the virus might have hitchhiked to next.
Contact tracing on a wider scale, like a global pandemic, is less about tracking individual infections and more about understanding which parts of the population might be affected, giving people the chance to self-quarantine after exposure. But that depends on how people choose to respond, and on how public health systems use the technology. During the Covid pandemic, contact tracing via apps tended to work better in European countries with tighter public health management, but it did not slow the spread in the US.
Giving devices access to that kind of proximity information has also raised all sorts of privacy concerns, since the technology requires always-on access to work properly. The apps also struggled with accuracy, in some cases producing false negatives or positives that muddied the real picture of the virus's spread.
Especially in the case of something like hantavirus, where every person on that cruise ship can theoretically be directly tracked and contacted, it's better to do that process the hard way.
“During small but highly fatal outbreaks, more precision is required,” Gurley wrote.