IT Sustainability Think Tank: How IT sustainability entered the mandate era during 2025 | Computer Weekly


As the calendar turns the final pages on 2025, the information technology sector stands at a critical juncture regarding its environmental commitments. This year was not marked by technological breakthroughs solving decarbonisation, but by the decisive maturation of sustainability from a strategic differentiator into an operational and regulatory imperative.

This transition involved a painful reckoning with data complexity, supply chain reality, and the sheer energy appetite of modern computing, driven primarily by the rapid proliferation of artificial intelligence (AI).

We entered 2025 with goals framed by aspiration; we exit under the binding mandate of actuality. The central shift is profound: IT sustainability is no longer a parallel environmental, social and governance (ESG) initiative.

It has become deeply intertwined with core business continuity, geopolitical supply chain risk, and mandatory financial disclosure. While this shift signals progress, momentum is driven more by necessity and the threat of liability than by shared ethical commitment.

The conversation evolves from aspirational to accountable

The most profound shift over the past year has been the forced elevation of the sustainability dialogue directly onto the executive committee’s core risk portfolio. This movement is not voluntary; it is driven by impending regulation and the sobering realisation that environmental failure now carries direct, auditable financial penalties and board-level liability.

Only a year ago, discussions circled around unquantifiable reputational benefits. Today, the lexicon is dominated by acronyms signalling mandatory compliance: CSDDD, CSRD, and the tightening of the SBTi Net-Zero Standard V2. These frameworks compel executives to move past narratives and confront the granular, auditable data attached to every asset, vendor relationship, and unit of cloud usage.

For the CIO, this manifests in two critical areas. First, energy efficiency is decisively reframed as a cost of doing business, crucial for operational expenditure control amid volatile global energy markets. Second, the sudden energy demand of generative AI has triggered a rapid, internal debate on responsible compute architecture.

Leaders are increasingly compelled to justify AI investment not solely on traditional ROI, but via a nascent “return on compute” model that necessarily integrates and accounts for carbon expenditure. This makes the environmental cost of IT an integrated input in the total cost of ownership calculation, rather than a polite footnote.

Despite this high-level engagement, progress remains complicated. The IT function often lacks the authority to enforce change across complex internal silos, and the necessary budget and risk tolerance for truly transformative shifts remain stubbornly limited.

Genuine progress: where the green shoots are taking hold

Despite systemic inertia, 2025 delivered solid, tangible progress in certain operational domains, offering a partial blueprint for future net-zero efforts. Our confidence is bolstered by three examples, though it is crucial to understand that wide-scale adoption across the average enterprise remains nascent and often confined to pilot programmes:

1. Decoupling cloud growth from carbon: Hyperscale cloud providers have largely won the battle for renewable energy procurement. The next frontier — optimising physical operations — has seen enterprise engagement. We saw accelerated adoption of advanced liquid cooling technologies (still primarily concentrated in hyperscale environments, but critical for future AI scaling). Enterprises optimising workloads for low-carbon regions and utilising serverless architectures successfully decoupled rapid cloud expansion from a proportional rise in emissions. This success belongs predominantly to the hyperscalers, and enterprise optimisation remains an ongoing campaign.

2. Maturing the circular IT model (As-a-Service): The year 2025 saw the Managed Device-as-a-Service (MDaaS) model transition into a critical environmental enabler. By outsourcing the entire device lifecycle, enterprises commit practically to refurbishment and robust reverse logistics. Successful enterprises leverage these contracts to guarantee asset re-entry into the value chain via certified refurbishment, drastically reducing e-waste. The caveats are two-fold: MDaaS adoption is far from universal, and the verification of these circular chains still lacks necessary, robust third-party scrutiny.

3. The nascent rise of green software engineering: The formal emergence of green software engineering (GSE) is perhaps the most encouraging development. For too long, the environmental focus was only on hardware. This year, organisations began measuring code energy consumption — optimising algorithms and refactoring applications to reduce reliance on resource-intensive computing.

An important development this year was the publication of the W3C Web Sustainability Guidelines (WSG) Draft Note. Developed through a global, collaborative effort — in which I was pleased to participate — the guidelines offer a structured and internationally relevant set of best practices for reducing the environmental footprint of web products and services. While the scope focuses specifically on the web rather than the full breadth of enterprise IT, the Draft Note nonetheless represents a significant step forward for the industry.
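To make the GSE idea concrete, here is a minimal sketch of the kind of comparison it encourages: two implementations of the same task, with runtime converted into a rough carbon estimate. The power and grid-intensity figures are invented for illustration, not measured values, and a wall-clock proxy is far cruder than the hardware-level power measurement real programmes rely on.

```python
import time

# Illustrative assumptions only: real CPU power draw and grid carbon
# intensity vary widely by hardware, load, region, and time of day.
ASSUMED_CPU_WATTS = 65.0        # hypothetical average package power
ASSUMED_GRID_G_PER_KWH = 400.0  # hypothetical grid intensity (gCO2e/kWh)

def rough_co2_grams(func, *args):
    """Crude proxy: wall-clock seconds x assumed watts x grid intensity."""
    start = time.perf_counter()
    func(*args)
    elapsed = time.perf_counter() - start
    kwh = ASSUMED_CPU_WATTS * elapsed / 3_600_000  # watt-seconds to kWh
    return kwh * ASSUMED_GRID_G_PER_KWH

def quadratic_dupes(xs):
    # O(n^2): rescans a growing prefix of the list for every element
    return [x for i, x in enumerate(xs) if x in xs[:i]]

def linear_dupes(xs):
    # O(n): a single set-membership test per element
    seen, dupes = set(), []
    for x in xs:
        if x in seen:
            dupes.append(x)
        else:
            seen.add(x)
    return dupes

data = list(range(2000)) * 2
print(rough_co2_grams(quadratic_dupes, data) > rough_co2_grams(linear_dupes, data))
```

Both functions find the same duplicates; the refactored version simply burns far fewer cycles doing it, which is precisely the kind of saving GSE aims to make visible and routine.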

The persistent gaps undermining net-zero momentum

For all the genuine acceleration, 2025 was equally defined by two persistent, critical gaps that threaten to derail net-zero pathways and demand urgent attention.

1. The Scope 3 emissions chasm: The most pervasive and frustrating gap remains the measurement and meaningful reduction of Scope 3 emissions, particularly from purchased goods and downstream asset end-of-life.

Despite regulatory urgency, the vast majority of enterprises still rely on highly aggregated, industry-average supplier data (spend-based or activity-based), which is neither auditable nor sufficient for mandatory disclosure. The necessary mechanism — detailed, granular product carbon footprints (PCF) provided by every vendor — is simply not available at scale or with sufficient fidelity.

The problem persists because it requires collaboration across complex, often proprietary global supply chains. Suppliers are reluctant to disclose granular data, citing competitive concerns, while buyers lack the leverage to mandate it. The result is a ‘Scope 3 plateau’: targets are set, but underlying emissions remain stubbornly high, creating a significant credibility risk. We are still largely measuring a reflection, not the reality.
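The gap between spend-based estimation and a genuine PCF can be sketched in a few lines. Every factor below is invented for illustration; real sector-average intensities come from published conversion datasets, and real PCFs from the supplier itself.

```python
# All factors here are hypothetical, not real published emission factors.
SECTOR_AVG_KG_PER_USD = {"laptops": 0.35}  # assumed spend-based factor

def spend_based_kg(category, usd_spent):
    # Coarse: every dollar in a category carries the sector-average footprint
    return usd_spent * SECTOR_AVG_KG_PER_USD[category]

def pcf_based_kg(units, kg_per_unit):
    # Granular: a supplier-disclosed product carbon footprint per device
    return units * kg_per_unit

# The same purchase of 1,000 laptops at $1,200 each yields two very
# different answers depending on the method used.
spend_view = spend_based_kg("laptops", 1_000 * 1_200)
pcf_view = pcf_based_kg(1_000, 250)  # assumed 250 kgCO2e per device
print(spend_view, pcf_view)  # 420000.0 250000
```

The spend-based figure moves whenever prices move, not when emissions do, which is why it is neither auditable nor a usable lever for reduction.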

2. The generative AI energy debt: While AI is a powerful tool for sustainability optimisation, the immediate, unmanaged energy demand of Large Language Models (LLMs) represents a profound and growing gap. The speed of AI adoption, combined with the inherently expensive High-Performance Computing (HPC) required, creates an “energy debt” that offsets hard-won gains elsewhere.

The challenge is governance. Enterprises are deploying AI solutions without robust, mandatory policies on model selection, inference efficiency, or resource decommissioning. Crucially, most organisations remain focused on achieving initial ROI metrics, relegating energy efficiency to an optional performance tweak. Failure to enforce a framework for ‘responsible compute’ risks the transformative power of AI being negated by its own expanding environmental impact. This is the single greatest risk to the IT sector’s net-zero journey.

Strategic priorities for 2026 and beyond

As the IT Sustainability Think Tank looks towards 2026, the focus must shift from identifying the problem to systematically closing the remaining gaps with institutional discipline. We must treat these priorities as non-negotiable elements of future business resilience:

  1. Mandate data granularity for Scope 3: Leverage procurement influence to force supplier compliance on verifiable product carbon footprints (PCFs). The mandate must be non-negotiable, enforced with clear vendor scorecards and contractual requirements.
  2. Institutionalise green software engineering: Invest heavily in training and tooling to embed energy efficiency into the software development lifecycle (SDLC). Software architecture must be treated with the same environmental scrutiny as data centre cooling, making efficiency an audited requirement.
  3. Govern the AI energy cost: Implement a Responsible AI framework that includes mandatory energy consumption metrics and resource allocation policies for all Generative AI deployments.
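A responsible-compute policy of the kind the third priority describes can be enforced mechanically rather than aspirationally. The sketch below is a hypothetical governance gate; the per-request energy figure and the budget are invented for illustration, and in practice the estimate would come from measured inference telemetry.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    requests_per_day: int
    wh_per_request: float  # assumed per-inference energy estimate

    def daily_kwh(self) -> float:
        return self.requests_per_day * self.wh_per_request / 1000

def within_budget(d: Deployment, daily_kwh_budget: float) -> bool:
    # Governance gate: deployments over budget need explicit sign-off
    return d.daily_kwh() <= daily_kwh_budget

bot = Deployment("support-bot", requests_per_day=50_000, wh_per_request=0.3)
print(bot.daily_kwh())         # 15.0
print(within_budget(bot, 10))  # False
```

The point is not the arithmetic but the gate: an over-budget deployment becomes a decision someone must own, rather than an invisible cost absorbed by the estate.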

The year 2025 was when IT sustainability moved into the board’s audit file. Next year must be the year we finally gather the granular data, enforce the necessary discipline, and manage the rapidly growing energy appetite of our own invention. The time for aspirational statements is definitively over; the urgent task now is to move these nascent efforts into full, verifiable accountability.



CUDA Proves Nvidia Is a Software Company



Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.

A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.

The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.

CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.

Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
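Both tricks from the example above can be sketched in a few lines, with plain Python threads standing in for GPU cores purely as an illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def column(i):
    # One "core" fills one column of the 9x9 table: i*1 .. i*9
    return [i * j for j in range(1, 10)]

def full_table():
    # Nine workers, one column each: the ninefold split described above
    with ThreadPoolExecutor(max_workers=9) as pool:
        return list(pool.map(column, range(1, 10)))

def unique_products():
    # Exploit commutativity (i*j == j*i): only compute pairs with i <= j
    return {(i, j): i * j for i in range(1, 10) for j in range(i, 10)}

assert sum(len(col) for col in full_table()) == 81
assert len(unique_products()) == 45  # 81 operations cut to 45
```

Real CUDA kernels apply the same two ideas, splitting work across thousands of cores and pruning redundant computation, just at a vastly finer grain.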

Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.

CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.

A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.

To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”




Could Contact-Tracing Apps Help With the Hantavirus? Not Really



After three people died on a cruise ship struck by a hantavirus, authorities are actively tracking down 29 people who left the ship. They’re trying to trace the spread of the virus. It’s a long, arduous, global process to find and notify people who might be at risk of infection.

Hey, wasn’t there supposed to be an app for that?

Contact-tracing apps were a global effort starting in 2020 during the Covid-19 pandemic. Enabled by platform holders Apple and Google, contact tracing was designed to use Bluetooth connections to detect when people had come into contact with someone who had, or would later, test positive for Covid, and to notify them. It didn’t do much to slow the spread of the pandemic, though it did make tracking the virus somewhat more effective. The same process wouldn’t work well for the hantavirus problem.

“There is no use of apps for this hantavirus outbreak,” Emily Gurley, an epidemiologist at Johns Hopkins University, wrote in an email response to WIRED. “The number of cases are small, and it’s important to trace all contacts exactly to stop transmission.”

On a smaller scale of infection like this, officials have to start at the source (an infected individual), then go person-by-person, confirming where they went and who they might have come into contact with. Data collected by apps from a broad swath of devices would not be anywhere close to accurate enough to give a good idea of where the virus might have hitchhiked to next.

Contact tracing on a wider scale, like, say, a global pandemic, is less about tracking the individual infections and more about understanding what parts of the population might be affected, giving people the opportunity to self-quarantine after exposure. But that depends on how people choose to respond, and how the technology is utilized by public emergency systems. During the Covid pandemic, contact tracing via apps tended to work better in more tightly coordinated European countries, but it did not slow the spread in the US.

Giving apps access to that kind of proximity information has also raised all sorts of privacy concerns, given that the technology requires always-on access to work properly. Contact tracing also struggled with accuracy, in some cases producing false negatives or positives that muddied real information about the spread of the virus.

Especially in the case of something like the hantavirus, where every person on that cruise ship can theoretically be directly tracked and contacted, it’s better to do that process the hard way.

“During small but highly fatal outbreaks, more precision is required,” Gurley wrote.




‘Reservation Hijacking’ Scams Target Travelers. Here’s How to Stay Safe



There’s another type of digital scam to be aware of, as reported by the BBC. It’s called “reservation hijacking.”

The name gives you a clue as to how it works. Essentially, scammers use details about a booking you’ve placed (perhaps with a hotel or airline) to trick you into sending money somewhere you shouldn’t.

While this type of scam isn’t brand new, a recent data breach at Booking.com has raised the risk of people being caught out. With data about you and your reservation, a far more convincing setup can be put in place—why wouldn’t you believe that someone purporting to be an employee from a spa you’ve got a reservation with is telling the truth about who they are, especially if they know the dates of your trip, your phone number, and your email address?

According to Booking.com, no financial information was exposed in the April 2026 hack. However, names, email addresses, phone numbers, and booking details have been leaked. The travel portal says affected customers have been emailed about the heightened risk of scams, so that’s the first thing to check for when it comes to staying safe.

Minimizing the risk of getting scammed by a reservation hijack involves many of the same security precautions you may already be following, and just being aware that this is a way you might be targeted will make a difference.

How Reservation Hijacks Work

Scammers can get hold of your booking details.

Courtesy of David Nield

We’ve already outlined the basics of a reservation hijack, but it can take several forms. As with other types of scams, it tends to evolve over time. The basic premise is that someone will get in touch with you claiming to be from a place you have a reservation with, whether it’s a car rental company or a hotel.

The scammers will try to pull together as much information as they can on you and your booking. Sometimes they’ll target employees of the place you’ve got the reservation with in order to get access to their systems, and other times they may take advantage of a wider data breach (as with the recent Booking.com hack).

They might also get information through other means. Maybe they’ve somehow got access to your email, or to some of your social media posts (where you’ve shared your next vacation destination and a countdown of how many days are left to go). Don’t be caught out if you find yourself speaking to someone who knows a lot about your travel plans.


