Conversations around diversity in the UK’s technology sector have evolved over the past decade, from a focus on increasing the number of women in tech roles to the importance of making the sector an inclusive place for anyone to work.
Unfortunately, the numbers do not reflect the effort made. The past 10 years have seen the number of women in the UK’s tech sector creep up from 16% in 2015 to 22% in 2025, and black women still only account for 0.6% of people in tech roles.
There are countless reasons for this, including a lack of inclusive culture in the sector, limited visibility of career role models, insufficient flexibility in the workplace and misconceptions about the type of people who work in tech roles, along with the influence of unconscious bias.
Furthermore, the recent Lovelace report found that between 40,000 and 60,000 women leave digital roles each year, whether moving to other tech roles or leaving tech for good, with a quarter citing a lack of opportunities to advance their career in their current role.
“We are seeing this happening across the industry – that’s what all the data indicates to us – and at every stage of a woman’s career. So we have to acknowledge that we are dealing with a systemic problem across the whole piece,” said Karen Blake, tech inclusion strategist and co-author of the Lovelace report.
The “becoming influential” theme of the 2025 Computer Weekly diversity in tech event, in partnership with Harvey Nash, highlighted a goal that many from underrepresented groups struggle to achieve due to industry challenges.
To tackle this, audience members comprising tech decision-makers and experts in the diversity, equity and inclusion (DEI) field shared advice on how tech career pathways can be changed to allow more underrepresented groups to move into the industry and make their way to becoming influential.
Focus on planting the seeds early
Start early with girls, and deliver a curriculum they feel part of
Children’s first exposure to various careers often happens at school, yet many feel technology is not accessible to them.
Several experts at the Computer Weekly and Harvey Nash event claimed education reform could be a key factor in ensuring young women and minorities are more likely to view a tech career as a path they want to follow.
The first piece of advice given was to ensure the curriculum is built in a way that makes everyone feel they can participate – for example, by including women and people of colour among the figures from computing history studied, and by emphasising that not all tech roles require coding skills.
In many cases, girls and women develop an interest in technology because it solves real-world problems, so it would also help if lessons included case studies of technology being applied across a range of roles and sectors. Lessons could also tap into children's natural curiosity with fun, vivid examples of what the day-to-day work of different tech professionals looks like.
A lack of knowledge about what tech roles involve is known to lead to misconceptions among young people about tech jobs and the type of person who pursues them, so experts in attendance also conveyed how important it is to make sure children know what pathways there are into technology.
As one audience member pointed out, there is currently no single dedicated pathway into tech. Unlike well-established professions such as medicine, law or accountancy, there are many different ways to become a tech professional, partly because the industry evolves so quickly.
Helping young people understand that there are many different routes into the sector and roles within it, and what those might look like, including the fact these roles may not necessarily be technical or involve coding, may contribute to more young people from a variety of backgrounds considering a tech career.
Having the right skills to engage in these roles, however, can be dependent on what school a child attends and where in the UK they come from, so curriculum reform to equip children with at least basic digital skills, regardless of where they are from, would be a welcome step forward.
Alongside this, experts recommended developing skills frameworks within businesses to map career pathways for employees, regardless of their background. Such an approach could help hiring managers identify existing skills, select candidates effectively and prioritise internal training to address skill gaps instead of hiring externally.
With such a large number of women claiming to have “fallen into” tech, experts also suggested that dedicated pathways could encourage underrepresented groups to intentionally pursue and thrive in tech careers, rather than stumbling into roles and leaving due to a lack of inclusion or advancement opportunities.
Individual efforts
Diversity is being invited to the party; inclusion is being asked to dance
The way the technology sector has worked for years is that hiring managers – predominantly white men – hire people they already know or those they identify with, perpetuating unconscious bias in the sector.
One way in which underrepresented individuals can address this themselves, according to experts in the audience at Computer Weekly’s event, is by building a strong network and utilising it. This involves being aware of the people who are willing to help you with your career and who will advocate for you in circles you’re not already present in.
These sponsors actively advocate for individuals, putting them forward for opportunities and using their influence to develop the careers of others.
Mentorship and coaching were also highlighted as ways to provide individuals with guidance to help them navigate the sector and make career choices.
Experts at the event claimed that while allyship for underrepresented groups is important, it means nothing without active participation to create an inclusive environment within a team and help others to have influence and use it.
Individuals can be a huge support in advancing their own careers and the careers of others if they actively use what influence they have to give advice and build up other talented tech workers.
Do not wait for change to happen – be the change you want to see.
Who has influence?
Influential does not mean senior
Experts at the annual diversity event pointed out that influence is present at all stages of someone’s career, from student, to junior team member, to manager.
They said there is a need for inclusive tech leadership, whereby leaders use influence to grow and promote those in their teams, but also to make teams a space where members are encouraged to share what they want from their roles and from the firm, and where they want their career to go.
Firms can encourage this behaviour by making it part of leaders’ job descriptions to champion diverse teams, tying it into their performance reviews.
As discussed, influence is not reserved for those who are already in senior positions – people at every level can have a voice, and it’s likely there is always at least one person looking to you for guidance, whether you’re aware of it or not.
But whether or not someone is heard depends on the culture of an organisation. If employees know they will be listened to, regardless of their level of seniority, they’ll have the confidence to use their influence. If they don’t, they’ll probably find somewhere else to work where their voice will be heard.
To reach a point where everyone in the sector feels they have influence over their own career and the progression of diversity in the tech sector, those who are already in such a position need to use their platform to advocate for others, helping to build a technology industry that people want to work in and stay in.
A major oil company is seeking a state tax break in Texas worth hundreds of millions of dollars to build a massive power plant. The energy won’t be going to residential customers, though. Instead, the gas plant will be used to power a data center whose eventual tenant could be Microsoft.
Chevron subsidiary Energy Forge One has filed an application with the state comptroller's office to obtain a tax abatement for a power plant it's building in West Texas. In late January, the comptroller's office recommended that the application be approved—the first such approval under the program for a power plant intended solely for data center use.
In March, following news reports that Microsoft was looking into purchasing power from the Energy Forge project, Chevron said that it had entered into an “exclusivity agreement” with Microsoft and Engine 1, an investment fund involved in the project. In January, Microsoft pledged to be a “good neighbor” in communities where it is building data centers, including promising to pay a “full and fair share of local property taxes.”
The potential tax abatement for the project comes as big tech companies are battling rising public fury about data centers and electricity costs. It also comes as lawmakers start to cast a more critical eye on ballooning incentives for data centers, which have cost some states—including Texas—$1 billion or more each year.
Chevron spokesperson Paula Beasley told WIRED in an email that all tax incentives under consideration for the Energy Forge project “apply solely to the power generation facility” to “support new energy infrastructure, and do not extend to any future data center facilities that may be served.” Beasley also said that there is currently “no definitive agreement” with Microsoft for this power plant.
“Microsoft is in discussions with Chevron,” Rima Alaily, Microsoft’s corporate vice president and general counsel for infrastructure, said in a statement to WIRED. “No commercial terms have been finalized, and there is no definitive agreement at this time.”
Chevron is applying for a tax abatement for the project under Texas’ Jobs, Energy, Technology, and Innovation (JETI) Act. Passed in 2023, the program is intended to incentivize businesses to build large infrastructure projects in the state in exchange for guarantees to bring jobs and revenue. Accepted projects get a cap set on the amount of taxable property they can be charged through local school district taxes.
The Pecos-Barstow-Toyah school board approved the project’s application at a meeting in February. The state pays for the tax abatement, so the school district itself does not lose out on any money.
According to documents from the state, the Chevron project could net more than $227 million in savings for the company over a 10-year period, depending on the eventual size of the project and investment. The application says the plant will provide “over 25 permanent, full-time jobs,” though there’s no requirement to do so because it’s considered an electricity generation facility.
The planned gas plant won’t connect to the grid, instead providing “electricity for direct consumption by a data center,” according to its application. So-called behind-the-meter gas plants have become increasingly popular for data center developers facing yearslong waits to connect to the grid. According to data from nonprofit Global Energy Monitor, the US at the start of the year had nearly 100 gigawatts of gas-fired power in the development pipeline solely to power data centers, with several more massive gas projects announced since the data was published.
A WIRED analysis of fewer than a dozen power plants being constructed explicitly to serve data centers, including the Chevron project, found that these power plants are permitted to emit more greenhouse gases than many small- to medium-size countries. The Energy Forge plant alone could emit more than 11.5 million tons of CO2 equivalent annually—more than the country of Jamaica emitted in 2024. Beasley told WIRED that the plant “is being designed to comply with applicable environmental regulations, including all applicable federal and state air quality standards.”
Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.
A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.
The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.
CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.
Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
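That splitting-and-mirroring idea can be sketched in a few lines of Python. This is only an illustration of the scheduling logic described above, not how CUDA actually dispatches work; threads stand in for GPU cores, and all names are mine:

```python
from concurrent.futures import ThreadPoolExecutor

def column(i):
    """Compute one column of the 9x9 table: i*1 through i*9."""
    return [(i, j, i * j) for j in range(1, 10)]

def table_parallel():
    # Nine workers, one column each -- the ninefold split described above.
    with ThreadPoolExecutor(max_workers=9) as pool:
        columns = pool.map(column, range(1, 10))
    return [cell for col in columns for cell in col]

def table_commutative():
    # Exploit i*j == j*i: actually multiply only when i <= j (45 cases),
    # then fill the mirrored cell by copying -- 81 cells, 45 multiplications.
    cells, multiplications = {}, 0
    for i in range(1, 10):
        for j in range(i, 10):
            cells[(i, j)] = i * j
            cells[(j, i)] = cells[(i, j)]
            multiplications += 1
    return cells, multiplications
```

Running `table_parallel()` yields all 81 cells from nine concurrent column jobs, while `table_commutative()` produces the same 81 cells using only 45 actual multiplications.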
Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.
CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.
A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.
To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”
After three people died on a cruise ship struck by a hantavirus outbreak, authorities are actively tracking down 29 people who had left the ship, trying to trace the spread of the virus. It’s a long, arduous, global process to find and notify people who might be at risk of infection.
Hey, wasn’t there supposed to be an app for that?
Contact-tracing apps were a global effort starting in 2020, during the Covid-19 pandemic. Enabled by platform companies Apple and Google, they were designed to use Bluetooth connections to detect when people had come into contact with someone who had, or would later, test positive for Covid, and to report as much. The apps did little to halt the spread of the pandemic, though they did make tracking the virus somewhat more effective. The same approach wouldn’t work well for the hantavirus problem.
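The matching idea behind those apps can be sketched in a few lines. This is a loose illustration with invented names; the real Apple/Google exposure-notification system uses rotating cryptographic identifiers and factors in signal strength and contact duration:

```python
import secrets

class Phone:
    """Toy model of an exposure-notification app: broadcast random
    tokens over Bluetooth and remember every token heard nearby."""
    def __init__(self):
        self.sent = []       # tokens this phone has broadcast
        self.heard = set()   # tokens overheard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(8)  # rotating random identifier
        self.sent.append(token)
        return token

    def receive(self, token):
        self.heard.add(token)

def exposed(phone, published_positive_tokens):
    # After a positive test, the patient's tokens are published;
    # each phone checks its own local log for a match, so no
    # location or identity ever leaves the device.
    return bool(phone.heard & set(published_positive_tokens))

# Two phones pass each other; a third is never nearby.
alice, bob, carol = Phone(), Phone(), Phone()
bob.receive(alice.broadcast())
# Alice later tests positive and uploads her tokens.
print(exposed(bob, alice.sent))    # True: Bob once heard one of Alice's tokens
print(exposed(carol, alice.sent))  # False: Carol never did
```

The privacy appeal is that matching happens on the handset; the weakness, as the rest of this piece explains, is that "heard a token" is a crude proxy for an actual infection risk.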
“There is no use of apps for this hantavirus outbreak,” Emily Gurley, an epidemiologist at Johns Hopkins University, wrote in an email response to WIRED. “The number of cases are small, and it’s important to trace all contacts exactly to stop transmission.”
With a smaller outbreak like this, officials have to start at the source (an infected individual) and then go person by person, confirming where they went and who they might have come into contact with. Data collected by apps from a broad swath of devices would be nowhere near accurate enough to show where the virus might have hitchhiked to next.
Contact tracing on a wider scale, during, say, a global pandemic, is less about tracking individual infections and more about understanding which parts of the population might be affected, giving people the opportunity to self-quarantine after exposure. But that depends on how people choose to respond, and on how the technology is utilized by public emergency systems. During the Covid pandemic, contact tracing via apps tended to work better in more carefully managed European countries, but it did not slow the spread in the US.
Giving apps access to that kind of proximity information has also raised all sorts of privacy concerns, given that the technology requires always-on access to work properly. Contact tracing also struggled with accuracy, in some cases producing false negatives or positives that muddied the real picture of the virus’s spread.
Especially in a case like this hantavirus outbreak, where every person on that cruise ship can theoretically be directly tracked and contacted, it’s better to do that process the hard way.
“During small but highly fatal outbreaks, more precision is required,” Gurley wrote.