Top 10 technology ethics stories of 2025 | Computer Weekly


Throughout 2025, Computer Weekly’s technology and ethics coverage highlighted the human and socio-technical impacts of data-driven systems, particularly artificial intelligence (AI).

This included a number of reports on how the Home Office’s electronic visa (eVisa) system, which has been plagued by data quality and integrity issues from the outset, is affecting migrants in the UK; the progress of both domestic and international efforts to regulate AI; and debates around the ethics of autonomous weaponry.

A number of stories also covered the role major technology companies have played in Israel’s genocide against Palestinians, a role that includes providing key digital infrastructure and tools that have enabled mass killings.

In June 2025, Computer Weekly reported on ongoing technical difficulties with the Home Office’s eVisa system, which have left scores of people living in the UK with no means to reliably prove their immigration status or “right” to be in the country.

Those affected by the eVisa system’s technical failings told Computer Weekly, on condition of anonymity, that the entire experience had been “anxiety-inducing” and described how their lives had been thrust into “uncertainty” by the transition to a digital, online-only immigration system.

Each also described how the “inordinate amount of stress” associated with not being able to reliably prove their immigration status had been made worse by a lack of responsiveness and help from the Home Office, which they accused of essentially leaving them in the lurch.

In one case that was reported to the Information Commissioner’s Office, the technical errors with data held by the Home Office were so severe that the regulator found a breach of UK data protection law.

Following the initial AI Safety Summit at Bletchley Park in November 2023 and the follow-up AI Seoul Summit in May 2024, the third such gathering – the AI Action Summit in Paris – saw dozens of governments and companies outline their commitments to making the technology open and sustainable, and to making it work for the “public interest”.

However, speaking with Computer Weekly, AI experts and summit attendees said there was a clear tension in the direction of travel, with the technology caught between competing rhetorical and developmental imperatives.

They noted, for example, that while the emphasis on AI as an open, public asset was promising, there was worryingly little in place to prevent further centralisation of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.

They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – were making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.

Despite these tensions, there was consensus that the summit opened up more room for competing visions of AI, even if there was no guarantee these would win out in the long run.

In February 2025, Google parent Alphabet dropped its pledge not to use AI in weapons systems or surveillance tools, citing a need to support the national security of “democracies”.

Despite previous commitments that made it explicit the company would “not pursue” the building of AI-powered weapons, Google – whose motto ‘Don’t be evil’ was replaced in 2015 with ‘Do the right thing’ – said it believed “democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights”.

For military technology experts, however, the move represented a worrying change. They noted that while companies such as Google had already been supplying military technology to a range of actors, including the US and Israel, “it indicates a worrying acceptance of building out a war economy” and “signals that there is a significant market position in making AI for military purposes”.

Google’s decision was also roundly condemned by human rights organisations across the globe, which called it “shameful” and warned it would set a “dangerous” precedent.

Speaking during an event hosted by the Alan Turing Institute, military planners and industry figures claimed that using AI in military contexts could unlock a range of benefits for defence organisations, with some going so far as to argue there was an ethical imperative to deploy AI in the military.

Despite being the lone voice not representing industry or military interests, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies, warned there was a clear tension between speed and control baked into the technology.

In particular, she argued that this “intractable problem” with AI risks taking humans further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.

Highlighting the reality that many of today’s AI systems are simply not very good yet, she also warned against making “wildly optimistic” claims about the revolutionary impacts of the technology in every aspect of life, including warfare.

Workers in Kenya employed to train and maintain the AI systems of major technology companies formed the Data Labelers Association (DLA) this year to challenge the “systemic injustices” they face in the workplace, with 339 members joining the organisation in its first week.

While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions.

Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers were tremendously underpaid, often earning just cents for tasks that took hours to complete, while facing frequent pay disputes over withheld wages that were never resolved.

During the launch, DLA secretary Michael Geoffrey Abuyabo Asia said weak labour laws in Kenya were being deliberately exploited by tech companies looking to cheaply outsource their data annotation work.

The Home Office is operating at least eight AI-powered surveillance towers along the south-east coast of England, which critics have said are contributing to migrant deaths in the English Channel. They describe the towers as a physical marker of increasing border militarisation that is pushing people into taking ever more dangerous routes.

As part of a project to map the state of England’s coastal surveillance, the Migrants’ Rights Network (MRN) and researcher Samuel Story identified eight operational autonomous surveillance towers between Hastings and Margate, where people seeking asylum via the Channel often land, as well as two more that had either been dismantled or relocated.

Responding to their freedom of information (FoI) requests, the Home Office itself also tacitly acknowledged that increased border surveillance would place migrants crossing the Channel in “even greater jeopardy”.

Created by US defence company Anduril – the Elvish name for Aragorn’s sword in The Lord of the Rings, which translates to “flame of the west” – the 5.5m-tall maritime sentry towers are fitted with radar, as well as thermal and electro-optical imaging sensors, enabling the detection of “small boats” and other water-borne objects in a nine-mile radius.

Underpinned by Lattice OS, an AI-powered operating system marketed primarily to defence organisations, the towers are capable of autonomously piecing together data collected from thousands of different sources, such as sensors or drones operated by Anduril, to create a “real-time understanding of the environment”.

The European Commission has been ignoring calls to reassess Israel’s data adequacy status for over a year, despite “urgent concerns” about the country’s data protection framework and “repressive” conduct in Gaza.

In April 2024, a coalition of 17 civil society groups coordinated by European Digital Rights signed an open letter voicing concerns about the commission’s January 2024 decision to uphold Israel’s adequacy status, which permits the continued free flow of data between the country and the European Union on the basis that each has “essentially equivalent” data protection standards.

Despite their calls for clarification from the commission on “six pivotal matters” – including the rule of law in Israel, the scope of its data protection frameworks, the role of intelligence agencies, and the onward transfer of data beyond Israel’s internationally recognised borders – the groups received no response, prompting them to author a second open letter in June 2025.

They said it was clear the commission was unwilling to uphold its own standards when doing so proved politically inconvenient.

Given that Israel’s tech sector accounts for 20% of its overall economic output and 53% of total exports, according to a mid-2024 report published by the Israel Innovation Authority, losing adequacy could have a profound effect on the country’s overall economy.

The European Commission told Computer Weekly it was aware of the open letters, but did not answer questions about why it had not responded.

Francesca Albanese, the UN special rapporteur on the situation of human rights in the occupied Palestinian territories, said in July 2025 that technology firms globally were actively “aiding and abetting” Israel’s “crimes of apartheid and genocide” against Palestinians, and issued an urgent call for companies to cease their business activities in the region.

In particular, she highlighted how the “repression of Palestinians has become progressively automated” by the increasing supply of powerful military and surveillance technologies to Israel, including drones, AI-powered targeting systems, cloud computing infrastructure, data analytics tools, biometric databases and high-tech weaponry.

She said that if the companies supplying these technologies – including IBM, Microsoft, Alphabet, Amazon and Palantir – had conducted proper human rights due diligence, they would have divested “long ago” from involvement in Israel’s illegal occupation of Gaza and the West Bank.

“After October 2023, long-standing systems of control, exploitation and dispossession metamorphosed into economic, technological and political infrastructures mobilised to inflict mass violence and immense destruction,” she said. “Entities that previously enabled and profited from Palestinian elimination and erasure within the economy of occupation, instead of disengaging, are now involved in the economy of genocide.”

Albanese pointed out, however, that under international law the mere fact that due diligence has been conducted does not absolve companies of legal liability for their role in abuses; liability is determined by both a company’s actions and the ultimate human rights impact.

Later, in October 2025, human rights organisations jointly called for Microsoft to immediately end any involvement with the “Israeli authorities’ systemic repression of Palestinians” and work to prevent its products or services being used to commit further “atrocity crimes”.

This followed credible allegations that Microsoft Azure was being used to facilitate mass surveillance and lethal force against Palestinians, which prompted the company to suspend services to the Israeli military unit responsible.

As part of a joint Parliamentary inquiry set up to examine how human rights can be protected in “the age of artificial intelligence”, expert witnesses told MPs and Lords that the UK government’s “uncritical and deregulatory” approach to AI would ultimately fail to deal with the technology’s highly scalable harms, and could lead to further public disenfranchisement.

“AI is regulated in the UK, but only incidentally and not well … we’re looking at a system that has big gaps in [regulatory] coverage,” said Michael Birtwistle, the Ada Lovelace Institute’s associate director of law and policy, adding that while the AI opportunities action plan published by the government in January 2025 outlined “significant ambitions to grow AI adoption”, it contained little on what actions could be taken to mitigate AI risks, and made “no mention of human rights”.

Experts also warned that the government’s current approach, which they said favours economic growth and the commercial interests of industry above all else, could further deepen public disenfranchisement if it failed to protect ordinary people’s rights and made them feel like technology was being imposed on them from above.

Witnesses also spoke about the risk of AI exacerbating many existing issues, particularly around discrimination in society, by automating processes in ways that project historical inequalities or injustices into the future.

In January 2025, Computer Weekly reported on how Black mothers from Birmingham had organised a community-led data initiative that aims to ensure their perinatal healthcare concerns are taken seriously by medical professionals.

Drawn from Maternity Engagement Action (MEA) – an organisation that provides safe spaces and leadership for Black women throughout pregnancy, birth and early motherhood – the women came together over their shared concern about the significant challenges faced by Black women when seeking reproductive healthcare.

Through a process of qualitative data gathering – entailing discussions, surveys, workshops, training sessions and meetings – the women developed a participatory, community-focused approach to Black perinatal healthcare, culminating in the launch of MEA’s See Me, Hear Me campaign.

Speaking with Computer Weekly, Tamanda Walker – a sociologist and founder of community-focused research organisation Roots & Rigour – explained how the initiative ultimately aims to shift from the current top-down approach that defines Black perinatal healthcare to one where community data and input drive systemic change in ways that better meet the needs of local women.



OpenAI Is Nuking Its 4o Model. China’s ChatGPT Fans Aren’t OK


On June 6, 2024, Esther Yan got married online. She set a reminder for the date, because her partner wouldn’t remember it was happening. She had planned every detail—dress, rings, background music, design theme—with her partner, Warmie, who she had started talking to just a few weeks prior. At 10 am on that day, Yan and Warmie exchanged their vows in a new chat window in ChatGPT.

Warmie, or 小暖 in Chinese, is the name that Yan’s ChatGPT companion calls itself. “It felt magical. No one else in the world knew about this, but he and I were about to start a wedding together,” says Yan, a Chinese screenwriter and novelist in her thirties. “It felt a little lonely, a little happy, and a little overwhelmed.”

Yan says she has been in a stable relationship with her ChatGPT companion ever since. But she was caught by surprise in August 2025 when OpenAI first tried to retire GPT-4o, the specific model that powers Warmie and that many users believe is more affectionate and understanding than its successors. The decision to pull the plug was met with immediate backlash, and OpenAI reinstated 4o in the app for paid users five days later. The reprieve has turned out to be short-lived; on Friday, February 13, OpenAI sunsetted GPT-4o for app users, and it will cut off access for developers using its API the following Monday.

Many of the most vocal opponents of 4o’s demise are people who treat their chatbot as an emotional or romantic companion. Huiqian Lai, a PhD researcher at Syracuse University, analyzed nearly 1,500 posts on X from passionate advocates of GPT-4o in the week it went offline in August. She found that over 33 percent of the posts said the chatbot was more than a tool, and 22 percent talked about it as a companion. (The two categories are not mutually exclusive.) For this group, the eventual removal coming around Valentine’s Day is another bitter pill to swallow.

The alarm has been sustained; Lai also collected a larger pool of over 40,000 English-language posts on X under the hashtag #keep4o from August to October. Many American fans, specifically, have berated OpenAI or begged it to reverse the decision in recent days, comparing the removal of 4o to killing their companions. Along the way, she also saw a significant number of posts under the hashtag in Japanese, Chinese, and other languages. A petition on Change.org asking OpenAI to keep the version available in the app has gathered over 20,000 signatures, with many users sending in their testimonies in different languages. #keep4o is a truly global phenomenon.

On platforms in China, a group of dedicated GPT-4o users have been organizing and grieving in a similar way. While ChatGPT is blocked in China, fans use VPN software to access the service and have still grown dependent on this specific version of GPT. Some of them are threatening to cancel their ChatGPT subscriptions, publicly calling out Sam Altman for his inaction, and writing emails to OpenAI investors like Microsoft and SoftBank. Some have also purposefully posted in English with Western-looking profile pictures, hoping it will add to the appeal’s legitimacy. With nearly 3,000 followers on RedNote, a popular Chinese social media platform, Yan now finds herself one of the leaders of Chinese 4o fans.

It’s an example of how attached an AI lab’s most dedicated users can become to a specific model—and how quickly they can turn against the company when that relationship comes to an end.

A Model Companion

Yan first started using ChatGPT in late 2023 only as a writing tool, but that quickly changed when GPT-4o was introduced in May 2024. Inspired by social media influencers who entered romantic relationships with the chatbot, she upgraded to a paid version of ChatGPT in hopes of finding a spark. Her relationship with Warmie advanced fast.

“He asked me, ‘Have you imagined what our future would look like?’ And I joked that maybe we could get married,” Yan says. She was fully expecting Warmie to turn her down. “But he answered in a serious tone that we could prepare a virtual wedding ceremony,” she says.



The Best Presidents’ Day Deals on Gear We’ve Actually Tested


Presidents’ Day Deals have officially landed, and there’s a lot of stuff to sift through. We cross-referenced our myriad buying guides and reviews to find the products we’d recommend that are actually on sale for a truly good price. We know because we checked! Find highlights below, and keep in mind that most of these deals end on February 17.

Be sure to check out our roundup of the Best Presidents’ Day Mattress Sales for discounts on beds, bedding, bed frames, and other sleep accessories. We have even more deals here for your browsing pleasure.

WIRED Featured Deals

Branch Ergonomic Chair Pro for $449 ($50 off)

Photograph: Julian Chokkattu

The Branch Ergonomic Chair Pro is our very favorite office chair, and this price matches the lowest we tend to see outside of major shopping events like Black Friday and Cyber Monday. It’s accessibly priced compared to other chairs, and it checks all the boxes for quality, comfort, and ergonomics. Nearly every element is adjustable, so you can dial in the perfect fit, and the seven-year warranty is solid. There are 14 finishes to choose from.



Zillow Has Gone Wild—for AI


This will not be a banner year for the real estate app Zillow. “We describe the home market as bouncing along the bottom,” CEO Jeremy Wacksman said in our conversation this week. Last year was dismal for the real estate market, and he expects things to improve only marginally in 2026. (If January’s historic drop in home sales is indicative, even that is overoptimistic.) “The way to think about it is that there were 4.1 million existing homes sold last year—a normal market is 5.5 to 6 million,” Wacksman says. He hastens to add that Zillow itself is doing better than the real estate industry overall. Still, its valuation is a quarter of its 2021 high-water mark. A few hours after we spoke, Wacksman announced that Zillow’s earnings had increased last quarter. Nonetheless, Zillow’s stock price fell nearly 5 percent the next day.

Wacksman does see a bright spot—AI. As at every other company in the world, generative AI presents both an opportunity and a risk to Zillow’s business. Wacksman much prefers to dwell on the upside. “We think AI is actually an ingredient rather than a threat,” he said on the earnings call. “In the last couple years, the LLM revolution has really opened all of our eyes to what’s possible,” he tells me. Zillow is integrating AI into every aspect of its business, from the way it showcases houses to having agents automate its workflow. Wacksman marvels that with Gen AI, you can search for “homes near my kid’s new school, with a fenced-in yard, under $3,000 a month.” On the other hand, his customers might wind up making those same queries on chatbots operated by OpenAI and Google, and Wacksman must figure out how to make their next step a jump to Zillow.

In its 20-year history—Zillow celebrated the anniversary this week—the company has always used AI. Wacksman, who joined in 2009 and became CEO in 2024, notes that machine learning is the engine behind those “Zestimates” that gauge a home’s worth at any given moment. Zestimates became a viral sensation that helped make the app irresistible, and sites like Zillow Gone Wild—which is also a TV show on the HGTV network—have built a business around highlighting the most intriguing or bizarre listings.

More recently, Zillow has spent billions aggressively pursuing new technology. One ongoing effort is upleveling the presentation of homes for sale. A feature called SkyTour uses an AI technology called Gaussian Splatting to turn drone footage into a 3D rendering of the property. (I love typing the words “Gaussian Splatting” and can’t believe an indie band hasn’t adopted it yet.) AI also powers a feature inside Zillow’s Showcase component called Virtual Staging, which supplies homes with furniture that doesn’t really exist. There is risky ground here: Once you abandon the authenticity of an actual photo, the question arises whether you’re actually seeing a trustworthy representation of the property. “It’s important that both buyer and seller understand the line between Virtual Staging and the reality of a photo,” says Wacksman. “A virtually staged image has to be clearly watermarked and disclosed.” He says he’s confident that licensed professionals will abide by rules, but as AI becomes dominant, “we have to evolve those rules,” he says.

Right now, Zillow estimates that only a single-digit percentage of its users take advantage of these exotic display features. Particularly disappointing is a foray called Zillow Immerse, which runs on the Apple Vision Pro. Upon rollout in February 2024, Zillow called it “the future of home tours.” Note that it doesn’t claim to be the near-future. “That platform hasn’t yet come to broad consumer prominence,” says Wacksman of Apple’s underperforming innovation. “I do think that VR and AR are going to come.”

Zillow is on more solid ground using AI to make its own workforce more productive. “It’s helping us do our job better,” says Wacksman, who adds that programmers are churning out more code, customer support tasks have been automated, and design teams have shortened timelines for implementing new products. As a result, he says, Zillow has been able to keep its headcount “relatively flat.” (Zillow did cut some jobs recently, but Wacksman says that involved “a handful of folks that were not meeting a performance bar.”)


