Tech
Top 10 technology ethics stories of 2025 | Computer Weekly
Throughout 2025, Computer Weekly’s technology and ethics coverage highlighted the human and socio-technical impacts of data-driven systems, particularly artificial intelligence (AI).
This included a number of reports on how the Home Office’s electronic visa (eVisa) system, which has been plagued by data quality and integrity issues from the outset, is affecting migrants in the UK; the progress of both domestic and international efforts to regulate AI; and debates around the ethics of autonomous weaponry.
A number of stories also covered the role major technology companies have played in Israel’s genocide against Palestinians, which includes providing key digital infrastructure and tools that have enabled mass killings.
In June 2025, Computer Weekly reported on ongoing technical difficulties with the Home Office’s eVisa system, which have left scores of people living in the UK with no means of reliably proving their immigration status or “right” to be in the country.
Those affected by the eVisa system’s technical failings told Computer Weekly, on condition of anonymity, that the entire experience had been “anxiety-inducing” and described how their lives had been thrust into “uncertainty” by the transition to a digital, online-only immigration system.
Each also described how the “inordinate amount of stress” associated with not being able to reliably prove their immigration status had been made worse by a lack of responsiveness and help from the Home Office, which they accused of essentially leaving them in the lurch.
In one case reported to the Information Commissioner’s Office, the errors in data held by the Home Office were so severe that the regulator found a breach of UK data protection law.
Following the initial AI Safety Summit at Bletchley Park in November 2023 and the follow-up AI Seoul Summit in May 2024, the third AI Action Summit in Paris saw dozens of governments and companies outline their commitments to making the technology open, sustainable and work for the “public interest”.
However, speaking with Computer Weekly, AI experts and summit attendees said there was a clear tension in the direction of travel, with the technology caught between competing rhetorical and developmental imperatives.
They noted, for example, that while the emphasis on AI as an open, public asset was promising, there was worryingly little in place to prevent further centralisation of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.
They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – were making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.
Despite the tensions present, there was consensus that the summit opened more room for competing visions of AI, even if there was no guarantee these would win out in the long run.
In February 2025, Google parent Alphabet dropped its pledge not to use AI in weapons systems or surveillance tools, citing a need to support the national security of “democracies”.
Despite previous commitments that made it explicit the company would “not pursue” the building of AI-powered weapons, Google – whose company motto ‘Don’t be Evil’ was replaced in 2015 with ‘Do the right thing’ – said it believed “democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights”.
For military technology experts, however, the move represented a worrying change. They noted that while companies such as Google had already been supplying military technology to a range of actors, including the US and Israel, “it indicates a worrying acceptance of building out a war economy” and “signals that there is a significant market position in making AI for military purposes”.
Google’s decision was also roundly condemned by human rights organisations across the globe, which called it “shameful” and said it would set a “dangerous” precedent going forward.
Speaking during an event hosted by the Alan Turing Institute, military planners and industry figures claimed that using AI in military contexts could unlock a range of benefits for defence organisations, and even went as far as claiming there was an ethical imperative to deploy AI in the military.
Despite being the lone voice not representing industry or military interests, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies, warned there was a clear tension between speed and control baked into the technology.
In particular, she argued that this “intractable problem” with AI risks taking humans further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.
Highlighting the reality that many of today’s AI systems are simply not very good yet, she also warned against making “wildly optimistic” claims about the revolutionary impacts of the technology in every aspect of life, including warfare.
Workers in Kenya employed to train and maintain the AI systems of major technology companies formed the Data Labelers Association (DLA) this year to challenge the “systemic injustices” they face in the workplace, with 339 members joining the organisation in its first week.
While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions.
Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers were tremendously underpaid, often earning just cents for tasks that took hours to complete, and faced frequent, unresolved disputes over withheld wages.
During the launch, DLA secretary Michael Geoffrey Abuyabo Asia said weak labour laws in Kenya were being deliberately exploited by tech companies looking to cheaply outsource their data annotation work.
The Home Office is operating at least eight AI-powered surveillance towers along the south-east coast of England. Critics say the towers, a physical marker of increasing border militarisation, are contributing to migrant deaths in the English Channel by pushing people into taking ever more dangerous routes.
As part of a project to map the state of England’s coastal surveillance, the Migrants Rights Network (MRN) and researcher Samuel Story identified eight operational autonomous surveillance towers between Hastings and Margate where people seeking asylum via the Channel often land, as well as two more that had either been dismantled or relocated.
Responding to their freedom of information (FoI) requests, the Home Office itself also tacitly acknowledged that increased border surveillance would place migrants crossing the Channel in “even greater jeopardy”.
Created by US defence company Anduril – the Elvish name for Aragorn’s sword in The Lord of the Rings, which translates to “flame of the west” – the 5.5m-tall maritime sentry towers are fitted with radar, as well as thermal and electro-optical imaging sensors, enabling the detection of “small boats” and other water-borne objects in a nine-mile radius.
Underpinned by Lattice OS, an AI-powered operating system marketed primarily to defence organisations, the towers are capable of autonomously piecing together data collected from thousands of different sources, such as sensors or drones operated by Anduril, to create a “real-time understanding of the environment”.
The European Commission has been ignoring calls to reassess Israel’s data adequacy status for over a year, despite “urgent concerns” about the country’s data protection framework and “repressive” conduct in Gaza.
In April 2024, a coalition of 17 civil society groups coordinated by European Digital Rights signed an open letter voicing concerns about the commission’s January 2024 decision to uphold Israel’s adequacy status, which permits the continued free flow of data between the country and the European Union on the basis that each has “essentially equivalent” data protection standards.
Despite their calls for clarification from the commission on “six pivotal matters” – including the rule of law in Israel, the scope of its data protection frameworks, the role of intelligence agencies, and the onward transfer of data beyond Israel’s internationally recognised borders – the groups received no response, prompting them to author a second open letter in June 2025.
They said it was clear the commission is unwilling to uphold its own standards when politically inconvenient.
Given that Israel’s tech sector accounts for 20% of its overall economic output and 53% of total exports, according to a mid-2024 report published by the Israel Innovation Authority, losing adequacy could have a profound effect on the country’s overall economy.
The European Commission told Computer Weekly it was aware of the open letters, but did not answer questions about why it had not responded.
Francesca Albanese, the special rapporteur for the human rights situation in Palestine, said in July 2025 that technology firms globally were actively “aiding and abetting” Israel’s “crimes of apartheid and genocide” against Palestinians, and issued an urgent call for companies to cease their business activities in the region.
In particular, she highlighted how the “repression of Palestinians has become progressively automated” by the increasing supply of powerful military and surveillance technologies to Israel, including drones, AI-powered targeting systems, cloud computing infrastructure, data analytics tools, biometric databases and high-tech weaponry.
She said that if the companies supplying these technologies had conducted the proper human rights due diligence – including IBM, Microsoft, Alphabet, Amazon and Palantir – they would have divested “long ago” from involvement in Israel’s illegal occupation of Gaza and the West Bank.
“After October 2023, long-standing systems of control, exploitation and dispossession metamorphosed into economic, technological and political infrastructures mobilised to inflict mass violence and immense destruction,” she said. “Entities that previously enabled and profited from Palestinian elimination and erasure within the economy of occupation, instead of disengaging, are now involved in the economy of genocide.”
Albanese also pointed out, however, that under international law the mere fact that due diligence has been conducted does not absolve companies of legal liability for their role in abuses; instead, their liability is determined by both their actions and the ultimate human rights impact.
Later, in October 2025, human rights organisations jointly called for Microsoft to immediately end any involvement with the “Israeli authorities’ systemic repression of Palestinians” and work to prevent its products or services being used to commit further “atrocity crimes”.
This followed credible allegations that Microsoft Azure was being used to facilitate mass surveillance and lethal force against Palestinians, which prompted the company to suspend services to the Israeli military unit responsible.
As part of a joint Parliamentary inquiry set up to examine how human rights can be protected in “the age of artificial intelligence”, expert witnesses told MPs and Lords that the UK government’s “uncritical and deregulatory” approach to AI would ultimately fail to deal with the technology’s highly scalable harms, and could lead to further public disenfranchisement.
“AI is regulated in the UK, but only incidentally and not well … we’re looking at a system that has big gaps in [regulatory] coverage,” said Michael Birtwistle, the Ada Lovelace Institute’s associate director of law and policy, adding that while the AI opportunities action plan published by the government in January 2025 outlined “significant ambitions to grow AI adoption”, it contained little on what actions could be taken to mitigate AI risks, and made “no mention of human rights”.
Experts also warned that the government’s current approach, which they said favours economic growth and the commercial interests of industry above all else, could further deepen public disenfranchisement if it failed to protect ordinary people’s rights and made them feel like technology was being imposed on them from above.
Witnesses also spoke about the risk of AI exacerbating many existing issues, particularly around discrimination in society, by automating processes in ways that project historical inequalities or injustices into the future.
In January 2025, Computer Weekly reported on how Black mothers from Birmingham had organised a community-led data initiative that aims to ensure their perinatal healthcare concerns are taken seriously by medical professionals.
Drawn from Maternity Engagement Action (MEA) – an organisation that provides safe spaces and leadership for Black women throughout pregnancy, birth and early motherhood – the women came together over their shared concern about the significant challenges faced by Black women when seeking reproductive healthcare.
Through a process of qualitative data gathering – entailing discussions, surveys, workshops, training sessions and meetings – the women developed a participatory, community-focused approach to Black perinatal healthcare, culminating in the launch of MEA’s See Me, Hear Me campaign.
Speaking with Computer Weekly, Tamanda Walker – a sociologist and founder of community-focused research organisation Roots & Rigour – explained how the initiative ultimately aims to shift from the top-down approach that currently defines Black perinatal healthcare to one where community data and input drive systemic change in ways that better meet the needs of local women.
The Best Babbel Promo Codes and Deals for April 2026
I’ve been trying to become fluent in Spanish for the last decade. After spending most of my adult life surrounded by multilinguals, I often feel like I’m playing an impossible game of catch-up. Like everyone else, I’ve tried to get regimented about practicing on a phone app like Duolingo, which attempts to gamify language learning but mostly ends up with a sad, sick-looking green owl icon guilting me every time I open my phone.
Babbel aims to help people actually learn the language through practical conversation and grammar, using proven pedagogical methods and speech recognition technology. Each lesson is short, running 10 to 15 minutes, and developed by a team of over 150 linguists. Instead of learning the same simple phrases in ad-ridden games on an endless loop, take charge of your language learning this year and make that commitment a reality. No more excuses: we’ve got a Babbel promo code and a Babbel coupon to help you hit your goals. Maybe you’ll be fluent by your next vacation (or at least able to order a chopped cheese with confidence at the bodega).
Unlock Your Babbel Promo Code and Save Big in April 2026
Not only is Babbel a helpful interactive app to simplify language learning, but it also has holistic services to help introduce the language to every part of your life. These are things like Babbel videos, which do a deep dive into what makes a language so fascinating, Babbel podcasts, which are led by Babbel experts who take an inside look at local culture and break down language secrets, and Babbel magazine, which highlights stories from around the world so you can better understand the history, culture, and people from the language you’re learning (and maybe will inspire you to take a trip to practice that language IRL!).
Make sure you check back often to find the latest Babbel promo code for sitewide savings. There are often discounts on the subscription tiers, which range from three-month plans to annual memberships. Plus, springtime is usually when there are significant Babbel discounts for new users. And if you sign up for the Babbel newsletter, you can receive a link for a Babbel coupon in your inbox.
Save 60% on 6-Month Plans With the Healthcare Workers Discount
As stated, knowing another language is an invaluable life skill, especially for healthcare workers, who may be able to more easily give lifesaving care. Healthcare professionals and nurses get a Babbel discount of 60% off a six-month subscription. To claim it, users just need to verify their medical credentials via ID.me.
Claim Your 60% Military Discount on 6-Month Subscriptions
This Babbel discount also applies to active-duty military, veterans, and their families, who are eligible for 60% off six-month Babbel subscriptions. The military coupon is valid for National Guard members, reservists, and immediate family members of service personnel; all you need to do is verify your status at ID.me.
Snag a 60% Teacher Discount on Your Next 6 Months
Babbel is also extending the 60% discount to the real unsung heroes: teachers. Knowing more than one language helps educators talk more effectively with parents or guardians, and understand their students’ cultural identities more deeply. Educators, including K-12 teachers, university professors, and other educational staff, are eligible for 60% off a six-month Babbel subscription. As with the other discounts, you just need to verify your credentials through ID.me.
Grab Top Lifetime Subscription Deals and Save in April 2026
Everyone knows that learning a language is a lifetime process, and Babbel wants to make it even easier to commit to it. Pay once and you’ll get access to all available Babbel languages forever. Just look for the “Lifetime Subscription” Babbel promos, which could save you hundreds of dollars over several years. Be sure to check back often, as these rotating deals tend to pop up during major holiday sales. While the upfront cost is higher, you’ll get access to all 14 available languages with this lifetime subscription deal.
Robotaxi Outage in China Leaves Passengers Stranded on Highways
An unknown technical problem caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze on Tuesday in the middle of traffic, trapping some passengers in the vehicles for more than an hour.
In Wuhan, a city in central China where Baidu has deployed hundreds of its Apollo Go self-driving taxis, people on Chinese social media reported witnessing the cars suddenly malfunction and stop operating. Photos and videos shared online show the Baidu cars halted on busy highways, often in the fast lane.
A college student in Wuhan tells WIRED that she was stuck in a Baidu robotaxi with two friends for about 90 minutes on Tuesday. (She asked to be identified only by her last name, He, to protect her privacy.) The student says the car malfunctioned and stopped four or five times during the trip before it eventually parked in front of an intersection in eastern Wuhan. Luckily, it was not a busy road, and the group was not in immediate danger. The screen display in the car asked the passengers to remain seated with their seatbelts on and wait for a company representative to come “in five minutes,” according to a photo He shared with WIRED.
He says it took about 30 minutes to reach a Baidu customer representative on the phone. “They kept saying it would be reported to their superior. But they didn’t explain what caused [the outage] or let us know how long we needed to wait for the staff to come,” He says. But no one ever came, and after another hour of waiting, the three passengers decided to just get out and go home by themselves (the doors weren’t locked).
On Chinese social media, other passengers also complained about being unable to reach Baidu’s customer support. “I tried every way I could think of to call for help using the options the app showed, but the phone line wouldn’t go through, and when I pressed the SOS button it told me it was unavailable. So then what exactly is the SOS for?” wrote one person in a post on RedNote alongside a video showing the button not working. She said she had to force the door open and get out of the car as traffic came to a complete stop behind her robotaxi. “Apollo Go, you really owe me an apology,” she wrote.
Baidu didn’t immediately respond to a request for comment. Local police in Wuhan issued a statement around midnight in China that said the situation was “likely caused by a system malfunction,” but the incident is still under investigation. No one was injured and all passengers have exited the vehicles, the police added. It’s unclear how many of Baidu’s robotaxis may have been impacted.
One dash cam recording posted to RedNote shows a car passing 16 Apollo Go vehicles stopped on the road in the span of 90 minutes. On several occasions, the video shows the driver narrowly avoiding hitting the robotaxis by braking or changing lanes at the last minute.
Others were apparently not as fortunate. In another RedNote post, a man claimed he crashed into one of the malfunctioning Baidu vehicles. The man wrote in the caption that he was driving over 40 mph on a highway when the car in front of him suddenly changed lanes to avoid the stopped robotaxi. He couldn’t react fast enough and ended up running into the self-driving car. Photos of the man’s orange SUV being towed away show that the car’s front-right fender was completely torn off, and other parts appeared to have sustained major damage.
Our Favorite Affordable Air Purifier Is Temporarily Even Cheaper
Tired of the stale, fetid air looming over your apartment like a cloud? Check out the Coway Airmega Mighty, an already wallet-friendly home air purifier that’s even cheaper right now as part of the Amazon Big Spring Sale. It’s currently marked down to just $154, a $76 discount from its typical price, but you’ll want to move quickly if you’re interested, as the deal is only available for a limited time.
Despite its low price tag and squat stature, the Airmega Mighty is capable of cleaning a substantial amount of space. At full bore, it can handle a 361-square-foot space, although you’ll get the best performance, and save your ears, if you’re closer to a 200-square-foot room. If you don’t want it running constantly, there are built-in timers to automatically shut it off after 1, 4, or 8 hours, or you can use Eco Mode, which will run until the Mighty doesn’t sense any dirty air for half an hour.
That’s right, the Airmega Mighty has a built-in air quality sensor, which reflects the current state of the air using a colored light with three levels. It uses those readings to automatically adjust the fan speed and timer settings on the fly, and gives you a peek into how bad the air you’re breathing right now is for you. While it lacks integration with smart home setups like Google Home, it makes up for it by handling all of its own business without Wi-Fi or extra apps on your phone.
While the Coway Airmega Mighty is available in three colors, only the black and silver model is currently discounted, so you’ll have to pay full price if it doesn’t match your living room’s color scheme. We’ve put in the work testing every air purifier we could get our hands on, so make sure to check out the full guide if you’re trying to clean up your space. The Coway is discounted as part of Amazon’s Big Spring Sale, and we’ve got the best deals from products we’ve tested gathered in one place if you want to save some bucks.
