When writing program code, software developers often work in pairs—a practice that reduces errors and encourages knowledge sharing. Increasingly, AI assistants are taking on this role.
But this shift in working practice isn’t without its drawbacks, as a new empirical study by computer scientists in Saarbrücken reveals. Developers tend to scrutinize AI-generated code less critically and they learn less from it. These findings will be presented at the 40th IEEE/ACM International Conference on Automated Software Engineering (ASE 2025) in Seoul.
When two software developers collaborate on a programming project—known in technical circles as pair programming—it tends to yield a significant improvement in the quality of the resulting software.
“Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,” explains Sven Apel, professor of computer science at Saarland University.
Together with his team, Apel has examined whether this collaborative approach works equally well when one of the partners is an AI assistant. In the study, 19 students with programming experience were divided into pairs: six pairs consisted of two human partners, while in the remaining seven a student collaborated with an AI assistant. The methodology for measuring knowledge transfer was developed by Niklas Schneider as part of his bachelor’s thesis.
For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which—like similar products from other companies—has now been widely adopted by software developers. These tools have significantly changed how software is written.
“It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,” says Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.
“Knowledge transfer is a key part of pair programming,” Apel explains. “Developers will continuously discuss current problems and work together to find solutions. This does not involve simply asking and answering questions, it also means that the developers share effective programming strategies and volunteer their own insights.”
According to the study, such exchanges also occurred in the AI-assisted teams—but the interactions were less intense and covered a narrower range of topics.
“In many cases, the focus was solely on the code,” says Apel. “By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.”
One finding particularly surprised the research team: “The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,” says Apel. “The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other’s contributions.”
He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well, stating, “I think it has to do with a certain degree of complacency—a tendency to assume the AI’s output is probably good enough, even though we know AI assistants can also make mistakes.”
Apel warns that this uncritical reliance on AI could lead to the accumulation of “technical debt”: the hidden cost of the work that will later be needed to correct these mistakes, which complicates the further development of the software.
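To make the idea concrete, consider a hypothetical sketch (illustrative only, not code from the study) of the kind of plausible-looking suggestion that can slip past an uncritical review:

```python
# Hypothetical illustration, not code from the study: an AI-style suggestion
# that reads as correct at a glance but hides an edge case.

def average_response_time(samples: list[float]) -> float:
    """Return the mean of the recorded response times."""
    return sum(samples) / len(samples)  # raises ZeroDivisionError on an empty list

# The scrutiny typical of human pair programming would surface the question
# "what if no samples were recorded?" and force an explicit decision:
def average_response_time_reviewed(samples: list[float]) -> float:
    """Return the mean response time, or 0.0 when nothing was recorded."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```

Left unreviewed, the first version works in every demo and fails months later in production: exactly the kind of deferred cost the term describes.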
For Apel, the study highlights the fact that AI assistants are not yet capable of replicating the richness of human collaboration in software development.
“They are certainly useful for simple, repetitive tasks,” says Apel. “But for more complex problems, knowledge exchange is essential—and that currently works best between humans, possibly with AI assistants as supporting tools.”
Apel emphasizes the need for further research into how humans and AI can collaborate effectively while still retaining the kind of critical eye that characterizes human collaboration.
We’ve all been in situations where we’ve had to sleep on a sofa bed. I can recall many childhood vacations where I’d be tossing and turning on a squeaky setup. If that sounds familiar, sofa beds might not jump out as the most appealing option. But they’ve evolved from the rickety pull-out mattresses of yore—today’s sofa beds are a far more comfortable and efficient way to create a guest bed wherever you need one, whether in a spare room or a small apartment.
That said, sofa beds, also known as sleeper sofas, are not all of the same caliber. This is where Australian furniture brand Koala aims to stand out. Since entering the US market in the fall of 2023, it has focused on comfortable, stylish, and easy-to-assemble sofa beds. As a professional mattress tester, I was curious to see whether the latest Koala sofa bed, the Wanda, was as comfortable and supportive as the mattresses I usually test. So I went on a testing side quest and dedicated a whole week to sleeping on the Wanda. What I found is that it’s a cozy short-term solution for guests and general lounging, but I wouldn’t recommend replacing your mattress with it.
Quadruple Threat
Sofa beds typically use a “2-in-1” design, combining a couch with a pull-out bed that folds away under the seat cushion when not in use. The Wanda offers a “4-in-1” design that combines a couch with a daybed, a reversible chaise, and a queen-size, slide-out bed.
The Wanda arrived in four large boxes—you will most definitely need help moving them, especially if you plan to go up any stairs. Aside from their size, the boxes range in weight from a manageable 47 pounds to 104 pounds; I struggled to get the heaviest one upstairs on my own.
All Together Now
In honor of all my previous sleeper sofa experiences, I wanted to know how the Wanda would fare in a small room. So instead of my usual spacious 13-by-15-foot studio setup, I used my upstairs home office, a room of roughly 10.5 by 10.5 feet that already holds other furniture. Since I didn’t move my desk out of the way, the Wanda took up half the room. The sofa bed is 99 inches (8.25 feet) long and resembles a sideways “L,” with the chaise jutting out 69 inches (5.75 feet). As if this weren’t cozy enough, my husband and two small dogs decided to set up shop with me.
Industry analysis from nPerf of the UK’s mobile market in 2025 has revealed “intense” competition at the top tier, with the best internet performance delivered not just by market leader EE, but also by rival provider Three.
The nPerf Speed Test analysed the overall quality of the connection experienced by UK mobile users across all four quarters of 2025, considering several criteria, namely download speed, upload speed, latency, browsing performance and streaming quality. Measurements were based on 57,299 tests conducted via the nPerf application on both Android and iOS devices.
Download speeds above 25 Mbps were classified as excellent. For latency, 0-30 ms was considered excellent, enabling activities such as 4K video streaming and real-time interaction, while 31-100 ms enabled optimal internet performance with minimal delay. Browsing performance measured how quickly and efficiently internet pages were loaded and navigated, with a browsing score between 75% and 100% meaning near-instant page loading. On the streaming quality benchmark, a score between 75% and 100% indicated smooth streaming, scores between 50% and 75% were seen as adequate for YouTube streaming, and scores below 50% meant compromised video quality.
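As a rough illustration of how these bands combine, the thresholds above can be expressed as a simple classifier. The cut-offs come from the report as described; the Measurement structure, function name and example figures are assumptions made for illustration only:

```python
# A minimal sketch of the report's quality bands. Thresholds are taken from
# the article; everything else (names, example figures) is assumed.
from dataclasses import dataclass

@dataclass
class Measurement:
    download_mbps: float
    latency_ms: float
    browsing_score: float   # 0-100
    streaming_score: float  # 0-100

def classify(m: Measurement) -> dict[str, str]:
    bands = {}
    bands["download"] = "excellent" if m.download_mbps > 25 else "below excellent"
    if m.latency_ms <= 30:
        bands["latency"] = "excellent: 4K streaming and real-time interaction"
    elif m.latency_ms <= 100:
        bands["latency"] = "optimal, with minimal delay"
    else:
        bands["latency"] = "noticeable delay"
    bands["browsing"] = ("near-instant page loading"
                         if m.browsing_score >= 75 else "slower page loading")
    if m.streaming_score >= 75:
        bands["streaming"] = "smooth streaming"
    elif m.streaming_score >= 50:
        bands["streaming"] = "adequate for YouTube"
    else:
        bands["streaming"] = "compromised video quality"
    return bands

# Hypothetical reading, not one of the published operator results:
print(classify(Measurement(110.0, 35.0, 70.0, 76.0)))
```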
Overall, EE and Three were found to have tied for leadership, with both operators delivering download speeds exceeding 110 Mbps and supporting what were described as “comprehensive” user experiences.
Both EE and Three users were seen as benefiting from high-quality mobile experiences, particularly for streaming, gaming and real-time communications.
EE’s performance was rated as having “comprehensive strengths” and demonstrating “strong performances” across all metrics. The BT-owned operator had an overall score of 86,470 nPoints in the study, complementing download speeds of 110.44 Mbps with upload speeds of 15.77 Mbps. EE scored 69.65% for browsing and 76.51% for video streaming, delivering “fluid content viewing” for users.
For its part, Three was said to hold a “solid top position”, with an overall rating of 84,993 nPoints, download speeds reaching 110.72 Mbps and sector-leading upload speeds of 16.50 Mbps. The operator also achieved the best latency in the sector (35.31 ms), perfect for online gaming and real-time communications, while its leading upload performance was said to guarantee “high-quality” video calls and “efficient” content sharing.
In competitive positioning, Vodafone ranked third with 74,892 nPoints and delivered download speeds of 74.56 Mbps. The operator was said to have demonstrated solid video streaming performance at 74.35%.
The fourth UK mobile operator, O2, was said to have delivered a “focused” performance in 2025, totalling 66,557 nPoints and displaying “efficient” latency of 37.56 ms, suitable for what were called “responsive” online experiences.
Looking specifically at the mobile internet experience on the UK’s 5G networks, Three subscribers enjoyed the best 5G internet in 2025, with both the fastest download and the fastest upload speeds. O2 and Three offered the best 5G connections with the lowest latency. EE, Three and Vodafone tied for the best 5G web browsing performance, while Three and Vodafone were seen as delivering the leading 5G streaming performances.
Commenting on the results, Sébastien de Rosbo, CEO of nPerf, said: “The UK mobile market demonstrates strong competition at the top, with both leaders delivering comprehensive performance and user experiences that benefit from high-speed connectivity across all key indicators.”
Last month, the UK government announced the AI Skills Boost programme, promising “free AI training for all” and claiming that the courses will give people the skills needed to use artificial intelligence (AI) tools effectively. There are multiple reasons why we don’t agree.
US dependency over UK sovereignty
The “AI Skills Boost” is the free, badged “foundation” element of the government’s AI Skills Hub, which was launched with great fanfare. There are 14 courses, exclusively from big US organisations, promoting and training on their platforms. The initiative increases dependency on US big tech – the opposite of the government’s recent conclusion, in its new AI opportunities action plan, to position the UK “to be an AI maker, not an AI taker”. It is also not clear how increasing UK workers’ reliance on, and usage of, US big tech tools and platforms is intended to grow the UK’s homegrown AI talent.
In stark contrast to President Macron’s announcement last week that the French government will phase out its dependency on US-based big tech by using local providers to enhance digital sovereignty and privacy, technology secretary Liz Kendall’s speech was a lesson in contradictions.
Right after affirming that AI is “far too important a technology to depend entirely on other countries, especially in areas like defence, financial services and healthcare”, the secretary of state went on to say that the country’s strategy is to adopt existing technologies based overseas.
Microsoft, one of the founding partners for this initiative, has already admitted that “US authorities can compel access to data held by American cloud providers, regardless of where that data physically resides”, further acknowledging that the company will honour any data requests from the US state, regardless of where in the world the data is housed. Is this the sovereignty and privacy the UK government is trying to achieve?
Commercial content rather than quality skills provision
The AI Skills Hub indexes hundreds of AI-related courses. That means the hub, which cost £4.1m to build, is simply a bookmark or affiliate list of online courses and resources that are already available, with seemingly no quality control or oversight. The decision to award the contract to a “Big Four” commercial consultancy, PwC, rather than the proven national data, AI and digital skills providers who tendered, needs to be investigated.
The press releases focus on the “free” element of the training, but 60% of the courses are paid-for – including some of those marked as free – providing a deceptive funnel into paid commercial training providers.
The package launched includes 595 courses, but only 14 have been benchmarked by Skills England, and there has been a critical outcry over the dangerously poor quality of many courses, some of which are 10 years old, don’t exist, or are poor-quality AI slop.
An example of why this is so concerning is that many courses are not relevant to the UK. One of the courses promoted has already been shown to misrepresent UK law on intellectual property, with the course creators later denying they had any contractual arrangement with the site and admitting that they were “not consulted before our materials were posted and linked from there”.
Warnings on the need for public AI literacy provision ignored
Aside from concerns over the standards, safety, sovereignty and cost of the content offered, there is a much bigger issue, which we have been warning about.
Currently, 84% of the UK public feel disenfranchised and excluded from AI decision-making and mistrust key institutions, while 91% prioritise safe and fair usage of AI over economic gain and adoption speed.
In 2021, the UK’s AI Council provided a roadmap for developing the UK’s National AI Strategy. It advised on programmes of public and educational AI literacy beyond teaching technical or practical skills. This call has been repeated, especially in the wake of greater public exposure to generative AI since 2023, which now requires the public to understand not just how to prompt or code, but to use critical thinking to navigate a number of related implications of the technology.
In July 2025, we represented a number of specialists, education experts and public representatives, and wrote an open letter calling for investment in the UK’s AI capabilities beyond being passive users of US tools. Despite the Department for Education and the Department for Science, Innovation and Technology initially agreeing to meet and discuss the letter, the offer was later rescinded.
Without comprehensive public understanding and sustained engagement, developing AI for public good and maintaining public trust will be a significant challenge. By investing in independent AI literacy initiatives that are accessible to all and not just aimed at onboarding uncritical users and consumers, the UK can help to ensure that its AI future is shaped with the UK public’s benefit at the heart.
Wasted opportunity to develop a beneficial UK approach to AI
We need to have greater national ambition than simply providing skills training. That the only substandard skills provision available is provided – at great public cost – by those with commercial interests in controlling how people think about and use AI is a further insult.
Indeed, Kendall’s claim that AI has the potential to add £400bn to the economy by 2030 is lifted from a report produced by a sector consultancy that focuses only on the positive impact of Google technologies in the UK. Her announcement leaned heavily on claims such as “AI is now the engine of economic power and of hard power”, which come straight from a Silicon Valley playbook.
The focus on practical skills undermines the nation’s AI and tech sovereignty and harms the economy, with money leaving the country to fund big tech. It entrenches political disenfranchisement, with decisions about AI framed as too complex for the general population to meaningfully engage with. And it rests on fictitious narratives about inevitable big tech AI futures, in which public voice and public good are irrelevant.
If you wish to sign a second version of the open letter, which we are currently drafting, or to submit a critical AI literacy resource to We and AI’s resource hub, contact us here.
This article is co-authored by:
Tania Duarte, founder, We and AI
Bruna Martins, director at Tecer Digital
Dr Elinor Carmi, senior lecturer in data politics and social justice, City St George’s, University of London
Dr Mark Wong, head of social and urban policy, University of Glasgow
Dr Susan Oman, senior lecturer in data, AI & society, The University of Sheffield
Ismael Kherroubi Garcia, founder & CEO, Kairoi
Cinzia Pusceddu, senior fellow of the Higher Education Academy, independent researcher
Dylan Orchard, postgraduate researcher, King’s College London
Tim Davies, director of research & practice, Connected by Data
Steph Wright, co-founder & managing director, Our AI Collective