Software developers show less constructive skepticism when using AI assistants than when working with human colleagues


When writing program code, software developers often work in pairs—a practice that reduces errors and encourages knowledge sharing. Increasingly, AI assistants are taking on this role.

But this shift in working practice isn’t without its drawbacks, as a new empirical study by computer scientists in Saarbrücken reveals. Developers tend to scrutinize AI-generated code less critically and they learn less from it. These findings will be presented at the 40th IEEE/ACM International Conference on Automated Software Engineering (ASE 2025) in Seoul.

When two software developers collaborate on a programming project—known in technical circles as pair programming—it tends to yield a significant improvement in the quality of the resulting software.

“Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,” explains Sven Apel, professor of computer science at Saarland University.

Together with his team, Apel has examined whether this works equally well when one of the partners is an AI assistant. In the study, 19 students with programming experience were divided into teams: twelve formed six human pairs, while the remaining seven each collaborated with an AI assistant. The methodology for measuring knowledge transfer was developed by Niklas Schneider as part of his bachelor’s thesis.

For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which—like similar products from other companies—has since been widely adopted by software developers. These tools have significantly changed how software is written.

“It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,” says Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.
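To illustrate the kind of mistake Apel describes, consider a hypothetical example (not taken from the study) of a plausible-looking AI completion that an uncritical reviewer might wave through:

```python
# Hypothetical illustration: an AI-suggested helper that looks correct at a
# glance but contains a subtle, order-destroying bug.

def unique_items(items):
    """AI-suggested: remove duplicate entries from a list."""
    return list(set(items))  # Bug: set() discards the original order, so callers
                             # that depend on ordering fail only on certain inputs.

def unique_items_reviewed(items):
    """Reviewed version: removes duplicates while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:  # keep only the first occurrence of each element
            seen.add(item)
            result.append(item)
    return result

print(unique_items_reviewed(["b", "a", "b", "c"]))  # prints ['b', 'a', 'c']
```

A partner asking critical questions would likely catch the ordering bug in the first version before it shipped; a developer who simply assumes the suggestion will work as intended would not.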

“Knowledge transfer is a key part of pair programming,” Apel explains. “Developers will continuously discuss current problems and work together to find solutions. This does not involve simply asking and answering questions; it also means that the developers share effective programming strategies and volunteer their own insights.”

According to the study, such exchanges also occurred in the AI-assisted teams—but the interactions were less intense and covered a narrower range of topics.

“In many cases, the focus was solely on the code,” says Apel. “By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.”

One finding particularly surprised the research team: “The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,” says Apel. “The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other’s contributions.”

He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well, stating, “I think it has to do with a certain degree of complacency—a tendency to assume the AI’s output is probably good enough, even though we know AI assistants can also make mistakes.”

Apel warns that this uncritical reliance on AI could lead to the accumulation of “technical debt”: the hidden cost of the work that will later be needed to correct these mistakes, which complicates the further development of the software.

For Apel, the study highlights the fact that AI assistants are not yet capable of replicating the richness of human collaboration in software development.

“They are certainly useful for simple, repetitive tasks,” says Apel. “But for more complex tasks, knowledge exchange is essential—and that currently works best between humans, possibly with AI assistants as supporting tools.”

Apel emphasizes the need for further research into how humans and AI can collaborate effectively while still retaining the kind of critical eye that characterizes human collaboration.

More information: An Empirical Study of Knowledge Transfer in AI Pair Programming (ASE 2025).


Europe’s Online Age Verification App Is Here



The European online age verification app is ready.

The app works with passports or ID cards, is built to be “completely anonymous” for the people who use it, works on any device (smartphones, tablets, and PCs), and is open source. “Best of all, online platforms can easily rely on our age verification app, so there are no more excuses,” said European Commission president Ursula von der Leyen at a press conference on Wednesday. “Europe offers a free and easy-to-use solution that can protect our children from harmful and illegal content.”

High Expectations

“It is our duty to protect our children in the online world just as we do in the offline world. And to do that effectively, we need a harmonized European approach,” von der Leyen said at Wednesday’s press conference. “And one of the central issues is the question, how can we ensure a technical solution for age verification that is valid throughout Europe? Today, I can announce that we have the answer.”

This answer takes the form of an open source app that any private company can repurpose, as long as it complies with European privacy standards and offers the same technical solution throughout the European Union. The user downloads the app, agrees to the terms and conditions, sets up a PIN or biometric access, and proves their age through an electronic identification system or by showing a passport or ID card (in which case a biometric check is also performed). The app does not store your name, date of birth, ID number, or any other personal information, according to the European Commission—only the fact that you are over a certain age.

After that, when a person using the app wants to access a social network (minimum age: 13), a pornographic site (minimum age: 18), or any other age-restricted content from a computer, they need only scan the QR code shown on the site they want to visit. If, on the other hand, they log in from a smartphone, the app sends the proof of age directly. In neither case does the platform see the document the user originally used to prove their age.
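As a rough sketch of how such a proof-of-age exchange could work, the snippet below shows an issuer-signed attestation that reveals only an age bound. The names, payload format, and signature scheme are assumptions made for illustration; the Commission's open source app defines its own protocol.

```python
# Minimal sketch of an issuer-signed age attestation, under assumed names and a
# deliberately simplified scheme; this is NOT the Commission's published protocol.
# Requires the third-party 'cryptography' package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # stands in for the wallet issuer
issuer_pub = issuer_key.public_key()       # distributed to every platform

def issue_attestation(over_age: int) -> tuple[bytes, bytes]:
    """Sign a payload carrying ONLY an age bound: no name, birth date, or
    ID number from the document shown during enrolment."""
    payload = json.dumps({"over_age": over_age}).encode()
    return payload, issuer_key.sign(payload)

def platform_accepts(payload: bytes, signature: bytes, required_age: int) -> bool:
    """Platform-side check after the QR exchange: verify the issuer's
    signature, then compare the attested bound to the site's minimum age."""
    try:
        issuer_pub.verify(signature, payload)  # raises if the payload was tampered with
    except InvalidSignature:
        return False
    return json.loads(payload)["over_age"] >= required_age

payload, sig = issue_attestation(18)
print(platform_accepts(payload, sig, 18))  # True: clears the adult-content bound
print(platform_accepts(payload, sig, 13))  # True: 18 also covers the social-media bound
```

A production design would additionally bind each proof to a one-time challenge (such as the scanned QR code) to prevent replay, and would use unlinkable credentials so that platforms cannot track users across sites.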

Adoption Event

The need to introduce a common system for the entire European Union has been discussed for some time, and according to commission technicians, the technical work is now complete. Of course, it will still be possible to circumvent the system—all it takes is for an adult to lend their phone to a younger friend—but the technological architecture exists, and it will be up to EU member states to decide whether to integrate it into national digital wallets or develop independent apps.

“No More Excuses”

For the app to really be effective, platforms must be obligated to verify the age of their users—that’s where things get tricky. The Digital Services Act, which went into effect in 2024, requires “very large online platforms”—those with more than 45 million monthly users in the European Union—to take concrete steps to mitigate systemic risks related to child protection, with heavy penalties for noncompliance.

“And that’s why Europe has the DSA: to call online platforms to their responsibilities. Because Europe will not tolerate platforms making money at the expense of our children,” European Commission executive vice president Henna Virkkunen told a press conference. She added that after an investigation into TikTok, the European institutions plan to take similar action against Facebook, Instagram, and Snapchat, as well as four porn sites. “Since the platforms do not have adequate age verification tools, we developed the solution ourselves,” she concluded. In short, as von der Leyen also remarked, “there are no more excuses.”

Bare Minimum

For now, this is only the European framework, which sets the general rules. On this basis, member states can consider more restrictive measures. Italy was among the first to discuss how to regulate the use of social media by minors but has so far not landed on anything concrete. Elsewhere in the EU, France’s Emmanuel Macron has been a trailblazer on the issue, pushing France to discuss a rule that would ban social networks entirely for minors under the age of 15. So far, this measure has received broad political support—but the outcome depends largely on compatibility with the Digital Services Act and the availability of effective age verification systems like the app the European Commission just released.

This article originally appeared on WIRED Italia and has been translated.




Anthropic Plots Major London Expansion



Anthropic is moving into a new London office as it seeks to expand its research and commercial footprint in Europe, setting up a scrap between the leading AI labs for talent emerging from British universities.

The company, which opened its first London office in 2023, is moving to the same neighborhood as Google DeepMind, OpenAI, Meta, Wayve, Isomorphic Labs, Synthesia, and various AI research institutions.

Anthropic’s new 158,000-square-foot office will have enough space for 800 people—four times its current head count—giving it room to potentially outscale OpenAI, which itself recently announced an expansion in London.

“Europe’s largest businesses and fastest-growing startups are choosing Claude, and we’re scaling to match,” says Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent—we want to be where all of that comes together.”

UK government officials had reportedly attempted to coax Anthropic into expanding its presence in London after the company recently fell out with the US administration. Anthropic refused to allow its models to be used in mass surveillance and autonomous weapon systems, leading to an ongoing legal battle between the AI lab and the Pentagon.

As part of the expansion, Anthropic says it will deepen its work with the UK’s AI Security Institute, a government body that this week published a risk evaluation of its latest model, Claude Mythos Preview. According to Politico, the UK government is one of the few in Europe to have been granted access to the model, which Anthropic has released to only select parties, citing concerns over the potential for its abuse by cybercriminals.

The increasing concentration of AI companies in the same London district is an important step in creating a pathway for research to translate into AI products, says Geraint Rees, vice-provost at University College London, whose campus is around the corner from Anthropic’s new office.

“This cluster didn’t emerge from a planning document. It grew because serious researchers and companies understand that proximity isn’t a nice-to-have,” he said last month, speaking at an event attended by WIRED. “That’s how the innovation system actually works. It’s not a clean, linear transfer from lab to market. It’s messier, richer, more human than that.”




CYBERUK ’26: UK lagging on legal protections for cyber pros | Computer Weekly



The increasingly long-in-the-tooth Computer Misuse Act (CMA) of 1990 remains an albatross around the neck of British cyber security professionals, and even though the UK government committed last December to reforming it, every minute of delay is holding back the nation’s security innovation, resilience, talent, and ability to defend itself against cyber attacks, campaigners have warned.

Ahead of the National Cyber Security Centre’s (NCSC’s) upcoming CYBERUK conference in Glasgow, the CyberUp Campaign for reform of the CMA has published a new report, Protections for Cyber Researchers: How the UK is being left behind, to maintain pressure on Westminster.

The CMA defines the vague offence of unauthorised access to a computer, which the campaigners want changed because it was written 35 years ago and fails to account for the development of the cyber security profession, and the fact that in the course of their day-to-day work, cyber pros may sometimes need to hack into other systems.

“Cyber attacks are growing in scale, sophistication and severity, with a devastating impact on infrastructure, businesses and charities,” said a CyberUp campaign spokesperson.

“While other countries have moved to refresh their cyber laws in response, the UK’s Computer Misuse Act hasn’t been updated since before the modern internet – hardly the best platform for accelerating our defences into the next decade.”

The group’s report highlights how other jurisdictions (Australia, Belgium, France, Germany, Hong Kong, Malta, Portugal, and the US) have already secured legal protections for cyber professionals that enable them to go about their business without fear of prosecution.

In Portugal – Britain’s oldest formal ally under a treaty dating back to the 14th century – the government last year published Decreto-Lei 125/2025, implementing the European Union (EU) Network and Information Systems (NIS2) Directive and revising the country’s cyber crime law to ensure that ethical hackers and professional cyber security practitioners working in good faith are both recognised and protected.

Portugal’s laws now accept that some elements of cyber work may have to happen without explicit permission, or may involve unanticipated technical overreach that nevertheless serves a legitimate purpose.

As such, Portugal says that security work undertaken in good faith won’t be punished as long as the researcher fulfils a set of conditions. For example, they may act only to find vulnerabilities, which must be reported immediately; they must avoid taking harmful actions, such as conducting DDoS attacks or installing malware; and they must respect the integrity of any data they find or access, deleting it within 10 days of the issue being addressed.

CyberUp said Portugal’s example demonstrates how cyber crime laws can be modernised to legally protect research carried out in the public interest.

“Portugal has demonstrated how to modernise their equivalent law through cyber legislation. We urge the government to follow this example and act swiftly through the Cyber Security and Resilience Bill to achieve meaningful reform, or risk lagging even further behind our peers,” the spokesperson said.

Defence Framework

Working with cyber security experts and legal advisors, the CyberUp campaign has developed its own Defence Framework that would allow cyber professionals to present a statutory defence in court as long as they adhere to the Framework’s four core principles.

  • Harm vs. benefit: The benefits of the activity must outweigh the potential harms;
  • Proportionality: Cyber pros must take all reasonable steps to minimise the risks of their activity;
  • Intent: They must act honestly and sincerely, with the clear aim of improving security;
  • Competence: Their qualifications and professional memberships should demonstrate they are suitably equipped to perform cyber security work.

The campaigners say this framework will bring clarity and confidence to the security sector, enabling cyber pros to run essential research tasks without fear of criminal prosecution, helping organisations operate to recognised legal standards, and enabling a more open and collaborative relationship between the cyber sector and the UK government.


