
Software developers show less constructive skepticism when using AI assistants than when working with human colleagues



Credit: Unsplash/CC0 Public Domain

When writing program code, software developers often work in pairs—a practice that reduces errors and encourages knowledge sharing. Increasingly, AI assistants are taking on this role.

But this shift in working practice isn’t without its drawbacks, as a new empirical study by computer scientists in Saarbrücken reveals. Developers tend to scrutinize AI-generated code less critically and they learn less from it. These findings will be presented at the 40th IEEE/ACM International Conference on Automated Software Engineering (ASE 2025) in Seoul.

When two software developers collaborate on a programming project—known in technical circles as pair programming—it tends to yield a significant improvement in the quality of the resulting software.

“Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,” explains Sven Apel, professor of computer science at Saarland University.

Together with his team, Apel has examined whether this works equally well when one of the partners is an AI assistant. In the study, 19 students with programming experience were divided into pairs: six pairs consisted of two human partners, while in seven pairs a student collaborated with an AI assistant. The measurement methodology was developed by Niklas Schneider as part of his bachelor’s thesis.

For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which—like similar products from other companies—has since been widely adopted by software developers. These tools have significantly changed how software is written.

“It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,” says Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.

“Knowledge transfer is a key part of pair programming,” Apel explains. “Developers will continuously discuss current problems and work together to find solutions. This does not mean simply asking and answering questions; it also means that the developers share effective programming strategies and volunteer their own insights.”

According to the study, such exchanges also occurred in the AI-assisted teams—but the interactions were less intense and covered a narrower range of topics.

“In many cases, the focus was solely on the code,” says Apel. “By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.”

One finding particularly surprised the research team: “The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,” says Apel. “The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other’s contributions.”

He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well, stating, “I think it has to do with a certain degree of complacency—a tendency to assume the AI’s output is probably good enough, even though we know AI assistants can also make mistakes.”

Apel warns that this uncritical reliance on AI could lead to the accumulation of “technical debt,” which can be thought of as the hidden cost of the future work needed to correct these mistakes, complicating the further development of the software.

For Apel, the study highlights the fact that AI assistants are not yet capable of replicating the richness of human collaboration in software development.

“They are certainly useful for simple, repetitive tasks,” says Apel. “But for more complex tasks, knowledge exchange is essential—and that currently works best between humans, possibly with AI assistants as supporting tools.”

Apel emphasizes the need for further research into how humans and AI can collaborate effectively while still retaining the kind of critical eye that characterizes human collaboration.

More information:
Abstract: An Empirical Study of Knowledge Transfer in AI Pair Programming (2025).

Citation:
Software developers show less constructive skepticism when using AI assistants than when working with human colleagues (2025, November 3)
retrieved 3 November 2025
from https://techxplore.com/news/2025-11-software-skepticism-ai-human-colleagues.html







The UK government’s AI skills programme betrays UK workers and our digital sovereignty | Computer Weekly



Last month, the UK government announced the AI Skills Boost programme, promising “free AI training for all” and claiming that the courses will give people the skills needed to use artificial intelligence (AI) tools effectively. There are multiple reasons why we don’t agree.

US dependency over UK sovereignty

The “AI Skills Boost” is the free, badged “foundation” element of the government’s AI Skills Hub, which was launched with great fanfare. There are 14 courses, exclusively from big US organisations, promoting and training people on their platforms. The initiative increases dependency on US big tech – the opposite of the government’s recent conclusion, in its new AI opportunities action plan, to position the UK “to be an AI maker, not an AI taker”. It is also not clear how increasing UK workers’ reliance on and use of US big tech tools and platforms is intended to grow the UK’s homegrown AI talent.

In stark contrast to President Macron’s announcement last week that the French government will phase out its dependency on US-based big tech by using local providers to enhance digital sovereignty and privacy, technology secretary Liz Kendall’s speech was a lesson in contradictions.

Right after affirming that AI is “far too important a technology to depend entirely on other countries, especially in areas like defence, financial services and healthcare”, the secretary of state went on to say that the country’s strategy is to adopt existing technologies developed overseas.

Microsoft, one of the founding partners for this initiative, has already admitted that “US authorities can compel access to data held by American cloud providers, regardless of where that data physically resides”, further acknowledging that the company will honour any data requests from the US state, regardless of where in the world the data is housed. Is this the sovereignty and privacy the UK government is trying to achieve?

Commercial content rather than quality skills provision

The AI Skills Hub indexes hundreds of AI-related courses. That means the hub, which cost £4.1m to build, is simply a bookmark or affiliate list of online courses and resources that are already available, with seemingly no quality control or oversight. The decision to award the contract to a “Big Four” commercial consultancy, PwC, rather than the proven national data, AI and digital skills providers who tendered, needs to be investigated.

The press releases focus on the “free” element of the training, but 60% of the courses are paid-for, including some that are marked as free, which creates a deceptive funnel into paid commercial training providers.


The package launched includes 595 courses, but only 14 have been benchmarked by Skills England, and there has been a critical outcry over the dangerously poor quality of many courses, some of which are 10 years old, don’t exist, or are poor-quality AI slop.

An example of why this is so concerning is that many courses are not relevant to the UK. One of the promoted courses has already been shown to misrepresent UK law on intellectual property, with the course creators later denying they had any contractual arrangement with the site and admitting that they were “not consulted before our materials were posted and linked from there”.

Warnings on the need for public AI literacy provision ignored

Aside from concerns over the standards, safety, sovereignty and cost of the content offered, there is a much bigger issue, which we have been warning about.

Currently, 84% of the UK public feel disenfranchised and excluded from AI decision-making and mistrust key institutions, while 91% prioritise safe and fair usage of AI over economic gain and adoption speed.

In 2021, the UK’s AI Council provided a roadmap for developing the UK’s National AI Strategy. It advised on programmes of public and educational AI literacy that go beyond teaching technical or practical skills. This call has been repeated, especially in the wake of greater public exposure to generative AI since 2023, which now requires the public not only to know how to prompt or code, but also to think critically about the technology’s wider implications.

In July 2025, we, representing a number of specialists, education experts and public representatives, wrote an open letter calling for investment in the UK’s AI capabilities that goes beyond making people passive users of US tools. Despite initial agreements from the Department for Education and the Department for Science, Innovation and Technology to meet and discuss the letter, the offer was later rescinded.

Without comprehensive public understanding and sustained engagement, developing AI for public good and maintaining public trust will be a significant challenge. By investing in independent AI literacy initiatives that are accessible to all, and not just aimed at onboarding uncritical users and consumers, the UK can help ensure that its AI future is shaped with the public’s benefit at its heart.

Wasted opportunity to develop a beneficial UK approach to AI

We need to have greater national ambition than simply providing skills training. That the only skills provision available is substandard and is provided – at great public cost – by those with commercial interests in controlling how people think about and use AI is a further insult.

Indeed, Kendall’s claim that AI has the potential to add £400bn to the economy by 2030 is lifted from a report built by a sector consultancy that only focuses on the positive impact of Google technologies in the UK. Her announcement leaned heavily on claims such as “AI is now the engine of economic power and of hard power”, which come from a Silicon Valley playbook.

The focus on practical skills alone undermines the nation’s AI and tech sovereignty and harms the economy, with money leaving the country to fund big tech. It entrenches political disenfranchisement, with decisions about AI framed as too complex for the general population to meaningfully engage with. And it rests on fictitious narratives about inevitable big tech AI futures, in which public voice and public good are irrelevant.

If you wish to sign a second version of the open letter, which we are currently drafting, or to submit a critical AI literacy resource to We and AI’s resource hub, contact us here

This article is co-authored by:

  • Tania Duarte, founder, We and AI
  • Bruna Martins, director at Tecer Digital
  • Dr. Elinor Carmi, senior lecturer in data politics and social justice, City St. George’s University of London
  • Dr. Mark Wong, head of social and urban policy, University of Glasgow
  • Dr Susan Oman, senior lecturer, data, AI & society, The University of Sheffield
  • Ismael Kherroubi Garcia, founder & CEO, Kairoi
  • Cinzia Pusceddu, senior fellow of the Higher Education Academy, independent researcher
  • Dylan Orchard, postgraduate researcher, King’s College London
  • Tim Davies, director of research & practice, Connected by Data
  • Steph Wright, co-founder & managing director, Our AI Collective




Epstein Files Reveal Peter Thiel’s Elaborate Dietary Restrictions



Peter Thiel—the billionaire venture capitalist, PayPal and Palantir cofounder, and outspoken commentator on all matters relating to the “Antichrist”—appears at least 2,200 times in the latest batch of files released by the Department of Justice related to convicted sex offender and disgraced financier Jeffrey Epstein.

The tranche of records demonstrates how Epstein managed to cultivate an extensive network of wealthy and influential figures in Silicon Valley. A number of them, including Thiel, continued to interact with Epstein even after his 2008 guilty plea for solicitation of prostitution and procurement of minors to engage in prostitution.

The new files show that Thiel arranged to meet with Epstein several times between 2014 and 2017. “What are you up to on Friday?” Thiel wrote to Epstein on April 5, 2016. “Should we try for lunch?” The bulk of the communications between the two men in the data dump concern scheduling meals, calls, and meetings with one another. Thiel did not immediately return a request for comment from WIRED.

One piece of correspondence stands out for being particularly bizarre. On February 3, 2016, Thiel’s former chief of staff and senior executive assistant, Alisa Bekins, sent an email with the subject line “Meeting – Feb 4 – 9:30 AM – Peter Thiel dietary restrictions – CONFIDENTIAL.” The initial recipient of the email is redacted, but it was later forwarded directly to Epstein.

The contents of the message are also redacted in at least one version of the email chain uploaded by the Justice Department on Friday. However, two other files from what appears to be the same set of messages have less information redacted.

In one email, Bekins listed some two dozen approved kinds of sushi and animal protein, 14 approved vegetables, and no approved fruits for Thiel to eat. “Fresh herbs” and “olive oil” were permitted; ketchup, mayonnaise, and soy sauce, however, were to be avoided. Only one actual meal was explicitly outlined: “egg whites or greens/salad with some form of protein,” such as steak, which Bekins included “in the event they eat breakfast.” It’s unclear if the February 4 meeting ultimately occurred; other emails indicate Thiel got stuck in traffic on his way to meet Epstein that day.

According to a recording of an undated conversation between Epstein and former Israeli Prime Minister Ehud Barak that was also part of the files the DOJ released on Friday, Epstein told Barak that he was hoping to meet Thiel the following week. He added that he was familiar with Thiel’s company Palantir, but proceeded to spell it out loud for Barak as “Pallentier.” Epstein speculated that Thiel may put Barak on the board of Palantir, though there’s no evidence that ever occurred.

“I’ve never met Peter Thiel, and everybody says he sort of jumps around and acts really strange, like he’s on drugs,” Epstein said at one point in the audio recording, referring to Thiel. The former prime minister expressed agreement with Epstein’s assessment.

In 2015 and 2016, Epstein put $40 million into two funds managed by one of Thiel’s investment firms, Valar Ventures, according to The New York Times. Epstein and Thiel continued to communicate and were discussing meeting with one another as recently as January 2019, according to the files released by the DOJ. Epstein committed suicide in his prison cell in August of that year.

Below are Thiel’s dietary restrictions as outlined in the February 2016 email. (The following list has been reformatted slightly for clarity.)

APPROVED SUSHI + APPROVED PROTEIN

  • Kaki Oysters
  • Bass
  • Nigiri
  • Beef
  • Octopus
  • Catfish
  • Sashimi
  • Chicken
  • Scallops
  • Eggs
  • Sea Urchin
  • Lamb
  • Seabass
  • Perch
  • Spicy Tuna w Avocado
  • Squid
  • Turkey
  • Sweet Shrimps
  • Whitefish
  • Tobiko
  • Tuna
  • Yellowtail
  • Trout

APPROVED VEGETABLES

  • Artichoke
  • Avocado
  • Beets
  • Broccoli
  • Brussels sprouts
  • Cabbage
  • Carrots
  • Cucumber
  • Garlic
  • Olives
  • Onions
  • Peppers
  • Salad greens
  • Spinach

APPROVED NUTS

  • Anything unsalted and unroasted
  • Peanuts
  • Pecans
  • Pistachios

CONDIMENTS

  • Most fresh herbs, and olive oil

AVOID

  • Dairy
  • Fruits
  • Gluten
  • Grains
  • Ketchup
  • Mayo
  • Mushroom
  • Processed foods
  • Soy Sauce
  • Sugar
  • Tomato
  • Vinegar

MEAL SUGGESTIONS

  • Breakfast: Egg whites or greens/salad with some form of protein (steak, etc.)




Elon Musk Is Rolling xAI Into SpaceX—Creating the World’s Most Valuable Private Company



Elon Musk’s rocket and satellite company SpaceX is acquiring his AI startup xAI, the centibillionaire announced on Monday. In a blog post, Musk said the acquisition was warranted because global electricity demand for AI cannot be met with “terrestrial solutions,” and Silicon Valley will soon need to build data centers in space to power its AI ambitions.

“In the long term, space-based AI is obviously the only way to scale,” Musk wrote. “The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space. I mean, space is called ‘space’ for a reason.”

The deal, which pulls together two of Musk’s largest private ventures, values the combined entity at $1.25 trillion, making it the most valuable private company in the world, according to a report from Bloomberg.

SpaceX had been preparing to go public later this year before the xAI acquisition was announced, and the space firm’s plans for an initial public offering are still on, according to Bloomberg.

In December, SpaceX told employees that it would buy insider shares in a deal that would value the rocket company at $800 billion, according to The New York Times. Last month, xAI announced that it had raised $20 billion from investors, bringing the company’s valuation to roughly $230 billion.

This isn’t the first time Musk has sought to consolidate parts of his vast business empire, which is largely privately owned and includes xAI, SpaceX, the brain interface company Neuralink, and the tunnel transportation firm the Boring Company.

Last year, xAI acquired Musk’s social media platform, X, formerly known as Twitter, in a deal that valued the combined entity at more than $110 billion. Since then, xAI’s core product, Grok, has become further integrated into the social media platform. Grok is featured prominently in various X features, and Musk has claimed the app’s content-recommendation algorithm is powered by xAI’s technology.

A decade ago, Musk also used shares of his electric car company Tesla to purchase SolarCity, a renewable energy firm that was run at the time by his cousin Lyndon Rive.

The xAI acquisition demonstrates how Musk can use his expansive network of companies to help power his own often grandiose visions of the future. Musk said in the blog post that SpaceX will immediately focus on launching satellites into space to power AI development on Earth, but that eventually the space-based data centers he envisions building could power civilizations on other planets, such as Mars.

“This marks not just the next chapter, but the next book in SpaceX and xAI’s mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars,” Musk said in the blog post.


