Tech
Student trust in AI coding tools grows briefly, then levels off with experience
How much do undergraduate computer science students trust AI coding tools powered by large language models, such as GitHub Copilot and ChatGPT? And how should computer science educators adapt their teaching based on these levels of trust?
These were the questions that a group of U.S. computer scientists set out to answer in a study that will be presented at the Koli Calling conference, held Nov. 11 to 16 in Finland. Over the study's few weeks, researchers found that trust in generative AI tools increased in the short run for a majority of students.
But in the long run, students said they realized they needed to be competent programmers even without the help of AI tools, because these tools often generate incorrect code or fail to help with code comprehension tasks.
The study was motivated by the dramatic change in the skills required from undergraduate computer science students since the advent of generative AI tools that can create code from scratch. The work is published on the arXiv preprint server.
“Computer science and programming is changing immensely,” said Gerald Soosairaj, one of the paper’s senior authors and an associate teaching professor in the Department of Computer Science and Engineering at the University of California San Diego.
Today, students are tempted to overly rely on chatbots to generate code and, as a result, might not learn the basics of programming, researchers said. These tools also might generate code that is incorrect or vulnerable to cybersecurity attacks. Conversely, students who refuse to use chatbots miss out on the opportunity to program faster and be more productive.
But once they graduate, computer science students will most likely use generative AI tools in their day-to-day, and need to be able to do so effectively. This means they will still need to have a solid understanding of the fundamentals of computing and how programs work, so they can evaluate the AI-generated code they will be working with, researchers said.
“We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study, a more elaborate project, students felt that using Copilot to its full extent requires a competent programmer who can complete some tasks manually,” said Soosairaj.
The study surveyed 71 junior and senior computer science students, half of whom had never used GitHub Copilot. After an 80-minute class where researchers explained how GitHub Copilot works and had students use the tool, half of the students said their trust in the tool had increased, while about 17% said it had decreased. Students then took part in a 10-day project in which they used GitHub Copilot to add a small new feature to a large open-source codebase.
At the end of the project, about 39% of students said their trust in Copilot had increased. But about 37% said their trust in Copilot had decreased somewhat while about 24% said it had not changed.
The results of this study have important implications for how computer science educators should approach the introduction of AI assistants in introductory and advanced courses. Researchers make a series of recommendations for computer science educators in an undergraduate setting.
- To help students calibrate their trust and expectations of AI assistants, computer science educators should provide opportunities for students to use AI programming assistants for tasks with a range of difficulty, including tasks within large codebases.
- To help students determine how much they can trust AI assistants’ output, computer science educators should ensure that students can still comprehend, modify, debug, and test code in large codebases without AI assistants.
- Computer science educators should ensure that students are aware of how AI assistants generate output via natural language processing so that students understand the AI assistants’ expected behavior.
- Computer science educators should explicitly present and demonstrate the key features of AI assistants that are useful for contributing to a large codebase, such as adding files as context while using the ‘explain code’ feature and using commands such as “/explain,” “/fix,” and “/docs” in GitHub Copilot.
“CS educators should be mindful that how we present and discuss AI assistants can impact how students perceive such assistants,” the researchers write.
Researchers plan to repeat their experiment and survey with a larger pool of 200 students this winter quarter.
More information:
Anshul Shah et al, Evolution of Programmers’ Trust in Generative AI Programming Assistants, arXiv (2025). DOI: 10.48550/arxiv.2509.13253
Conference: www.kolicalling.fi/
Citation:
Student trust in AI coding tools grows briefly, then levels off with experience (2025, November 3)
retrieved 3 November 2025
from https://techxplore.com/news/2025-11-student-ai-coding-tools-briefly.html
Anthropic Plots Major London Expansion
Anthropic is moving into a new London office as it seeks to expand its research and commercial footprint in Europe, setting up a scrap between the leading AI labs for talent emerging from British universities.
The company, which opened its first London office in 2023, is moving to the same neighborhood as Google DeepMind, OpenAI, Meta, Wayve, Isomorphic Labs, Synthesia, and various AI research institutions.
Anthropic’s new 158,000-square-foot office will have enough space for 800 people, four times its current head count, giving it room to potentially outscale OpenAI, which itself recently announced an expansion in London.
“Europe’s largest businesses and fastest-growing startups are choosing Claude, and we’re scaling to match,” says Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent—we want to be where all of that comes together.”
UK government officials had reportedly attempted to coax Anthropic into expanding its presence in London after the company recently fell out with the US administration. Anthropic refused to allow its models to be used in mass surveillance and autonomous weapon systems, leading to an ongoing legal battle between the AI lab and the Pentagon.
As part of the expansion, Anthropic says it will deepen its work with the UK’s AI Security Institute, a government body that this week published a risk evaluation of its latest model, Claude Mythos Preview. According to Politico, the UK government is one of few across Europe to have been granted access to the model, which Anthropic has released to only select parties, citing concerns over the potential for its abuse by cybercriminals.
The increasing concentration of AI companies in the same London district is an important step in creating a pathway for research to translate into AI products, says Geraint Rees, vice-provost at University College London, whose campus is around the corner from Anthropic’s new office.
“This cluster didn’t emerge from a planning document. It grew because serious researchers and companies understand that proximity isn’t a nice-to-have,” he said last month, speaking at an event attended by WIRED. “That’s how the innovation system actually works. It’s not a clean, linear transfer from lab to market. It’s messier, richer, more human than that.”
LG’s High-End Soundbar System Makes My Living Room Feel Like a Home Theater
Setup was relatively quick and painless. You just have to unbox four speakers, a soundbar, and a subwoofer, attach their power cables, and plug in everything. Pairing happens through the LG ThinQ app, which allows you to set up the Sound Suite system and tune it to exactly where you’re sitting in the room using your cell phone’s microphone.
You can also set up each speaker to play music and group it with any other LG smart speakers you might have around your home, like the more affordable $250 M5 bookshelf speaker, to create a whole-home system.
Once all the components were synced, I plugged the soundbar into the C5 OLED via HDMI, and was able to easily control everything via the TV remote’s volume and mute buttons. More in-depth settings had to happen in the app, but if you’re anything like me, this won’t become a regular chore. You’ll set it how you like it once and move on. While the pairing functionality with the LG TV was nice, it’s not required: the eARC port lets the Sound Suite work perfectly with any modern TV.
The bar itself runs the show, with a black-and-white display on the far left that shows your mode and volume, among other settings. In the center of the bar and below each speaker, there’s an LED light strip that also shows you the volume when you change it, which is a nice touch.
Getting Musical
Photograph: Parker Hall
The sound of the LG Sound Suite is full and cinematic, thanks in no small part to the extra dedicated speakers. Most competitors lack dedicated front left and right speakers, simply using the soundbar for those channels. As such, the width and breadth of the soundstage were bigger than those of most competitors I’ve tried, with only Samsung’s flagship HW-Q990F as a real contender. Even the Samsung lacked the lower-frequency audio quality that these LG speakers provide.
Cyber Essentials closes the MFA loophole but leaves some organisations adrift | Computer Weekly
On 27 April, version 3.3 of the government-backed security certification scheme Cyber Essentials takes effect, and multi-factor authentication (MFA) becomes a pass-or-fail requirement for the first time.
If a cloud service your organisation uses offers MFA and you have not enabled it, you fail. No discretion, no partial credit, no route to remediate inside the assessment cycle.
This is the right call. I want to say that clearly, because what follows is a problem with the implementation, not the policy. MFA is the single most effective control against credential-based attacks, and the scheme has needed to stop tolerating its absence for a long time. The National Cyber Security Centre (NCSC), part of GCHQ, which developed Cyber Essentials, and the certification company IASME have got this decision right.
But in the assessments we have conducted this year, I have seen two organisations that will hit a wall on 27 April, and I do not think they are unusual.
Train company could not deploy MFA
The first is a train operating company in the South East. Station operations rooms run on shared terminals where staff rotate through shifts in time-critical conditions. A transport union raised formal concerns that MFA would introduce delays at the keyboard that could affect train operations and, in their view, the safety of train movements.
The company listened and chose not to enable MFA in those environments. Under v3.2 they passed, with the relevant questions marked as non-compliant but not fatal. Under Cyber Essentials v3.3 they will fail.
Charity run by volunteers faces MFA hurdle
The second is a nationally known charity with hundreds of high street shops. The shops are staffed largely by volunteers, many of whom work a few hours a week, and staff turnover is high.
The cost and management overhead of enrolling every volunteer onto MFA, using personal phones they may not have and authenticator apps they would not keep, was considered prohibitive. So MFA was never switched on. Same story: they passed under v3.2. Under v3.3 they fail.
Neither of these organisations is ignoring security. Both made considered decisions based on how their people actually work. The problem is not that they do not want to comply. It is that the standard toolkit of MFA methods, including SMS codes, authenticator apps on personal phones, and push notifications, does not fit a six-person shared terminal that has to be available in seconds, or a volunteer workforce that changes every week.
FIDO2 could offer solutions
The frustrating part is that there is a solution, and it is already proven in healthcare, manufacturing and retail. FIDO2 authentication delivered through NFC badge-taps lets a staff member authenticate in under two seconds: tap a badge, enter a short PIN, session opens.
It satisfies the MFA requirement by combining possession of the badge with knowledge of the PIN. It is faster than typing a password. Crucially, it is compliant, because each badge is enrolled as that individual’s unique FIDO2 credential, so the Cyber Essentials requirement for unique user accounts is met. Shared keys or shared PINs would not work. Individual badges do.
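The possession-plus-knowledge pairing described above can be sketched in a few lines. This is a toy Python illustration, not the real FIDO2/CTAP2 protocol: an actual badge signs a challenge with a private key that never leaves the hardware, whereas here an HMAC over a per-badge secret stands in for that signature, and the `Badge` and `Terminal` names are invented for illustration.

```python
# Toy sketch of badge-tap MFA: possession (a unique per-badge key)
# combined with knowledge (a short PIN). Real FIDO2 uses public-key
# signatures over CTAP2/WebAuthn; HMAC is a stand-in here.
import hashlib
import hmac
import secrets


class Badge:
    """One enrolled badge = one person's unique credential (possession)."""

    def __init__(self, holder: str):
        self.holder = holder
        # In real FIDO2 hardware this key never leaves the badge.
        self._key = secrets.token_bytes(32)

    def sign(self, challenge: bytes) -> bytes:
        # Stand-in for the badge's challenge-response signature.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


class Terminal:
    """Shared terminal that checks both factors before opening a session."""

    def __init__(self):
        self._enrolled = {}  # holder -> (badge key, PIN hash)

    def enrol(self, badge: Badge, pin: str) -> None:
        pin_hash = hashlib.sha256(pin.encode()).hexdigest()
        self._enrolled[badge.holder] = (badge._key, pin_hash)

    def authenticate(self, badge: Badge, pin: str) -> bool:
        record = self._enrolled.get(badge.holder)
        if record is None:
            return False
        key, pin_hash = record
        challenge = secrets.token_bytes(16)  # fresh challenge per tap
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        has_badge = hmac.compare_digest(badge.sign(challenge), expected)
        knows_pin = hashlib.sha256(pin.encode()).hexdigest() == pin_hash
        return has_badge and knows_pin
```

Because each badge carries its own key, two enrolled users are distinguishable even on a shared terminal, which is what satisfies the unique-account requirement; a stolen badge without the PIN, or a guessed PIN without the badge, fails the check.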
Need for better guidance
v3.3 explicitly recognises FIDO2 authenticators and passkeys as valid MFA methods. The compliance path is clear. What is missing is anyone telling the organisations most affected that this path exists.
That is the gap that must close. The NCSC and IASME have made the right policy decision; the scheme would be weaker without it.
But implementation guidance for shared-terminal, shift-based and high-turnover environments is thin, and these organisations are running out of time to find their way through it. Many of them hold Cyber Essentials because it is required for government contracts or in their supply chains; losing certification has a direct commercial cost.
The answer is not to soften the requirement. The answer is to make sure no one fails for lack of information about how to meet it.
Jonathan Krause is Founder and Managing Director of Forensic Control