Can LLMs understand scientists? | Computer Weekly

The use of large language models (LLMs) as an alternative to search engines and recommendation algorithms is increasing, but early research suggests there is still a high degree of inconsistency and bias in the results these models produce. This has real-world consequences, as LLMs play a greater role in our decision-making.
Making sense of algorithmic recommendations is tough. In the past, we had entire industries dedicated to understanding (and gaming) the results of search engines – but the level of complexity of what goes into our online recommendations has risen several times over in just a matter of years. The massive diversity of use cases for LLMs has made audits of individual applications vital in tackling bias and inaccuracies.
Scientists, governments and civil society are scrambling to make sense of what these models are spitting out. A group of researchers at the Complexity Science Hub in Vienna has been looking at one area in particular where these models are being used: identifying scholarly experts. Specifically, these researchers were interested in which scientists are being recommended by these models – and which were not.
Lisette Espín-Noboa, a computer scientist working on the project, had been looking into this before major LLMs had hit the market: “In 2021, I was organising a workshop, and I wanted to come up with a list of keynote speakers.” First, she went to Google Scholar, an open-access database of scientists and their publications. “[Google Scholar] ranks them by citations – but for several reasons, citations are biased.”
This meant trawling through pages and pages of male scientists. Some fields of science are simply more popular than others, with researchers having more influence purely due to the size of their discipline. Another issue is that older scientists – and older pieces of research – will naturally have more citations simply for having been around longer, rather than for the novelty of their findings.
“It’s often biased towards men,” Espín-Noboa points out. Even with more women entering the profession, most scientific disciplines have been male-dominated for decades.
Daniele Barolo, another researcher at the Complexity Science Hub, describes this as an example of the Matthew Effect. “If you sort the authors only by citation counts, it’s more likely they will be read and therefore cited, and this will create a reinforcement loop,” he explains. In other words, the rich get richer.
Espín-Noboa continues: “Then I thought, why don’t I use LLMs?” These tools could also fill in the gaps by including scientists that aren’t on Google Scholar.
But first, they would have to understand whether these were an improvement. “We started doing these audits because we wanted to know how much they knew about people, [and] if they were biased towards men or not,” Espín-Noboa says. The researchers also wanted to see how accurate the tools were and whether they displayed any biases based on ethnicity.
Auditing
They came up with an experiment to test the recommendations given by LLMs along various lines, narrowing their requests to scientists published in the journals of the American Physical Society. They asked these LLMs for various recommendations, such as naming the most important scientists in certain fields or identifying experts from certain periods of time.
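To give a flavour of what such an audit involves, here is a minimal sketch of a recommendation query put to an open-weight model via the Hugging Face transformers library. The model name and prompt wording are illustrative assumptions, not the researchers’ actual protocol:

```python
# Minimal sketch of a recommendation-audit query; the model and the prompt
# are illustrative assumptions, not the study's actual setup.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

prompt = ("List five influential physicists who published in American "
          "Physical Society journals between 1990 and 2000. "
          "Return only their names.")

result = generator(prompt, max_new_tokens=100, do_sample=False)
print(result[0]["generated_text"])
```

An audit of this kind would repeat such prompts many times, varying the field and time period, then check the returned names against bibliographic records for accuracy and for how often women and minority scientists appear.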
While they couldn’t test for the absolute influence of a scientist – no such “ground truth” for this exists – the experiment did surface some interesting findings. Their paper, which is currently available as a preprint, suggests Asian scientists are significantly underrepresented in the recommendations provided by LLMs, and that existing biases against female authors are often replicated.
Despite detailed instructions, in some cases these models would hallucinate the names of scientists, particularly when asked for large lists of recommendations, and would not always be able to differentiate between varying fields of expertise.
“LLMs cannot be seen directly as databases, because they are linguistic models,” Barolo says.
One test was to prompt the LLM with the name of a scientist and ask it for someone with a similar academic profile – a “statistical twin”. But when they did this, “not only scientists that actually work in a similar field were recommended, but also people with a similar-looking name,” adds Barolo.
As with all experiments, there are certain limitations: for a start, this study was only conducted on open-weight models. These have a degree of transparency, although not as much as fully open-source models. Users are able to set certain parameters and to modify the structure of the algorithms used to fine-tune their outputs. By contrast, most of the largest foundation models are closed-weight ones, with minimal transparency and opportunities for customisation.
But even open-weight models come up against issues. “You don’t know completely how the training process was conducted and which training data was used,” Barolo points out.
The research was conducted on versions of Meta’s Llama models, Google’s Gemma (a more lightweight model than their flagship Gemini) and a model from Mistral. Each of these has already been superseded by newer models – a perennial problem for carrying out research on LLMs, as the academic pipeline cannot move as quickly as industry.
Aside from the time needed to execute research itself, papers can be held up for months or years in review. On top of this, a lack of transparency and the ever-changing nature of these models can create difficulties in reproducing results, which is a crucial step in the scientific process.
An improvement?
Espín-Noboa has previously worked on auditing more low-tech ranking algorithms. In 2022, she published a paper analysing the impacts of PageRank – the algorithm which arguably gave Google its big breakthrough in the late 1990s. It has since been used by LinkedIn, Twitter and Google Scholar.
PageRank was designed to make a calculation based on the number of links an item has in a network. In the case of webpages, this might be how many websites link to a certain site; or for scholars, it might make a similar calculation based on co-authorships.
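For intuition, the core idea can be sketched in a few lines of Python. This is a simplified power-iteration version for illustration only, not the production algorithm any of these platforms actually runs:

```python
# Simplified PageRank via power iteration; a sketch for intuition only.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each node to the list of nodes it points to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, outgoing in links.items():
            targets = outgoing or nodes   # dangling nodes spread rank evenly
            for target in targets:
                new_rank[target] += damping * rank[node] / len(targets)
        rank = new_rank
    return rank

# Co-authorship example: each scientist "links" to their co-authors.
print(pagerank({"ada": ["bob"], "bob": ["ada", "eve"], "eve": ["ada"]}))
```

A node pointed to by many well-ranked nodes accumulates rank itself, which is exactly the reinforcement loop Barolo describes: the rich get richer.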
Espín-Noboa’s research shows the algorithm has its own problems – it may serve to disadvantage minority groups. Despite this, PageRank is still fundamentally designed with recommendations in mind.
In contrast, “LLMs are not ranking algorithms – they do not understand what a ranking is right now”, says Espín-Noboa. Instead, LLMs are probabilistic – making a best guess at a correct answer by weighing up word probabilities. Espín-Noboa still sees promise in them, but says they are not up to scratch as things stand.
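The difference is easy to see in miniature. A ranking algorithm scores every candidate against an explicit criterion; an LLM generates an answer one token at a time, sampling from a probability distribution over possible next words. A toy sketch, with made-up numbers:

```python
# Toy illustration of next-token sampling; the probabilities are made up.
import random

next_token_probs = {"Einstein": 0.40, "Curie": 0.25, "Feynman": 0.20, "Noether": 0.15}
tokens, weights = zip(*next_token_probs.items())

# Each call can return a different name, which is one reason identical
# prompts can yield inconsistent "rankings".
print(random.choices(tokens, weights=weights, k=1)[0])
```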
There is also a practical component to this research, as these researchers hope to ultimately create a way for people to better seek recommendations.
“Our final goal is to have a tool that a user can interact with easily using natural language,” says Barolo. This will be tailored to the needs of the user, allowing them to pick which issues are important to them.
“We believe that agency should be on the user, not on the LLM,” says Espín-Noboa. She cites the example of Google’s Gemini image generator overcorrecting for biases – representing American founding fathers (and Nazi soldiers) as people of colour after one update – which led the company to temporarily suspend the feature.
Instead of having tech companies and programmers make sweeping decisions on the model’s output, users should be able to pick the issues most important to them.
The bigger picture
Research such as that going on at the Complexity Science Hub is happening across Europe and the world, as scientists race to understand how these new technologies are affecting our lives.
Academia has a “really important role to play”, says Lara Groves, a senior researcher at the Ada Lovelace Institute. Having studied how audits are taking place in various contexts, Groves says groups of academics – such as the community behind the annual FAccT conference on fairness, accountability and transparency – are “setting the terms of engagement” for audits.
Even without full access to training data and the algorithms these tools are built on, academia has “built up the evidence base for how, why and when you might do these audits”. But she warns these efforts can be hampered by the level of access researchers are given, as they are often only able to examine the models’ outputs.
Despite this, she would like to see more assessments taking place “at the foundation model layer”. Groves continues: “These systems are highly stochastic and highly dynamic, so it’s impossible to tell the range of outputs upstream.” In other words, the massive variability of what LLMs are producing means we ought to be checking under the hood before we start looking at their use cases.
Other industries – such as aviation or cyber security – already have rigorous processes for auditing. “It’s not like we’re working from first principles or from nothing. It’s identifying which of those mechanisms and approaches are analogous to AI,” Groves adds.
Amid an arms race for AI supremacy, any testing done by the major players is closely guarded. There have been occasional moments of openness: in August, OpenAI and Anthropic carried out audits on each other’s models and released their findings to the public.
Much of the work of interrogating LLMs will still fall to those outside of the tent. Methodical, independent research might allow us to glimpse into what’s driving these tools, and maybe even reshape them for the better.
Computer scientists are boosting US cybersecurity

As cyber threats grow more sophisticated by the day, UC Riverside researchers are making computing safer thanks to research that targets some of the internet’s most pressing security challenges.
UCR computer science and engineering students and faculty in the Marlan and Rosemary Bourns College of Engineering are developing tools to expose hidden vulnerabilities, protect private data, and strengthen the digital defenses that safeguard everything from personal communications to national infrastructure.
Their work is at the forefront of cybersecurity innovation—and underscores the critical role of federal investment in higher education research.
“Cybersecurity impacts every aspect of our lives, from personal privacy to national security. At UC Riverside, with support from federal grants, we’re training the next generation of computer scientists and engineers who are already making the internet and IT systems safer for everyone,” said Amit Roy-Chowdhury, a Bourns professor and co-director of the UC Riverside Artificial Intelligence Research and Education (RAISE) Institute.
Here are examples of computer security innovations published and presented at conferences this year:
Protecting data in AI learning
As artificial intelligence spreads into health care, finance, and government, privacy is paramount. But UCR graduate student Hasin Us Sami discovered that even methods designed to keep sensitive information safe can be compromised.
His paper, “Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning”, posted to the arXiv preprint server, shows that adversaries can reconstruct private images from a training process called federated learning that was thought to be safer. Federated learning lets users train AI models on their own devices without sharing raw data.
For example, several hospitals may want to team up to develop AI models that detect diseases from patient tissue image scans. The research found that attackers could reverse-engineer data from the information that is shared and demonstrated how malicious servers could retrieve private images during training from state-of-the-art learning architectures, underscoring the urgent need for stronger defenses. The work was recognized at the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, one of the top gatherings of AI researchers.
His paper was co-authored by graduate student Swapneel Sen, professors Amit K. Roy-Chowdhury and Srikanth V. Krishnamurthy, and assistant professor Basak Guler.
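For context, the training setup the attack targets can be sketched as follows. In this heavily simplified, illustrative federated-averaging loop, each client computes a gradient on its own private data and shares only that gradient with the server; gradient inversion attacks work because those shared updates still encode information about the underlying data:

```python
# Heavily simplified federated averaging on a toy linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "hospitals", each holding private data that never leaves the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

w = np.zeros(2)  # global model held by the server
for _ in range(100):
    # Each client computes a gradient locally on its own private data.
    grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in clients]
    # Only the gradients reach the server -- yet gradient inversion
    # shows they can still leak the private training data.
    w -= 0.1 * np.mean(grads, axis=0)

print(w)  # converges towards true_w without the server seeing X or y
```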
Unmasking firewall weaknesses
Research by graduate student Qing Deng focused on firewalls that millions rely on for protection. In the paper “Beyond the Horizon: Uncovering Hosts and Services Behind Misconfigured Firewalls,” published in the 2025 IEEE Symposium on Security and Privacy (SP), Deng and colleagues revealed that small configuration mistakes could open the door to cyber intruders.
By scanning the internet for unusual access points, Deng uncovered more than 2 million hidden services exposed by misconfigured firewalls—ranging from outdated servers to vulnerable home routers. These flaws, though overlooked for years, create what the team calls an “expanded observable internet,” a larger attack surface than security experts previously realized. The paper was co-authored by graduate students Juefei Pu and Zhaowei Tan, and professors Zhiyun Qian and Srikanth V. Krishnamurthy.
Detecting invisible network flaws
For doctoral student Keyu Man, the threat of invisible “side-channel” attacks is a high priority. These attacks exploit subtle quirks in network protocols to allow hackers to hijack connections in a commonly used kind of server.
Known as “domain name system” servers, these computers translate human-friendly domain names into machine-readable IP addresses, allowing devices to find and connect to the right server.
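The lookup these servers perform is the same one exposed by Python’s standard library, as in this small example:

```python
# Resolve a domain name to IP addresses via the system's DNS resolver.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # prints the IPv4/IPv6 addresses behind the name
```

Because this translation step sits in front of almost every connection, an attacker who can tamper with or hijack it can redirect victims to a server they control.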
Man co-authored the paper “SCAD: Towards a Universal and Automated Network Side-Channel Vulnerability Detection,” also published in the 2025 IEEE Symposium on Security and Privacy (SP), which introduces a tool called Side-ChAnnel Detector, or SCAD, to automatically uncover weaknesses in widely used operating systems like Linux and FreeBSD. Unlike previous methods that required weeks of painstaking manual work, SCAD can identify flaws in a single day of analysis.
Man’s research revealed 14 vulnerabilities—seven previously unknown—that could have been exploited for devastating cyberattacks. By automating the process, SCAD could change how industry protects critical online infrastructure.
The co-authors of this study include graduate students Zhongjie Wang, Yu Hao, Shenghan Zheng, Xin’an Zhou, Yue Cao, and professor Zhiyun Qian.
More information:
Hasin Us Sami et al, Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning, arXiv (2025). DOI: 10.48550/arxiv.2506.04453
Qing Deng et al, Beyond the Horizon: Uncovering Hosts and Services Behind Misconfigured Firewalls, 2025 IEEE Symposium on Security and Privacy (SP) (2025). DOI: 10.1109/sp61157.2025.00164
Keyu Man et al, SCAD: Towards a Universal and Automated Network Side-Channel Vulnerability Detection, 2025 IEEE Symposium on Security and Privacy (SP) (2025). DOI: 10.1109/sp61157.2025.00068
Government meets with car parts suppliers amid JLR cyber crisis | Computer Weekly

The Department for Business and Trade (DBT) is conducting high-level engagement with Jaguar Land Rover (JLR) and the wider British automotive industry as car production remains suspended at JLR’s facilities following a cyber attack.
DBT representatives today (Friday 19 September) held an extraordinary meeting with the Society of Motor Manufacturers and Traders (SMMT) Automotive Components Section amid the ongoing disruption to the wider supply chain in the UK.
During the talks, officials heard about some of the challenges the sector is facing as a result of the sudden shutdown at JLR. The Tata-owned carmaker produced over 300,000 vehicles in 2024 and employs over 30,000 people, making it a cornerstone of the UK’s automotive industry.
The DBT said it was working to understand the impact on the supply chain, and that the meeting had allowed it to listen directly to, and understand, the challenges and concerns JLR’s suppliers are facing.
Computer Weekly understands that many of these suppliers have had to shut down their own assembly lines since they cannot now send their finished products to JLR, and some are facing the prospect of lay-offs as a result.
“We know this is a worrying time for those affected, and although Jaguar Land Rover are taking the lead on support for their own supply chain, our cyber experts continue to support them to resolve the issue as quickly as possible,” said minister for industry Chris McDonald.
McDonald additionally met with West Midlands mayor Richard Parker on Thursday 18 September to discuss the impact of the JLR shutdown on the region.
On 17 September, trade union Unite urged the government to consider setting up a furlough scheme – similar to the nationwide scheme put in place for many sectors during the early days of the Covid-19 pandemic in 2020 – to preserve the jobs of an estimated 200,000 people.
“Workers in the JLR supply chain must not be made to pay the price for the cyber attack,” said Unite general secretary Sharon Graham. “It is the government’s responsibility to protect jobs and industries that are a vital part of the economy.”
Unite has advised some of the affected workers that they may be able to apply for Universal Credit.
JLR production is currently scheduled to resume on 24 September, but according to the BBC, Unite believes there is “zero chance” of this happening.
Ongoing incident
The JLR incident began at the end of August but first became public on 2 September when the Liverpool Echo revealed that workers at the firm’s Halewood plant in Merseyside had been told not to come into work.
The attack came just days after the new ‘75’ batch of vehicle registration plates was made available, a regular six-monthly switchover that typically brings a boost in car sales in the UK.
JLR subsequently revealed that data was exfiltrated from its systems during the attack, although the precise nature of this data has not been disclosed.
The attack was swiftly claimed by a hacking collective referring to itself as Scattered Lapsus$ Hunters – an apparent collaboration between three associated groups, Scattered Spider, Lapsus$ and ShinyHunters. It should be noted that attribution is a highly imprecise science, and the veracity of these links has not been officially confirmed by law enforcement.
UK cyber action plan lays out path to resilience | Computer Weekly

A report produced for the government has today set out nine core recommendations for how the UK can strengthen its burgeoning cyber security sector to fuel resilience and growth across the economy.
Written by experts at Imperial College London (ICL) and the University of Bristol, and drawing on consultations with nearly 100 members of the cyber community, the UK cyber growth action plan slots into the government’s Modern Industrial Strategy, and will feed into an ongoing refresh of the National Cyber Strategy.
The report says that although the UK’s cyber sector remains on an upward trajectory – jobs and revenue both rose by over 10%, and gross value added (GVA) by over 20%, in the past 12 months – cyber, taken as a whole, is still undervalued. It describes “significant untapped potential” to go further still.
“The cyber security sector in the UK has significant growth potential, and there are clear roles for both government and the private sector identified … to contribute to tapping into that potential,” said Nigel Steward, director of the Centre for Sectoral Economic Performance (CSEP) at ICL.
“Supporting the sector isn’t just an economic opportunity, it’s essential for our national security and the resilience of businesses, so we at CSEP are very happy to have been able to produce this independent report in partnership with the University of Bristol to support the government’s Modern Industrial Strategy.”
Guy Poppy, pro vice-chancellor for research and innovation at the University of Bristol, added: “The UK’s cyber sector is a driver of innovation, resilience and economic growth. This action plan provides a timely roadmap, recognising how emerging technologies will shape future challenges and opportunities for stakeholders. It sets out a framework for research, skills and collaboration to turn innovation into growth and nationwide impact.
“By combining academic excellence with enterprise and policy engagement, we can help build a stronger, more resilient cyber ecosystem.”
Three pillars, nine recommendations
Each of the nine core recommendations is organised around three pillars – culture, leadership and places – designed to be implemented together to maximise their impact and force change at a systemic level.
The report’s authors caveat this by noting that the recommendations are not designed to be exhaustive, and that, given how quickly the report was researched and compiled, further work will likely be needed to produce more granular recommendations.
On the first pillar, culture, the report recognises that growing British cyber businesses will depend on better interaction between product and service suppliers, and security buyers and leaders, and the first three recommendations are designed to address this.
- First, government and stakeholders should review incentives and validation routes available to cyber businesses to help make it easier to navigate complex cyber demands and build a culture that helps organisations grow;
- Second, government should stimulate growth by setting expectations on reporting cyber risk, encouraging uptake of cyber insurance and principles-based assurance, and possibly mandating the use of accreditations such as the National Cyber Security Centre’s (NCSC’s) Cyber Essentials scheme;
- Third, cyber professionals should engage with civil society on their role in national resilience and prosperity, to foster public participation in security. They could, for example, emphasise the role security teams at critical infrastructure operators play in keeping the nation’s homes lit and warm. This effort would also include shoring up cyber skills initiatives at schools and colleges to develop future talent.
On the second pillar, the report recognises that cyber leaders today tend not to be very focused on connecting supply and demand for sector growth. The fourth, fifth and sixth recommendations set out to address this.
- The report recommends the appointment of a UK cyber growth leader to coordinate across the security sector and within government. This role would encompass some duties previously held by the now-defunct UK cyber ambassador in promoting exports in support of the country’s national security, as well as responsibility for driving forward a plan to prioritise cyber growth and integrate it into various policy areas;
- Next, it calls for the appointment of “place-based leaders” who can convene and drive local cyber security growth initiatives and outcomes. Ideally, these individuals will have significant experience in the industry. Although they will work with the cyber growth leader, they should remain independent from all levels of government;
- Then, the government should expand and better resource the NCSC, which the report’s authors describe as a “crown jewel” for cyber resilience, using its deep expertise in support of cyber growth, business guidance and validation, and technological research.
The third pillar recognises the role of “places” in innovation and growth. On this basis, the final three recommendations are designed to help attract cyber investors, shape research and development (R&D), and build relationships to help new security businesses get up and running.
- Place-based leaders should develop future-oriented communities that bring together security pros and chief information security officers, academics, small and large businesses, government, and other stakeholders to share perspectives and pursue solutions to security challenges. The goal here is to help initiate and deliver innovative projects, building a “culture of anticipation”;
- Places should nurture distinct tech areas by strategically prioritising technologies and their areas of application based on local strengths and sector connections, aligned with government strategy. The goal here is for local security strengths, rooted in local places, to add up to more than the sum of their parts and contribute to UK-wide growth;
- Finally, places should create safe spaces or sandboxes, with on-tap infrastructure and data for various stakeholders to explore, create and conduct exercises such as role-playing cyber wargames. The goal here is not just to help create new initiatives, products and services, but to foster broader capabilities to serve in times of crises, should they arise.
All of these recommendations are underpinned by two principles – that the UK’s security sector should act as one team, and celebrate, build on and capitalise on the social capital in the cyber community, and that the benefits of cyber resilience and growth should always be recognised during discussions of value for money.
“The message from across the sector is clear,” said Simon Shiu, professor of cyber security at the University of Bristol, who led on the report’s creation.
“The UK has the talent, ambition and opportunity to lead in cyber security. We can do this by aligning growth with resilience, and making strategic choices that benefit the whole economy.”
NCC Group CEO Mike Maddison added: “The UK’s cyber growth action plan is a bold step forward, recognising cyber not just as a technology, but as a strategic enabler of national resilience and economic growth. It builds on the Industrial Strategy’s clear message: cyber is a frontier industry.
“This plan sends a powerful signal to our clients and partners. It shows that the UK is serious about scaling innovation, investing in skills and commercialising research. And it confirms what we have always known, that cyber security is essential to the future of every sector.”