A firewall for science: AI tool identifies 1,000 ‘questionable’ journals


Predicted characteristics of journals flagged as questionable at the 50% threshold (n = 1437). Credit: Science Advances (2025). DOI: 10.1126/sciadv.adt2792

A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers—for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
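The study’s actual model isn’t published in the article, but the kind of website checks it describes can be sketched as simple heuristics. Everything below is illustrative: the function name, the doubled-word grammar proxy, and the researcher lists are assumptions for the example, not the team’s real features.

```python
import re

def screen_journal(site_text: str, editorial_board: list[str],
                   known_researchers: set[str]) -> list[str]:
    """Return a list of red flags found for one journal (illustrative only)."""
    red_flags = []
    # Check 1: does the editorial board include any established researchers?
    if not any(name in known_researchers for name in editorial_board):
        red_flags.append("no recognised editors")
    # Check 2: a crude proxy for sloppy grammar -- doubled words such as
    # "the the" often indicate carelessly produced website text.
    if re.search(r"\b(\w+)\s+\1\b", site_text, flags=re.IGNORECASE):
        red_flags.append("repeated-word typos on site")
    return red_flags

print(screen_journal(
    "We we publish all papers within 48 hours.",
    ["A. Nobody"],
    {"D. Acuna", "J. Beall"},
))  # -> ['no recognised editors', 'repeated-word typos on site']
```

A real system would combine many such signals in a trained classifier rather than hard rules, but per-signal checks like these are what keep the output interpretable.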

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality—or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran—countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is an organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of roughly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
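The arithmetic behind these figures is worth making explicit. Using the flag count from the figure caption (1,437 journals at the 50% threshold) and the roughly 350 likely false positives the human reviewers identified:

```python
# Back-of-the-envelope check of the numbers reported in the article.
flagged = 1437           # journals the AI flagged at the 50% threshold
false_positives = 350    # flags human experts judged likely legitimate

remaining = flagged - false_positives
precision = remaining / flagged
print(remaining)            # -> 1087, i.e. "more than 1,000" questionable journals
print(round(precision, 2))  # -> 0.76, roughly three in four flags upheld
```

That is, even with the acknowledged mistakes, about three-quarters of the AI’s flags survived expert review.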

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.
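One of those interpretable signals, self-citation, is easy to compute from citation records. The sketch below is a made-up illustration of the idea, not the study’s actual metric; the data and any threshold you'd apply are assumptions.

```python
def self_citation_rate(citations: list[tuple[str, str]]) -> float:
    """Fraction of citations where the citing author cites their own work.

    citations: (citing_author, cited_author) pairs. Illustrative only.
    """
    if not citations:
        return 0.0
    self_cites = sum(1 for citing, cited in citations if citing == cited)
    return self_cites / len(citations)

sample = [("smith", "smith"), ("smith", "jones"),
          ("smith", "smith"), ("lee", "smith")]
print(self_citation_rate(sample))  # -> 0.5, an unusually high rate
```

A journal whose articles consistently show rates like this would stand out against the baseline of legitimate publications.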

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data—what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

More information:
Han Zhuang et al, Estimating the predictability of questionable open-access journals, Science Advances (2025). DOI: 10.1126/sciadv.adt2792

Citation:
A firewall for science: AI tool identifies 1,000 ‘questionable’ journals (2025, August 30)
retrieved 30 August 2025
from https://techxplore.com/news/2025-08-firewall-science-ai-tool-journals.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Carbon opportunities highlighted in Australia’s utilities sector



Credit: Unsplash/CC0 Public Domain

Australia’s utility sector accounts for some 43.1% of the country’s carbon footprint, and some 37.2% of its direct emissions, new research from Edith Cowan University (ECU) has revealed.

Dr. Soheil Kazemian, from the ECU School of Business and Law, said the utilities sector included electricity generation, transmission and distribution, gas supply, water supply, and waste collection and treatment.

Electricity generation and transmission were identified as the most significant contributors within the utilities sector, with commercial services and manufacturing emerging as substantial sources of embodied emissions within the sector.

The research, published in the Management of Environmental Quality: An International Journal, revealed that 71% of embodied emissions were attributed to electricity transmission, distribution, on-selling electricity, and electricity market operation. Electricity generation accounted for a further 15%, while gas supply accounted for 5%, water supply for 4%, and waste services and treatment for the remaining 5% of embodied emissions in the sector.
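The reported shares can be checked for consistency; laid out as data, they account for the full sector, with electricity dominating:

```python
# Embodied-emissions shares for Australia's utilities sector, as reported.
shares = {
    "electricity transmission, distribution & market operation": 71,
    "electricity generation": 15,
    "gas supply": 5,
    "water supply": 4,
    "waste services and treatment": 5,
}

total = sum(shares.values())
print(total)  # -> 100, the shares cover the whole sector

electricity = (shares["electricity transmission, distribution & market operation"]
               + shares["electricity generation"])
print(electricity)  # -> 86, electricity's combined share of embodied emissions
```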

“The study highlights electricity transmission and generation as the subsectors with the highest potential for adopting low-carbon technologies. By pinpointing emission hotspots and offering detailed sectoral disaggregation, the results of the research provide actionable insights for prioritizing investment in emissions reduction strategies, advancing Australia’s sustainability goals and supporting global climate change mitigation,” Dr. Kazemian said.

He said that as with any other business, the pressure to reduce the carbon emissions footprint of the utility sector would need to originate from the consumer sector.

Unlike other sectors, however, increased investment into the utilities sector is likely to result in a smaller carbon footprint.

“This is a major difference between the different sectors in Australia. If you invest more in mining, that means the emissions from that industry would increase, and the same can be said for manufacturing, as the investment would result in expanded business.

“While new infrastructure development can generate temporary increases in emissions for the utility sector during construction, the long-term impact depends on where those dollars are spent. Investment in renewable generation or efficient delivery networks can significantly cut emissions, whereas continuing to fund carbon-intensive energy sources risks locking in higher emissions for decades to come.

“This complexity highlights a critical point that meaningful decarbonization will depend not only on policy or technology, but also on consumer choices. When households and businesses demand cleaner energy, utilities are more likely to channel investment into low-carbon solutions. By consciously choosing renewable energy options and supporting sustainable providers, consumers can send a powerful market signal that accelerates the transition to a cleaner grid,” Dr. Kazemian said.

More information:
Soheil Kazemian et al, Determining the carbon footprint of Australia’s electricity, gas, water and waste services sector, Management of Environmental Quality: An International Journal (2025). DOI: 10.1108/meq-07-2024-0311

Citation:
Carbon opportunities highlighted in Australia’s utilities sector (2025, October 15)
retrieved 15 October 2025
from https://techxplore.com/news/2025-10-carbon-opportunities-highlighted-australia-sector.html







AI-ready companies turning network pilots into profit | Computer Weekly



While the AI genie is out of the bottle for organisations of all sizes, only 13% of businesses are fully prepared for it. Those that are ready are as much as four times more likely to move pilots into production and 50% more likely to see measurable value, according to a study by Cisco.

The data comes from the Cisco AI readiness index 2025, a global study, now in its third year, based on a double-blind survey of 8,000 senior IT and business leaders responsible for AI strategy at organisations with more than 500 employees, spanning 26 industries and 30 markets.

Cisco added that the combination of foresight and foundation is delivering real, tangible results at a time when two major forces are starting to reshape the landscape: AI agents, which raise the bar for scale, security and governance; and AI infrastructure debt, the early warning signs of hidden bottlenecks that threaten to erode long-term value.

Regarding AI agents, the survey found ambition was outpacing readiness. Overall, 83% of organisations planned to deploy AI agents, and nearly 40% expected them to work alongside employees within a year. But the study discovered that, for the majority of these companies, AI agents were exposing weak foundations – that is, systems that can barely handle reactive, task-based AI, let alone AI systems that act autonomously and learn continuously. More than half (54%) of respondents said their networks could not scale for complexity or data volume, and just 15% described their networks as flexible or adaptable.

AI infrastructure debt was called the modern evolution of technical and digital debt that once held back digital transformation. Moreover, the survey regarded it as “the silent accumulation of compromises, deferred upgrades, and underfunded architecture that erodes the value of AI over time”. Some 62% of firms expect workloads to rise by over 30% within three years, 64% struggle to centralise data, only 26% said that they have robust GPU capacity and fewer than one in three could detect or prevent AI-specific threats.

Among the topline results from the report was that a “small but consistent” group of companies surveyed – categorised as pacesetters, and making up about 13% of organisations in each of the past three years – outperformed their peers across every measure of AI value.

Cisco noted that the pacesetters’ sustained advantage indicated a new form of resilience: a disciplined, system-level approach that balances strategic drivers with the data and network infrastructure needed to keep pace with AI’s accelerating evolution. It added that such firms were already architecting for the future, with 98% designing their networks for the growth, scale and complexity of AI, compared with 46% overall.

The research outlined a pattern among companies delivering real returns: they make AI part of the business, not a side project; they build infrastructure that’s ready to grow; they move pilots into production; they measure what matters; and they turn security into strength.

Virtually all pacesetters (99%) were found to have a defined AI roadmap (vs 58% overall), and 91% (vs 35%) had a change-management plan. Budgets match intent, with 79% making AI the top investment priority (vs 24%), and 96% with short- and long-term funding strategies (vs 43%). The study noted that such firms architect for the always-on AI era. Some 71% of pacesetters said that their networks were fully flexible and can scale instantly for any AI project (vs 15% overall), and 77% are investing in new datacentre capacity within the next 12 months (vs 43%).

Just over three-fifths had what was defined as a “mature, repeatable” innovation process for generating and scaling AI use cases (versus 13% overall), and three-quarters (77%) had already finalised those use cases (versus 18%). Some 95% track the impact of their AI investments – three times higher than others – and 71% were confident their use cases will generate new revenue streams, more than double the overall average. Meanwhile, 87% were highly aware of AI-specific threats (versus 42% overall), 62% integrated AI into their security and identity systems (versus 29%), and 75% were fully equipped to control and secure AI agents (versus 31%).

The result, said Cisco, was that pacesetters achieve more widespread results than their peers, with 90% reporting gains in profitability, productivity and innovation, compared with around 60% overall.

Commenting on the results from the survey, Cisco president and chief product officer Jeetu Patel stated that the AI readiness index makes one thing clear: AI doesn’t fail – readiness fails, adding: “The most AI-ready organisations – the pacesetters from our research – prove it. They’re four times more likely to move pilots into production and 50% more likely to realise measurable value. So, with more than 80% of organisations we surveyed about to deploy AI agents, these new findings confirm readiness, discipline and action are key to unlocking value.”




Patch Tuesday: Windows 10 end of life pain for IT departments | Computer Weekly



The day Microsoft officially ended support for Windows 10 has coincided with a Patch Tuesday update addressing several zero-day flaws that attackers could exploit to target the older Windows operating system.

Among these is CVE-2025-24990, which covers a legacy device driver that Microsoft has removed entirely from Windows. “The active exploitation of CVE-2025-24990 in the Agere Modem driver (ltmdm64.sys) shows the security risks of maintaining legacy components within modern operating systems,” warned Ben McCarthy, lead cyber security engineer at Immersive.

“This driver, which supports hardware from the late 1990s and early 2000s, predates current secure development practices and has remained largely unchanged for years,” he said. “Kernel-mode drivers operate with the highest system privileges, making them a primary target for attackers seeking to escalate their access.”

McCarthy said threat actors are using this vulnerability as a second stage for their operations. “The attack chain typically begins with the actor gaining an initial foothold on a target system through common methods like a phishing campaign, credential theft, or by exploiting a different vulnerability in a public-facing application,” he said.

McCarthy added that Microsoft’s decision to remove the driver entirely, rather than issue a patch, is a direct response to the risks associated with modifying unsupported, third-party legacy code. “Attempts to patch such a component can be unreliable, potentially introducing system instability or failing to address the root cause of the vulnerability completely,” he said.

In removing the driver from the Windows operating system, McCarthy said Microsoft has prioritised reducing the attack surface over absolute backward compatibility. “By removing the vulnerable and obsolete component, the potential for this specific exploit is zero,” he said. “The security risk presented by the driver was determined to be greater than the requirement to continue supporting the outdated hardware it serves.”

McCarthy said this approach demonstrates that an effective security strategy must include the lifecycle management of old code, where removal is often more definitive and secure than patching.

Another zero-day flaw that is being patched concerns the Trusted Platform Module from the Trusted Computing Group (TCG). Adam Barnett, lead software engineer at Rapid7, noted that the CVE-2025-2884 flaw concerns the TPM 2.0 reference implementation, which, under normal circumstances, is likely to be replicated in each manufacturer’s downstream implementation.

“Microsoft is treating this as a zero-day despite the curious circumstance that Microsoft is a founder member of TCG, and thus presumably privy to the discovery before its publication,” he said. “Windows 11 and newer versions of Windows Server receive patches. In place of patches, admins for older Windows products such as Windows 10 and Server 2019 receive another implicit reminder that Microsoft would strongly prefer that everyone upgrade.”

One of the patches classified as “critical” has such a profound impact that some security experts advise IT departments to patch immediately. McCarthy warned that the CVE-2025-49708 critical vulnerability in the Microsoft Graphics Component, although classed as an “elevation of privilege” security issue, has a severe real-world impact.

“It is a full virtual machine [VM] escape,” he said. “This flaw, with a CVSS score of 9.9, completely shatters the security boundary between a guest virtual machine and its host operating system.”

McCarthy urged organisations to prioritise patching this vulnerability because it invalidates the core security promise of virtualisation.

“A successful exploit means an attacker who gains even low-privilege access to a single, non-critical guest VM can break out and execute code with system privileges directly on the underlying host server,” he said. “This failure of isolation means the attacker can then access, manipulate or destroy data on every other VM running on that same host, including mission-critical domain controllers, databases or production applications.”


