Google spins up agentic SOC to speed up incident management | Computer Weekly


At Google Cloud’s virtual Security Summit this week, the organisation shared more details of its expanding vision for safeguarding artificial intelligence (AI): deploying AI’s capabilities to improve resilience through new agentic security operations centre (SOC) capabilities and features, and securing its customers’ future AI development projects.

Google leadership spoke of an “unprecedented” opportunity for end-user organisations to redefine their security postures and reduce risk around their AI investments.

The firm’s vision of the agentic SOC is an “integrated experience” in which AI agents streamline detection engineering workflows by optimising data pipelines and automating alert triage, investigation and response, coordinating their actions in support of a shared goal.

Its new alert investigation agent, first announced at Google Cloud Next in April and entering preview today for a number of users, will supposedly enrich events, analyse command line interfaces (CLIs), and build process trees, modelled on the work of the human analysts at Google Cloud’s Mandiant unit.
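The article names the agent’s tasks but not its internals. As an illustration only, here is a minimal sketch of one of those tasks, building a process tree from endpoint telemetry, assuming a simplified, hypothetical event shape of pid/ppid/command records (real SOC pipelines use far richer schemas):

```python
from collections import defaultdict

def build_process_tree(events):
    """Index process events into parent -> children lists.

    Each event is a dict with 'pid', 'ppid' and 'cmd' keys
    (a simplified, hypothetical event shape).
    """
    children = defaultdict(list)
    cmds = {}
    for e in events:
        cmds[e["pid"]] = e["cmd"]
        children[e["ppid"]].append(e["pid"])
    return children, cmds

def render(children, cmds, pid, depth=0):
    """Return an indented text rendering of the subtree rooted at pid."""
    lines = [("  " * depth) + f'{pid} {cmds.get(pid, "?")}']
    for child in children.get(pid, []):
        lines.extend(render(children, cmds, child, depth + 1))
    return lines

# A classic suspicious chain: Office spawning an encoded PowerShell.
events = [
    {"pid": 100, "ppid": 1, "cmd": "explorer.exe"},
    {"pid": 200, "ppid": 100, "cmd": "winword.exe"},
    {"pid": 300, "ppid": 200, "cmd": "powershell.exe -enc ..."},
]
children, cmds = build_process_tree(events)
print("\n".join(render(children, cmds, 100)))
```

Rendering the parent-child chain is what lets an analyst (or an agent) spot anomalies such as a word processor launching a shell.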

The resulting alert summaries will be accompanied by recommendations for human defenders, which Google believes could drastically cut both manual effort and response times.

“We’re excited about the new capabilities that we’re bringing to market across our security portfolio to help organisations not only continue to innovate with AI, but also leverage AI to keep their organisation secure,” Google Cloud’s Naveed Makhani, product lead for security AI, told Computer Weekly.

“One of the biggest security improvements that we’re announcing is within our AI Protection solution. As organisations rapidly adopt AI, we’re developing new capabilities to help them keep their initiatives secure,” added Makhani.

In this space, Google today announced three new capabilities within its Agentspace and Agent Builder tools that it hopes will protect customer-developed AI agents.

These include new agent inventory and risk identification capabilities to help security teams better spot potential vulnerabilities, misconfigurations, or dodgy interactions among their agents; better safeguards against prompt injection and jailbreaking attacks; and enhanced threat detection within Security Command Centre.

Elsewhere, Google added enhancements to its Unified Security (GUS) offering, also unveiled earlier this year. These include a security operations laboratory feature offering early access to experimental AI tools for threat parsing, detection and response; dashboards to better visualise, analyse and act on security data; and the porting of security features from the Android version of its Chrome browser to Apple’s iOS. Trusted Cloud, meanwhile, gains several updates covering compliance, posture management, risk reporting, agentic identity and access management (IAM), data protection, and network security.

AI consulting

Based on Mandiant data suggesting that its human analysts are seeing growing customer demand for guidance on cyber security for AI applications, Google will also introduce more AI-specific offerings within the overall solution set offered by Mandiant’s consultants.

“Mandiant Consulting now provides risk-based AI governance, pre-deployment guidance for AI environment hardening, and AI threat modelling. Partnering with Mandiant can empower organisations to embrace AI technologies while mitigating security risks,” said Google.




Morocco tests floating solar panels to save water, generate power


Credit: Pixabay/CC0 Public Domain

Sun-baked Morocco, grappling with its worst drought in decades, has launched a pilot project aimed at slowing water evaporation while simultaneously generating green energy using floating solar panels.

At a major reservoir near the northern city of Tangier, thousands of so-called “floatovoltaic” panels protect the water’s surface from the blazing sun and absorb its light to generate electricity.

Authorities plan to power the neighboring Tanger Med port complex with the resulting energy, and if it proves a success, the technology could have far wider implications for the North African kingdom.

According to official figures, Morocco’s water reserves lost the equivalent of more than 600 Olympic-sized swimming pools every day to evaporation between October 2022 and September 2023.

Over that same period, temperatures averaged 1.8°C above normal, accelerating evaporation.

Alongside other factors like declining rainfall, this has reduced reservoirs nationwide to about one-third of their capacity.

Water ministry official Yassine Wahbi said the Tangier reservoir loses around 3,000 cubic meters a day to evaporation, but that figure more than doubles in the hot summer months.

The floating photovoltaic panels can help cut evaporation by about 30%, he said.

The water ministry has said the floating panels represent “an important gain in a context of increasingly scarce water resources”, even if the evaporation they stop is, for now, relatively marginal.

Assessment studies are underway for another two similar projects in Oued El Makhazine, at one of Morocco’s largest dams in the north, and in Lalla Takerkoust near Marrakesh.

Similar technology is being tested in France, Indonesia and Thailand, while China already operates some of the world’s largest floating solar farms.

‘Pioneering’

Since the Moroccan pilot program began late last year, more than 400 floating platforms supporting several thousand panels have been installed.

The government wants more, planning to reach 22,000 panels that would cover about 10 hectares at the 123-hectare Tangier reservoir.

Once completed, the system would generate roughly 13 megawatts of electricity—enough to power the Tanger Med complex.
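Taken together, the quoted figures (22,000 panels, roughly 13 MW, 10 of the reservoir’s 123 hectares) imply a per-panel rating and a coverage fraction; a quick sanity check:

```python
PANELS = 22_000     # planned panel count
TARGET_MW = 13      # planned output
COVERED_HA = 10     # planned covered area
RESERVOIR_HA = 123  # total reservoir area

watts_per_panel = TARGET_MW * 1_000_000 / PANELS
coverage = COVERED_HA / RESERVOIR_HA

print(f"implied rating: ~{watts_per_panel:.0f} W per panel")
print(f"reservoir coverage: ~{coverage:.1%}")
```

An implied ~590 W per panel is in line with current commercial photovoltaic modules, and the plan covers only about 8% of the reservoir’s surface, which is consistent with the expert caveat below about full coverage being impractical.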

Authorities also have plans to plant trees along the banks of the reservoir to reduce winds, believed to exacerbate evaporation.

Climate science professor Mohammed-Said Karrouk called it a “pioneering” project.

He noted, however, that the reservoir is too large and its surface too irregular to cover completely with floating panels, which could be damaged by fluctuating water levels.

Official data shows water reserves fed by rainfall have fallen by nearly 75% in the past decade compared with the 1980s, dropping from an annual average of 18 billion cubic meters to only five billion.

Morocco has so far mainly relied on desalination to combat shortages, producing about 320 million cubic meters of potable water a year.

Authorities aim to expand production to 1.7 billion cubic meters yearly by 2030.

Karrouk said an urgent priority should be transferring surplus water from northern dams to regions in central and southern Morocco that are more impacted by the years-long drought.

The kingdom already has a system dubbed the “water highway”—a 67-kilometer canal linking the Sebou basin to the capital Rabat—with plans to expand the network to other dams.

© 2025 AFP

Citation:
Morocco tests floating solar panels to save water, generate power (2025, August 30)
retrieved 30 August 2025
from https://techxplore.com/news/2025-08-morocco-solar-panels-generate-power.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






A firewall for science: AI tool identifies 1,000 ‘questionable’ journals



Predicted characteristics of journals flagged as questionable at the 50% threshold (n = 1437). Credit: Science Advances (2025). DOI: 10.1126/sciadv.adt2792

A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: spam messages from people purporting to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens journals, evaluating their websites and other online data against certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality—or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran—countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is a group called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
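The rounded numbers reported here give a rough sense of the classifier’s precision on the flagged set (using the n = 1,437 figure from the figure caption above and the estimated 350 false positives):

```python
flagged = 1_437        # journals flagged at the 50% threshold (figure caption)
false_positives = 350  # "an estimated 350 ... likely legitimate"

likely_questionable = flagged - false_positives
precision = likely_questionable / flagged

print(f"likely questionable: {likely_questionable}")
print(f"rough precision of the flagged set: {precision:.0%}")
```

Roughly three in four flagged journals survived human review, which is why Acuña frames the tool as a prescreening helper rather than a final arbiter.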

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data—what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

More information:
Han Zhuang et al, Estimating the predictability of questionable open-access journals, Science Advances (2025). DOI: 10.1126/sciadv.adt2792

Citation:
A firewall for science: AI tool identifies 1,000 ‘questionable’ journals (2025, August 30)
retrieved 30 August 2025
from https://techxplore.com/news/2025-08-firewall-science-ai-tool-journals.html







Scammers Will Try to Trick You Into Filling Out Google Forms. Don’t Fall for It



One of the lesser-known apps in the Google Drive online suite is Google Forms. It’s an easy, intuitive way to create a web form for other people to enter information into. You can use it for employee surveys, for organizing social gatherings, for giving people a way to contact you, and much more. But Google Forms can also be used for malicious purposes.

These forms can be created in minutes, with clean and clear formatting, official-looking images and video, and—most importantly of all—a genuine Google Docs URL that your web browser will see no problem with. Scammers can then use these authentic-looking forms to ask for payment details or login information.
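This is why the scam slips past simple defences: the form genuinely lives on Google’s domain. A naive host-based allowlist, sketched below with hypothetical trusted hosts, accepts the scam URL just as readily as a legitimate one:

```python
from urllib.parse import urlparse

# Hosts a naive filter might trust (illustrative list).
TRUSTED_HOSTS = {"docs.google.com", "forms.gle"}

def naive_domain_check(url):
    """Return True if the URL's host is on the trusted list.

    This is exactly the check a Google Forms scam passes: the page
    really is served from Google's domain, so host-based filtering
    cannot distinguish it from a legitimate form.
    """
    return urlparse(url).hostname in TRUSTED_HOSTS

# A phishing form hosted on Google Forms passes the check...
print(naive_domain_check("https://docs.google.com/forms/d/e/EXAMPLE/viewform"))
# ...while an obvious lookalike domain fails it.
print(naive_domain_check("https://docs.google.example-login.com/forms"))
```

The takeaway is that the URL alone proves nothing here; the content of the form, and who is asking you to fill it in, is what matters.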

It’s a type of scam that continues to spread, with Google itself issuing a warning about the issue in February. Students and staff at Stanford University were among those targeted with a Google Forms link that asked for login details for the academic portal there, and the attack beat standard email malware protection.

How the Scam Works

Google Forms are quick and easy to put together. Credit: David Nield

These scams can take a variety of guises, but they’ll typically start with a phishing email that will try to trick you into believing it’s an official and genuine communication. It might be designed to look like it’s from a colleague, an administrator, or someone from a reputable organization.

The apparent quality and trustworthiness of this original phishing email is part of the con. Our inboxes are regularly filled with requests to reset passwords, verify details, or otherwise take action. Like many scams, the email might convey a sense of urgency, or indicate that your security has been compromised in some way.

Even worse, the instigating email might actually come from a legitimate email address, if someone in your social circle, family, or office has had their account hijacked. In this case you wouldn’t be able to run the usual checks on the sender identity and email address, because everything would look genuine—though the wording and style would be off.

This email (or perhaps a direct message on social media) will be used to deliver a Google Forms link, which is the second half of the scam. This form will most often be set up to look genuine, and may be trying to spoof a recognized site like your place of work or your bank. The form might prompt you for sensitive information, offer up a link to malware, or feature a phone number or email address to lead you into further trouble.


