Tech
Are AI agents a blessing or a curse for cyber security? | Computer Weekly
Artificial intelligence (AI) and AI agents are seemingly everywhere. Be it with conference show floors or television adverts featuring celebrities, suppliers are keen to showcase the technology, which they tell us will help make our day-to-day lives much easier. But what exactly is an AI agent?
Fundamentally, AI agents – also known as agentic AI models – are generative AI (GenAI) and large language models (LLMs) used to automate tasks and workflows.
For example, need to book a room for a meeting at a particular office at a specific time for a certain number of people? Simply ask the agent to do so and it will act, plan and execute on your behalf, identifying a suitable room and time, then sending the calendar invite out to your colleagues on your behalf.
Or perhaps you’re booking a holiday. You can detail where you want to go, how you want to get there, add in any special requirements and ask the AI agent for suggestions that it will duly examine, parse and detail in seconds – saving you both time and effort.
“We’re going to be very dependent on AI agents in the very near future – everybody’s going to have an agent for different things,” says Etay Maor, chief security strategist at network security company Cato Networks. “It’s super convenient and we’re going to see this all over the place.
“The flip side of that is the attackers are going to be looking heavily into it, too,” he adds.
Unforeseen consequences
When new technology appears, even if it’s developed with the best of intentions, it’s almost inevitable that criminals will seek to exploit it.
We saw it with the rise of the internet and cyber fraud, we saw it with the shift to cloud-based hybrid working, and we’ve seen it with the rise of AI and LLMs, which cyber criminals quickly jumped on to write more convincing phishing emails. Now, cyber criminals are exploring how to weaponise AI agents and autonomous systems, too.
“They want to generate exploits,” says Yuval Zacharia, who until recently was R&D director at cyber security firm Hunters, and is now a co-founder at a startup in stealth mode. “That’s a complex mission involving code analysis and reverse engineering that you need to do to understand the codebase then exploit it. And that’s exactly the task that agentic AI is good at – you can divide a complex problem into different components, each with specific tools to execute it.”
Cyber security consultancy Reversec has published a wide range of research on how GenAI and AI agents can be exploited by malicious hackers, often by taking advantage of how new the technology is, meaning security measures may not fully be in place – especially if those developing AI tools want to ensure their product is released ahead of the competition.
For example, attackers can exploit prompt injection vulnerabilities to hijack browser agents with the aim of stealing data or other unauthorised actions. Or, alternatively, Reversec has demonstrated how an AI agent can be manipulated through prompt injection attacks to encourage outputs to include phishing links, social engineering and other ways of stealing information.
“Attackers can use jailbreaking or prompt injection attacks,” says Donato Capitella, principal security consultant at Reversec. “Now, you give an LLM agency – all of a sudden this is not just generic attacks, but it can act on your behalf: it can read and send emails, it can do video calls.
“An attacker sends you an email, and if an LLM is reading parts of that mailbox, all of a sudden, the email contains instructions that confuse the LLM, and now the LLM will steal information and send information to the attacker.”
Agentic AI is designed to help users, but as AI agents become more common and more sophisticated, that’s also going to open the door to attackers looking to exploit them to aid with their own goals – especially if legitimate tools aren’t secured correctly.
“If I’m a criminal and I know you’re using an AI agent which helps you with managing files on your network, for me, that’s a way into the network to deploy ransomware,” says Maor. “Maybe you’ll have an AI agent which can leave voice messages for you: Your voice? Now it’s identity fraud. Emails are business email compromise (BEC) attacks.
“The fact is a lot of these agents are going to have a lot of capabilities with the things they can do, and not too many guardrails, so criminals will be focusing on it,” he warns, adding that “there’s a continuous lowering of the bar of what it takes to do bad things”.
Fighting agentic AI with agentic AI
Ultimately, this means agentic AI-based attacks is something else chief information security officers (CISOs) and cyber security teams need to consider on top of every other challenge they currently face. Perhaps one answer to this is for defenders to take advantage of the automation provided by AI agents, too.
Zacharia believes so – she even built an agentic AI-powered threat-hunting tool in her spare time.
“It was about a side-project I did in my spare time at the weekends – I’m really geeky,” she says. “It was about exploring the world of AI agents because I thought it was cool.”
Cyber attacks are constantly evolving, and rapid response to emerging threats can be incredibly difficult, especially in an area where AI agents could be maliciously deployed to uncover new exploits en masse. That means identifying security threats, let alone assessing the impact and applying the mitigations can take a lot of time – especially if cyber security staff are doing it manually.
“What I was trying to do was automate this with AI agents,” says Zacharia. “The architecture built on top of multiple AI agents aim to identify emerging threats and prioritise according to business context, data enrichment and things that you care about, then they create hunting and viability queries that will help you turn those into actionable insights.”
That data enrichment comes from multiple sources. They include social media trends, CVEs, Patch Tuesday notifications, CISA alerts and other malware advisories.
The AI prioritises this information according to severity, with the AI agents acting upon that information to help perform tasks – for example, by downloading critical security updates – while also helping to relieve some of the burden on overworked cyber security staff.
“Cyber security teams have a lot on their hands, a lot of things to do,” says Zacharia. “They’re overwhelmed by the alerts they keep getting from all the security tools that they have. That means threat hunting in general, specifically for emergent threats, is always second priority.”
She points to incidents like Log4j, a critical zero-day vulnerability in widely used software that was almost immediately exploited by sophisticated threat actors upon disclosure.
“Think how much damage this could cause in your organisation if you’re not finding these on time,” says Zacharia. “And that’s exactly the point,” she adds, referring to how agentic AI can help to swiftly identify and remedy cyber security vulnerabilities and issues.
Streamlining the SOC with agentic AI
Zacharia’s far from alone in believing agentic AI could be of great benefit to cyber security teams.
“Think of a SOC [security operations centre] analyst sitting in front of an incident and he or she needs to start investigating it,” says Maor. “They start with looking at the technical data, to see if they’ve seen something like it in the past.”
What he’s describing is the important – but time-consuming – work SOC analysts do everyday. Maor believes adding agentic AI tools to the process can streamline their work, ultimately making them more effective at detecting cyber threats.
“An AI model can examine the incident and then detail similar incidents, immediately suggesting an investigation is needed,” he says. “There’s also the predictive model that tells the analyst what they don’t need to investigate. This cuts down the grunt work that needs to be done – sometimes hours, sometimes days of work – in order to reach something of value, which is nice.”
But while it can provide support, it’s important to note that agentic AI isn’t a silver bullet that is going to eliminate cyber security threats. Yes, it’s designed to make the task of monitoring threat intelligence or applying security updates easier and more efficient, but people remain key to information security, too. People are needed to work in SOCs, and information security staff are still required to help employees across the rest of the organisation remain alert and secure to cyber threats.
Especially as AI continues to evolve and improve, and attackers will continue to look to exploit it – and it’s up to the defenders to counter them.
“It’s a cat and mouse situation,” says Zacharia. “Both sides are adopting AI. But as an attacker, you only need one way to sneak in. As a defender, you have to protect the entire castle. Attackers will always have the advantage, that’s the game we’re playing. But I do think that both sides are getting better and better.”
Tech
Why fears of a trillion-dollar AI bubble are growing
For almost as long as the artificial intelligence boom has been in full swing, there have been warnings of a speculative bubble that could rival the dot-com craze of the late 1990s that ended in a spectacular crash and a wave of bankruptcies.
Tech firms are spending hundreds of billions of dollars on advanced chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines.
The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised eyebrows on Wall Street.
Even some of AI’s biggest cheerleaders acknowledge the market is frothy, while still professing their belief in the technology’s long-term potential. AI, they say, is poised to reshape multiple industries, cure diseases and generally accelerate human progress.
Yet never before has so much money been spent so rapidly on a technology that remains somewhat unproven as a profit-making business model. Tech industry executives who privately doubt the most effusive assessments of AI’s revolutionary potential—or at least struggle to see how to monetize it—may feel they have little choice but to keep pace with their rivals’ investments or risk being out-scaled and sidelined in the future AI marketplace.
Sharp falls in global technology stocks in early November underscored investors’ growing unease over the sector’s sky-high valuations, with Wall Street chief executives warning of an overdue market correction.
What are the warning signs for AI?
When Sam Altman, the chief executive of ChatGPT maker OpenAI, announced a $500 billion AI infrastructure plan known as Stargate alongside other executives at the White House in January, the price tag triggered some disbelief. Since then, other tech rivals have ramped up spending, including Meta’s Mark Zuckerberg, who has pledged to invest hundreds of billions in data centers. Not to be outdone, Altman has since said he expects OpenAI to spend “trillions” on AI infrastructure.
To finance those projects, OpenAI is entering into new territory. In September, chipmaker Nvidia Corp. announced an agreement to invest up to $100 billion in OpenAI’s data center buildout, a deal that some analysts say raises questions about whether the chipmaker is trying to prop up its customers so that they keep spending on its own products.
The concerns have followed Nvidia, to varying degrees, for much of the boom. The dominant maker of AI accelerator chips has backed dozens of companies in recent years, including AI model makers and cloud computing providers. Some of them then use that capital to buy Nvidia’s expensive semiconductors. The OpenAI deal was far larger in scale.
OpenAI has also indicated it could pursue debt financing, rather than leaning on partners such as Microsoft Corp. and Oracle Corp. The difference is that those companies have rock-solid, established businesses that have been profitable for many years. OpenAI expects to burn through $115 billion of cash through 2029, The Information has reported.
Other large tech companies are also relying increasingly on debt to support their unprecedented spending. Meta, for example, turned to lenders to secure $26 billion in financing for a planned data center complex in Louisiana that it says will eventually approach the size of Manhattan. JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are also leading a loan of more than $22 billion to support Vantage Data Centers’ plan to build a massive data-center campus, Bloomberg News has reported.
So how about the payback?
By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand, Bain & Co. said in a report released in September. Yet their revenue is likely to fall $800 billion short of that mark, Bain predicted.
“The numbers that are being thrown around are so extreme that it’s really, really hard to understand them,” said David Einhorn, a prominent hedge fund manager and founder of Greenlight Capital. “I’m sure it’s not zero, but there’s a reasonable chance that a tremendous amount of capital destruction is going to come through this cycle.”
In a sign of the times, there’s also a growing number of less proven firms trying to capitalize on the data center goldrush. Nebius, an Amsterdam-based cloud provider that split off from Russian internet giant Yandex in 2024, recently inked an infrastructure deal with Microsoft worth up to $19.4 billion. And Nscale, a little-known British data center company, is working with Nvidia, OpenAI and Microsoft on build-outs in Europe. Like some other AI infrastructure providers, Nscale previously focused on another frothy sector: cryptocurrency mining.
Are there concerns about the technology itself?
The data center spending spree is overshadowed by persistent skepticism about the payoff from AI technology. In August, investors were rattled after researchers at the Massachusetts Institute of Technology found that 95% of organizations saw zero return on their investment in AI initiatives.
More recently, researchers at Harvard and Stanford offered a possible explanation for why. Employees are using AI to create “workslop,” which the researchers define as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
The promise of AI has long been that it would help streamline tasks and boost productivity, making it an invaluable asset for workers and one that corporations would pay top dollar for. Instead, the Harvard and Stanford researchers found the prevalence of workslop could cost larger organizations millions of dollars a year in lost productivity.
AI developers have also been confronting a different challenge. OpenAI, Claude chatbot developer Anthropic and others have for years bet on the so-called scaling laws—the idea that more computing power, data and larger models will inevitably pave the way for greater leaps in the power of AI.
Eventually, they say, these advances will lead to artificial general intelligence, a hypothetical form of the technology so sophisticated that it matches or exceeds humans in most tasks.
Over the past year, however, these developers have experienced diminishing returns from their costly efforts to build more advanced AI. Some have also struggled to match their own hype.
After months of touting GPT-5 as a significant leap, OpenAI’s release of its latest AI model in August was met with mixed reviews. In remarks around the launch, Altman conceded that “we’re still missing something quite important” to reach AGI.
Those concerns are compounded by growing competition from China, where companies are flooding the market with competitive, low-cost AI models. While U.S. firms are generally still viewed as ahead in the race, the Chinese alternatives risk undercutting Silicon Valley on price in certain markets, making it harder to recoup the significant investment in AI infrastructure.
There’s also the risk that the AI industry’s vast data center buildout, entailing a huge increase in electricity consumption, will be held back by the limitations of national power networks.
What does the AI industry say in response?
Sam Altman, the face of the current AI boom, has repeatedly acknowledged the risk of a bubble in recent months while maintaining his optimism for the technology. “Are we in a phase where investors as a whole are overexcited about AI? In my opinion, yes,” he said in August. “Is AI the most important thing to happen in a very long time? My opinion is also yes.”
Altman and other tech leaders continue to express confidence in the roadmap toward AGI, with some suggesting it could be closer than skeptics think.
“Developing superintelligence is now in sight,” Zuckerberg wrote in July, referencing an even more powerful form of AI that his company is aiming for. In the near term, some AI developers also say they need to drastically ramp up computing capacity to support the rapid adoption of their services.
Altman, in particular, has stressed repeatedly that OpenAI remains constrained in computing resources as hundreds of millions of people around the world use its services to converse with ChatGPT, write code and generate images and videos.
OpenAI and Anthropic have also released their own research and evaluations that indicate AI systems are having a meaningful impact on work tasks, in contrast to the more damning reports from outside academic institutions. An Anthropic report released in September found that roughly three quarters of companies are using Claude to automate work.
The same month, OpenAI released a new evaluation system called GDPval that measures the performance of AI models across dozens of occupations.
“We found that today’s best frontier models are already approaching the quality of work produced by industry experts,” OpenAI said in a blog post. “Especially on the subset of tasks where models are particularly strong, we expect that giving a task to a model before trying it with a human would save time and money.”
So how much will customers eventually be willing to pay for these services? The hope among developers is that, as AI models improve and field more complex tasks on users’ behalf, they will be able to convince businesses and individuals to spend far more to access the technology.
“I want the door open to everything,” OpenAI Chief Financial Officer Sarah Friar said in late 2024, when asked about a report that the company has discussed a $2,000 monthly subscription for its AI products. “If it’s helping me move about the world with literally a Ph.D.-level assistant for anything that I’m doing, there are certainly cases where that would make all the sense in the world.”
In September, Zuckerberg said an AI bubble is “quite possible,” but stressed that his bigger concern is not spending enough to meet the opportunity. “If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” he said in a podcast interview. “But what I’d say is I actually think the risk is higher on the other side.”
What makes a market bubble?
Bubbles are economic cycles defined by a swift increase in market values to levels that aren’t supported by the underlying fundamentals. They’re usually followed by a sharp selloff—the so-called pop.
A bubble often begins when investors get swept up in a speculative frenzy—over a new technology or other market opportunity—and pile in for fear of missing out on further gains. American economist Hyman Minsky identified five stages of a market bubble: displacement, boom, euphoria, profit-taking and panic.
Bubbles are sometimes difficult to spot because market prices can become dislocated from real-world values for many reasons, and a sharp price drop isn’t always inevitable. And, because a crash is part of a bubble cycle, they can be hard to pinpoint until after the fact.
Generally, bubbles pop when investors realize that the lofty expectations they had were too high. This usually follows a period of over-exuberance that tips into mania, when everyone is buying into the trend at the very top.
What comes next is usually a slow, prolonged selloff where company earnings start to suffer, or a singular event that changes the long-term view, sending investors dashing for the exits.
There was some fear that an AI bubble had already popped in late January, when China’s DeepSeek upended the market with the release of a competitive AI model purportedly built at a fraction of the amount that top U.S. developers spend. DeepSeek’s viral success triggered a trillion-dollar selloff of technology shares. Nvidia, a bellwether AI stock, slumped 17% in one day.
The DeepSeek episode underscored the risks of investing heavily in AI. But Silicon Valley remained largely undeterred. In the months that followed, tech companies redoubled their costly AI spending plans, and investors resumed cheering on these bets. Nvidia shares charged back from an April low to fresh records. It was worth more than $4 trillion by the end of September, making it the most valuable company in the world.
So is this 1999 all over again?
As with today’s AI boom, the companies at the center of the dot-com frenzy drew in vast amounts of investor capital, often using questionable metrics such as website traffic rather than their actual ability to turn a profit. There were many flawed business models and exaggerated revenue projections.
Telecommunication companies raced to build fiber-optic networks only to find the demand wasn’t there to pay for them. When it all crashed in 2001, many companies were liquidated, others absorbed by healthier rivals at knocked-down prices.
Echoes of the dot-com era can be found in AI’s massive infrastructure build-out, sky-high valuations and showy displays of wealth. Venture capital investors have been courting AI startups with private jets, box seats and big checks.
Many AI startups tout their recurring revenue as a key metric for growth, but there are doubts as to how sustainable or predictable those projections are, particularly for younger businesses. Some AI firms are completing multiple mammoth fundraisings in a single year. Not all will necessarily flourish.
“I think there’s a lot of parallels to the internet bubble,” said Bret Taylor, OpenAI’s chairman and the CEO of Sierra, an AI startup valued at $10 billion. Like the dot-com era, a number of high-flying companies will almost certainly go bust. But in Taylor’s telling, there will also be large businesses that emerge and thrive over the long term, just as happened with Amazon.com Inc. and Alphabet Inc.’s Google in the late 90s.
“It is both true that AI will transform the economy, and I think it will, like the internet, create huge amounts of economic value in the future,” Taylor said. “I think we’re also in a bubble, and a lot of people will lose a lot of money.”
Amazon Chairman Jeff Bezos said the spending on AI resembles an “industrial bubble” akin to the biotech bubble of the 1990s, but he still expects it to improve the productivity of “every company in the world.”
There are also some key differences to the dot-com boom that market watchers point out, the first being the broad health and stability of the biggest businesses that are at the forefront of the trend. Most of the “Magnificent Seven” group of U.S. tech companies are long-established giants that make up much of the earnings growth in the S&P 500 Index. These firms have huge revenue streams and are sitting on large stockpiles of cash.
Despite the skepticism, AI adoption has also proceeded at a rapid clip. OpenAI’s ChatGPT has about 700 million weekly users, making it one of the fastest growing consumer products in history. Top AI developers, including OpenAI and Anthropic, have also seen remarkably strong sales growth. OpenAI previously forecast revenue would more than triple in 2025 to $12.7 billion.
While the company does not expect to be cash-flow positive until near the end of this decade, a recent deal to help employees sell shares gave it an implied valuation of $500 billion—making it the world’s most valuable company never to have turned a profit.
2025 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
Citation:
Why fears of a trillion-dollar AI bubble are growing (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-trillion-dollar-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Tech
Nature’s resilience inspires an improved power grid
Natural ecosystems made up of plants, animals and microorganisms face constant challenges from natural hazards, like extreme weather or invasive species.
Despite these challenges, ecosystems have thrived for millions of years, showcasing high levels of resilience against hazards and disturbances. What if the mechanisms and patterns responsible for this prosperous resilience could be applied to the power grid?
Texas A&M University researchers have tested bio-inspired cyber-physical systems to strengthen the power grid to mitigate different types of cyber-attacks and understand their impacts.
Possible cyber threats to resource networks like the power grid include presentations of false information to data systems and information theft attempts, which can affect a network’s performance abilities.
“Ecosystems experience many of the same unexpected disturbances as human-made systems, like droughts and floods,” said Dr. Astrid Layton, an associate professor in the J. Mike Walker ’66 Department of Mechanical Engineering and head of the Bio-inspired SystemS Lab (BiSSL).
“While ecosystems may be damaged by these hazards, they have the unique ability to survive these disturbances without wasteful levels of redundancies, not only at the ecosystem level, but on a species level as well—which is why we’re interested in cyber-physical power systems from this ecological perspective.”
As their name suggests, cyber-physical power systems are made up of both cyber and physical elements, referred to as components. Cyber components—like firewalls and routers—deal with digital information flows, while physical components—like buses and generators—process tangible energy flows. Despite their prevalence, the system’s complexity causes incomplete knowledge of how disturbances move through and impact a cyber-physical power system.
“It’s crucial for a system to not only survive the hard times, but to thrive during good times,” said Layton. “Using ecological models and the insight they give allows us to assess the cyber-physical interface, clarifying how the system can run more efficiently when there are no immediate threats while still understanding and minimizing damages when they do happen.”
The main goal of this project was to better understand the relationship between the cyber components and physical components that make up cyber-physical power systems. A stronger understanding of the system’s interface allows researchers to predict potential impacts of cyber-attacks on the physical components and physical attacks on the cyber components, informing policymakers and grid operators on how best to prepare for and operate during these threats.
Layton, an expert in bio-inspired systems design and analysis techniques, collaborated with Dr. Katherine Davis, an associate professor of electrical and computer engineering, who brings extensive power system knowledge. Layton and Davis have worked as collaborators since a 2018 Texas A&M Energy Institute seed grant.
Their combined knowledge of mechanical and electrical engineering makes them a great team for understanding and designing cyber-physical power systems for resilience.
Layton and Davis were also joined by their senior Ph.D. students Emily Payne and Shining Sun for the Sandia study. Payne, a mechanical engineering student, started working with Layton in the Bio-inspired SystemS Lab as an undergraduate architectural engineering student in 2022.
Sun, an electrical and computer engineering student, has worked with Davis since 2023. Both Payne and Sun have published several papers relating to this work and have presented their findings at conferences, each winning awards for their research.
“Part of the success of this project has been these engineering graduate students, Emily and Shining, who have excelled at the interdisciplinary aspects of the work in addition to the highly technical focus of the problem,” Layton said.
“My research in particular asks engineering students to read ecology papers, which are essentially a different language from engineering papers, and apply this to their research.”
The approach enables Layton to view engineering problems from an innovative perspective.
The Sandia National Laboratories project ended in September 2025, but the researchers are continuing to collaborate on their bio-inspired power systems.
Layton and Davis are set to participate in a collaborative study focusing on modeling the impacts of weather disturbances on the power grid.
Citation:
Nature’s resilience inspires an improved power grid (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-nature-resilience-power-grid.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Tech
We Tested Travel Pillows on Planes, Trains, and Automobiles. These Are Our Favorites
Cabeau’s Evolution Earth neck pillow is covered in RPET, a super-soft, washable fabric made with recycled plastic bottles. The pillow offers excellent, high sides and a comfortable, firm fit. Like some other pillows in this guide, it can be tricky to use this pillow with over-ear headphones. The back is flat, so in theory it could sit flush against a headrest, though I found it a bit awkward based on my height when I tried it on a bus and plane. (Seat backs rarely work as intended for me.) There’s a chin strap that prevents your head from falling forward.
It took me many attempts to get this “HeadCatch” technology to work, and I didn’t find it super comfortable once it was strapped under my chin, but if you’re a forward-leaner, it’s a nice touch. (I, thankfully, am not.) I wish it came with a travel case, though you can pay an extra $5 to get one included. These caveats aside, once I got the pillow adjusted to where I wanted it, I was able to fall asleep. It was nice and firm without being too stiff, and I woke up feeling refreshed despite having spent the past hour on a bus.
| Materials | Memory foam, RPET cover |
| Washable? | Yes (cover) |
-
Tech1 week agoUS Ralph Lauren partners with Microsoft for AI shopping experience
-
Tech1 week agoOpenAI says a million ChatGPT users talk about suicide
-
Tech1 week agoHow digital technologies can support a circular economy
-
Sports1 week agoBilly Bob Thornton dishes on Cowboys owner Jerry Jones’ acting prowess after ‘Landman’ cameo
-
Tech1 week agoAI chatbots are becoming everyday tools for mundane tasks, use data shows
-
Fashion1 week agoITMF elects new board at 2025 Yogyakarta conference
-
Fashion1 week agoCalvin Klein launches Re-Calvin take-back programme across the US
-
Business1 week agoTransfer test: Children from Belfast low income families to be given free tuition
