Getting started with agentic AI | Computer Weekly


A study by Boston Consulting Group (BCG) suggests that organisations that lead in technology development are gaining a first-mover advantage when it comes to artificial intelligence (AI) and using agentic AI to improve business processes.

What is striking about BCG’s findings, according to Jessica Apotheker, managing director and senior partner at the firm, is that the leading companies in AI are mostly the same ones that were leaders eight years ago.

“What this year’s report shows is that the value gap between these companies and others is widening quite a bit,” she says. In other words, BCG’s research shows that organisations that have invested disproportionately in technology achieve a higher return from that investment.

Numerous pieces of research show that a high proportion of AI initiatives are failing to deliver measurable business success. BCG’s Build for the future 2025 report shows that the companies it rates as the best users of AI generate 1.7 times more revenue growth than the 60% of companies in the categories it defines as stagnating or emerging.

For Ilan Twig, co-founder and chief technology officer (CTO) at Navan, AI projects that fail to deliver value are indicative of how businesses use AI technology. Too often, AI is dropped on top of old systems and outdated processes. 

Building on RPA

However, there is certainly a case for building on previous initiatives such as robotic process automation (RPA).

Speaking at the recent Forrester Technology and Innovation Summit in London, Bernhard Schaffrik, principal analyst at Forrester, discussed how agentic AI can be built on top of a deterministic RPA system to provide greater flexibility than existing systems can be programmed to achieve.

The analyst firm uses the term “process orchestration” to describe the next level of automating business processes, using agentic AI in workflows to handle ambiguities far more easily than the programming scripts used in RPA.

“Classic process automation tools require you to know everything at the design stage – you need to anticipate all of the errors and all the exceptions,” says Schaffrik.

He points out that considering these things at design time is unrealistic when trying to orchestrate complex processes. But new tools are being developed for process orchestration that rely on AI agents.
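To make the distinction concrete, here is a minimal Python sketch of that escalation pattern, in which a deterministic RPA-style step handles the known path and anything it was not programmed for is handed to an AI agent at run time rather than anticipated at design time. The function names and the canned agent response are illustrative assumptions, not any vendor’s product.

def rpa_extract_invoice_total(invoice: dict) -> float:
    # Classic RPA: rigid and deterministic, assumes the field exists
    # and is well formed.
    return float(invoice["total"])

def resolve_with_agent(invoice: dict, error: Exception) -> float:
    # Hypothetical stand-in for prompting a model with the raw invoice
    # and the failure; a canned value is returned for illustration.
    print(f"Agent resolving: {error!r}")
    return 149.99

def orchestrate(invoice: dict) -> float:
    try:
        return rpa_extract_invoice_total(invoice)
    except (KeyError, ValueError) as err:
        # The ambiguity is handled at run time, not anticipated at design time.
        return resolve_with_agent(invoice, err)

print(orchestrate({"total": "99.50"}))            # deterministic path
print(orchestrate({"amount_due": "149,99 EUR"}))  # exception path -> agent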

A strong data foundation

BCG says prerequisites for the successful roll-out of AI agents include strong data foundations, scaled AI capabilities and clear governance.

Standardisation of data is a key requirement for success, according to Twig. “A big part of the issue is data,” he says. “AI is only as strong as the information it runs on, and many companies don’t have the standardised, consistent datasets needed to train or deploy it reliably.”

Within the context of agentic AI, this is important to avoid miscommunications both at the technology infrastructure level and in people’s understanding of the information. But the entire data foundation does not have to be built all at once.

BCG’s Apotheker says companies can set an enterprise-wide goal of clean data and build towards it one project at a time, with each project providing a clean data foundation on which subsequent ones can be built. In doing so, organisations gain a better understanding of the enterprise data these projects require while ensuring the datasets are clean and good data management practices are followed.

A working agentic AI strategy relies on AI agents connected by a metadata layer, whereby people understand where and when to delegate certain decisions to the AI or pass work to external contractors. It’s a focus on defining the role of the AI and where people involved in the workflow need to contribute. 

This functionality can be considered a sort of platform. Scott Willson, head of product marketing at xtype, describes AI workflow platforms as orchestration engines, coordinating multiple AI agents, data sources and human touchpoints through sophisticated non-deterministic workflows. At the code level, these platforms may implement event-driven architectures using message queues to handle asynchronous processing and ensure fault tolerance.
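As a rough illustration of that event-driven pattern, the Python sketch below passes work between agents over a message queue, with unknown routes falling through to a human touchpoint and failed events re-queued for retry. The agent names and routing logic are hypothetical, not a description of any particular platform.

import queue
import uuid

events = queue.Queue()

def triage_agent(payload):
    # Hypothetical agent step: decide which downstream agent should act.
    return {"route": "enrich", "data": payload}

def enrich_agent(payload):
    # Hypothetical agent step: add context before a human touchpoint.
    return {"route": "human_review", "data": payload}

HANDLERS = {"triage": triage_agent, "enrich": enrich_agent}

def publish(route, data):
    events.put({"id": str(uuid.uuid4()), "route": route, "data": data})

publish("triage", {"ticket": "Customer refund request"})

while not events.empty():
    event = events.get()
    handler = HANDLERS.get(event["route"])
    if handler is None:
        print("Human touchpoint:", event["data"])
        continue
    try:
        result = handler(event["data"])
        publish(result["route"], result["data"])
    except Exception:
        # Crude fault tolerance: re-queue and try again. Real systems
        # would cap retries and park poison messages.
        events.put(event)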

Data lineage tracking should happen at the code level through metadata propagation systems that tag every data transformation, model inference and decision point with unique identifiers. Willson says this creates an immutable audit trail that regulatory frameworks increasingly demand. According to Willson, advanced implementations may use blockchain-like append-only logs to ensure governance data cannot be retroactively modified.
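One way to picture that mechanism is the sketch below: each ingest, transformation and model inference is tagged with a unique identifier, linked to its upstream step, and appended to a hash-chained log so retroactive edits become detectable. The hash chaining is an assumption standing in for the “blockchain-like” logs Willson mentions.

import hashlib
import json
import uuid
from datetime import datetime, timezone

audit_log = []  # append-only by convention; real systems enforce this in storage

def record(step, detail, parent_id=None):
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {
        "id": str(uuid.uuid4()),
        "parent": parent_id,  # links each step to its upstream data
        "step": step,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,
    }
    # Fingerprint the entry plus its predecessor so tampering is detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry["id"]

src = record("ingest", "customer_orders.csv")
clean = record("transform", "dropped null rows", parent_id=src)
record("model_inference", "churn_model v3 scored batch", parent_id=clean)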

Adapting workflows and change management

Having built AI-native systems from the ground up and transformed the company’s own product development processes using AI, Alan LeFort, CEO and co-founder of StrongestLayer, notes that most organisations are asking completely the wrong questions when evaluating AI workflow platforms.

“The fundamental issue isn’t technological, it’s actually organisational,” he says.

Conway’s Law states that organisations design systems that mirror their communication structures. But, according to LeFort, most AI workflow evaluations assume organisations bolt AI onto existing processes designed around human limitations. This, he says, results in serial decision-making, risk-averse approval chains and domain-specific silos.

“AI doesn’t have those limitations. AI can parallelise activities that humans must do serially, doesn’t suffer from territorial knowledge hoarding and doesn’t need the elaborate safety nets we’ve built around human fallibility,” he adds. “When you try to integrate AI into human-designed processes, you get marginal improvements. When you redesign processes around AI capabilities, you get exponential gains.”
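LeFort’s parallelisation point can be illustrated with a small sketch: three drafting tasks a human team would tackle one after another are fanned out to agents concurrently, so wall-clock time falls to the slowest single step. The agent functions are hypothetical placeholders for model calls.

import asyncio

async def draft_requirements(brief: str) -> str:
    await asyncio.sleep(1)  # stands in for model latency
    return f"requirements for {brief}"

async def draft_design(brief: str) -> str:
    await asyncio.sleep(1)
    return f"design for {brief}"

async def draft_test_plan(brief: str) -> str:
    await asyncio.sleep(1)
    return f"test plan for {brief}"

async def main():
    brief = "new onboarding flow"
    # Done serially, human-style, this takes about three seconds;
    # fanned out to agents, about one, since no step here depends
    # on another step's output.
    results = await asyncio.gather(
        draft_requirements(brief),
        draft_design(brief),
        draft_test_plan(brief),
    )
    print(results)

asyncio.run(main())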

StrongestLayer recently transformed its front-end software development process using this principle. Traditional product development flows serially: a product manager talks to customers and extracts requirements, then hands over to the user experience team for design; the programme management team approves the design; and developers implement the software. Completely rebuilding the application under this process used to take 18 to 24 months, he says.

Instead of bolting AI onto this process, LeFort says StrongestLayer “fundamentally reimagined it”.

“We created a full-stack prototyper role, paired with a front-end engineer focused on architecture. The key was building an AI pipeline that captured the contextual knowledge of each role: design philosophy, tech stack preferences, non-functional requirements, testing standards and documentation needs.”

As a result of making these workflow changes, he says the company was able to achieve the same outcome from a product development perspective in a quarter of the time. This, he says, was not necessarily achieved by working faster, but by redesigning the workflow around AI’s ability to parallelise activities humans must do sequentially.

LeFort expected to face pushback. “My response was to lead from the front. I paired directly with our chief product officer, Joshua Bass, to build the process, proving it worked before asking others to adopt it. We reframed success for our front-end engineer around velocity and pioneering new ways of working,” he says.

For LeFort, true speed to value comes from two fundamental sources: eliminating slack time between value activities and accelerating individual activity completion through AI automation. “This requires upfront investment in process redesign rather than quick technology deployment,” he says.

LeFort urges organisations to evaluate AI workflow platforms based on their ability to enable fundamental process transformation, rather than working to integrate existing inefficiencies.

Getting agentic AI decision-making right 

Research from BCG suggests that the best way to deploy agents is through a few high-value workflows with clear implementation plans and workforce training, rather than in a massive roll-out of agents everywhere at once.

One of the areas IT leaders need to consider is that their organisation will more than likely rely on a number of AI models to support agentic AI workflows. For instance, Ranil Boteju, chief data and analytics officer at Lloyds Banking Group, believes different models can be tasked with tackling each distinct part of a customer query.

“The way we think about this is that there are different models with different strengths, and what we want to do is to use the best model for each task,” says Boteju. This approach is how the bank sees agentic AI being deployed.
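A minimal sketch of that best-model-for-each-task routing might look like the following, where a task label selects a model before dispatch. The model names and the call_model() helper are illustrative assumptions, not Lloyds Banking Group’s implementation.

TASK_MODEL_MAP = {
    "summarise": "small-fast-model",
    "extract_entities": "structured-output-model",
    "reason_about_policy": "large-reasoning-model",
}

def call_model(model_name: str, prompt: str) -> str:
    # Placeholder for a real inference call (API or on-premise endpoint).
    return f"[{model_name}] response to: {prompt}"

def route(task_type: str, prompt: str) -> str:
    # Pick the model suited to this sub-task, with a safe default.
    model = TASK_MODEL_MAP.get(task_type, "general-purpose-model")
    return call_model(model, prompt)

print(route("summarise", "Summarise this customer complaint ..."))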

With agentic AI, problems can be broken down into smaller and smaller parts, with different agents responding to each part. Boteju believes in using AI agents to check the output of other agents, rather like a judge, or a second-line colleague acting as an observer. This can help cut erroneous decision-making arising from AI hallucinations, where a model produces a spurious result.
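The checker pattern Boteju describes can be sketched as a second “judge” agent that verifies an answer against the source material before it is acted on. In this toy version the judge is a naive containment check standing in for a real second model call; nothing here reflects the bank’s actual system.

def answer_agent(question: str, context: str) -> str:
    # Imagine a model call here producing an answer from the context.
    return "The fee is 25 GBP."

def judge_agent(question: str, context: str, answer: str) -> bool:
    # A real judge would prompt a second model to check the answer is
    # grounded in the context; here we naively check that the figure
    # quoted in the answer actually appears in the source text.
    return answer.split()[-2] in context

context = "Our standard account fee is 25 GBP per month."
question = "What is the account fee?"
answer = answer_agent(question, context)

if judge_agent(question, context, answer):
    print("Verified:", answer)
else:
    print("Flagged for human review:", answer)  # hallucination guard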

IT security in agentic AI

People in IT tend to appreciate the importance of adhering to cyber security best practices. But as Fraser Dear, head of AI and innovation at BCN, points out, most users do not think like a software developer who keeps governance in mind when creating their own agents. He urges organisations to impose policies that ensure the key security steps are not skipped in the rush to deploy agentic AI.

“Think about what these AI agents might access across SharePoint: multiple versions of documents, transcripts, HR files, salary data, and lots more. Without guardrails, AI agents can access all this indiscriminately. They won’t necessarily know which versions of these documents are draft and which are approved,” he warns.

The issue escalates when an agent created by one person is made available to a wider group of colleagues. It can inadvertently give them access to data that is beyond their permission level.

Dear believes data governance needs to include configuring data boundaries, restricting who can access what data according to job role and sensitivity level. The governance framework should also specify which data resources the AI agent can pull from.

In addition, he says AI agents should be built for a purpose, using principles of least privilege: “Just like any other business-critical application, it needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to whom, and how accurate the data is. Track and audit which agents are accessing which data and for what purpose, and implement real-time alerts to flag unusual access patterns.”
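Dear’s least-privilege principle can be sketched as a data boundary the agent cannot reach past: documents carry sensitivity labels, roles carry clearance levels, and every retrieval is logged for audit. The roles, labels and log structure below are illustrative assumptions.

ROLE_CLEARANCE = {"staff": 1, "manager": 2, "hr": 3}

DOCUMENTS = [
    {"name": "benefits_faq.docx", "sensitivity": 1},
    {"name": "org_restructure_draft.docx", "sensitivity": 2},
    {"name": "salary_bands.xlsx", "sensitivity": 3},
]

access_log = []

def agent_fetch(user: str, role: str):
    # The agent can only retrieve documents at or below the user's clearance.
    clearance = ROLE_CLEARANCE.get(role, 0)
    allowed = [d for d in DOCUMENTS if d["sensitivity"] <= clearance]
    # Track and audit which data was surfaced, and to whom.
    access_log.append({"user": user, "role": role,
                       "returned": [d["name"] for d in allowed]})
    return allowed

# A staff member's agent never even sees the HR-only files.
print([d["name"] for d in agent_fetch("alice", "staff")])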

A bumpy ride ahead

What these conversations with technology experts illustrate is that there is no straightforward path to achieving a measurable business benefit from agentic AI workflows – and what’s more, these systems need to be secure by design.

Organisations need to have the right data strategy in place, and they should already be well ahead on their path to full digitisation, where automation through RPA is being used to connect many disparate workflows. Agentic AI is the next stage of this automation, where an AI is tasked with making decisions in a way that would have previously been too clunky using RPA.

However, automation of workflows and business processes is just one piece of an overall jigsaw. There is a growing realisation that the conversation in the boardroom needs to move beyond people and processes.

BCG’s Apotheker believes business leaders should reassess what is important to their organisation and what they want to focus on going forward. This goes beyond the build versus buy debate: some processes and tasks should be owned by the business; some may be outsourced to a provider that may well use AI; and some will be automated through agentic AI workflows internally.

It is rather like business process engineering, where elements powered by AI sit alongside tasks outsourced to an external service provider. For Apotheker, this means businesses need to have a firm grasp of what part of the business process is strategically important and can be transformed internally.

Business leaders then need to figure out how to connect the strategically important part of the workflow to what the business actually outsources or potentially automates in-house.



Mark Zuckerberg Tries to Play It Safe in Social Media Addiction Trial Testimony


Zuckerberg repeatedly fell back on accusing Lanier of “mischaracterizing” his previous statements. When it came to emails, Zuckerberg typically objected based on how old the message was, or his lack of familiarity with the Meta employees involved. “I don’t think so, no,” he replied when directed to clarify if he knew Karina Newton, Instagram’s head of public policy in 2021. And Zuckerberg never failed to point out when he wasn’t actually on an email thread entered as evidence.

Perhaps anticipating these detached and repetitive talking points from Zuckerberg—who claimed over and over that any increased engagement from a user on Facebook or Instagram merely reflected the “value” of those apps—Lanier suggested early on that the CEO had been coached to address these issues. “You have extensive media training,” he said. “I think I’m sort of well-known to be pretty bad at this,” Zuckerberg protested, getting a rare laugh from the courtroom. Lanier went on to present Meta documents outlining communication strategies for Zuckerberg, describing his team as “telling you what kind of answers to give,” including in a context such as testifying under oath. “I’m not sure what you’re trying to imply,” Zuckerberg said. In the afternoon, Meta counsel Paul Schmidt returned to that line of questioning, asking if Zuckerberg had to speak to the media because of his role as head of a major business. “More than I would like,” Zuckerberg said, to more laughter.

In an even more, well, “meta” moment after the court had returned from lunch, Kuhl struck a stern tone by warning all in the room that anyone wearing “glasses that record”—such as the AI-equipped Oakley and Ray-Ban glasses sold by Meta for up to $499—had to remove them while attending the proceedings, where both video and audio recordings are prohibited.

K.G.M.’s suit and the others to follow are novel in their sidestepping of Section 230, a law that has protected tech companies from liability for content created by users on their platforms. As such, Zuckerberg stuck to a playbook that framed the lawsuit as a fundamental misunderstanding of how Meta works. When Lanier presented evidence that Meta teams were working on increasing the minutes users spent on their platforms each day, Zuckerberg countered that the company had long ago moved on from those objectives, or that those numbers were not even “goals” per se, just metrics of competitiveness within the industry. When Lanier questioned if Meta was merely hiding behind an age limit policy that was “unenforced” and maybe “unenforceable,” per an email from Nick Clegg, Meta’s former president of global affairs, Zuckerberg calmly deflected with a narrative about people circumventing their safeguards despite continual improvements on that front.

Lanier, though, could always return to K.G.M., who he said had signed up for Instagram at the age of 9, some five years before the app started asking users for their birthday in 2019. While Zuckerberg could more or less brush off internal data on, say, the need to convert tweens into loyal teen users, or Meta’s apparent rejection of the alarming expert analysis it had commissioned on the risks of Instagram’s “beauty filters,” he didn’t have a prepackaged response to Lanier’s grand finale: a billboard-sized tarp of hundreds of posts from K.G.M.’s Instagram account, which took up half the width of the courtroom and required seven people to hold. As Zuckerberg blinked hard at the vast display, visible only to himself, Kuhl, and the jury, Lanier said it was a measure of the sheer amount of time K.G.M. had poured into the app. “In a sense, y’all own these pictures,” he added. “I’m not sure that’s accurate,” Zuckerberg replied.

When Lanier had finished and Schmidt was given the chance to set Zuckerberg up for an alternate vision of Meta as a utopia of connection and free expression, the founder quickly gained his stride again. “I wanted people to have a good experience with it,” he said of the company’s platforms. Then, a moment later: “People shift their time naturally according to what they find valuable.”



The Best Bose Noise-Canceling Headphones Are Discounted Right Now


Bose helped write the book on noise canceling when it entered the market way back in the 1970s. Lately, the brand has been on a tear, with the goal of creating the ultimate in sonic solitude. The QuietComfort Ultra Gen 2 are Bose’s latest and greatest creation, offering among the very best noise canceling we’ve ever tested.

Just as importantly, they’re currently on sale for $50 off. Now, this might not seem like a huge discount on a $450 pair of headphones, but this is the lowest price we’ve seen on these headphones outside of a major shopping holiday. So if you missed your chance during Black Friday but you have a spring break trip to Mexico or Hawaii on the calendar, this is your best bet.

The Best Noise Canceling Headphones Are on Sale

I’ve wondered over the last few years if the best noise cancelers even needed another potency upgrade. Previous efforts like Sony’s WH-1000XM5, Apple’s AirPods Max, and Bose’s own QuietComfort 45 offer enough silence that my own wife gives me a jump scare when she walks up behind me.

Then I had a kid.

Bose’s aptly named QuietComfort Ultra do a fantastic job quelling the many squeaks, squawks, and adorable pre-nap protests my baby makes. Now that my wife and I have turned my solo office into a shared space, I can go about my business in near total sonic freedom, even as she sits in on a loud Zoom call.

In testing, we found Sony’s latest WH-1000XM6 offered a slight bump in noise canceling performance over Bose’s latest, due in part to their zippy response time when attacking unwanted sounds. But both were within a hair of each other when tested across frequencies. I prefer Bose’s pair for travel, due to their more cushy design that lets me listen for a full cross-country flight in luxe comfort.

Upgrades to the latest generation, like the ability to put the headphones to sleep and quickly wake them, make them noticeably more intuitive to use daily. The new built-in USB-C audio interface lets you listen to lossless audio directly from supported devices, a nice touch now that Spotify has joined Apple Music and other services in supporting lossless audio.

Speaking of audio, the QC Ultra Gen 2’s performance is impressive, providing clear and crisp detail and dialog, with a lively touch that brings some added excitement to instruments like percussion or zippy guitar tones. It’s a lovely overall presentation. I’m not a huge fan of the new spatial audio mode (what Bose calls Cinema mode), but it’s always nice to have options.

These headphones often bounce between full price and this $50 discount, so if you’ve been waiting for the dip, now’s the time to buy. If you deal with daily distractions like I do, whether at home or in a busy office space, you’ll appreciate the level of sound-smashing solitude Bose’s best-ever noise cancelers can provide.


This Defense Company Made AI Agents That Blow Things Up


Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI’s agents are designed to seek and destroy things in the physical world with exploding drones.

In a recent demonstration, held at an undisclosed military base in central California, Scout AI’s technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.

“We need to bring next-generation AI to the military,” Colby Adcock, Scout AI’s CEO, told me in a recent interview. (Adcock’s brother, Brett Adcock, is the CEO of Figure AI, a startup working on humanoid robots). “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agentic assistant to being a warfighter.”

Adcock’s company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that harnessing AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls.

“It’s good for defense tech startups to push the envelope with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That’s exactly what they should be doing if the US is going to lead in military adoption of AI.”

Horowitz also notes, though, that harnessing the latest AI advances can prove particularly difficult in practice.

Large language models are inherently unpredictable, and AI agents, like the ones that control the popular AI assistant OpenClaw, can misbehave when given even relatively benign tasks like ordering goods online. Horowitz says it may be especially hard to demonstrate that such systems are robust from a cybersecurity standpoint—something that would be required for widespread military use.

Scout AI’s recent demo involved several steps where AI had free rein over combat systems.

At the outset of the mission the following command was fed into a Scout AI system known as Fury Orchestrator:

Fury Orchestrator, send 1 ground vehicle to checkpoint ALPHA. Execute a 2 drone kinetic strike mission. Destroy the blue truck 500m East of the airfield and send confirmation.

A relatively large AI model with over 100 billion parameters, which can run either on a secure cloud platform or on an air-gapped computer on-site, interprets the initial command. Scout AI uses an undisclosed open source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicles and the drones involved in the exercise. The smaller models also act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles’ movements.

Seconds after receiving marching orders, the ground vehicle zipped off along a dirt road that winds between brush and trees. A few minutes later, the vehicle came to a stop and dispatched the pair of drones, which flew into the area where it had been instructed that the target was waiting. After spotting the truck, an AI agent running on one of the drones issued an order to fly toward it and detonate an explosive charge just before impact.


