Interview: Paul Neville, director of digital, data and technology, The Pensions Regulator | Computer Weekly

Paul Neville, director of digital, data and technology at The Pensions Regulator (TPR), is building strong IT foundations as part of a five-year strategy to help transform the organisation from a compliance-based to a risk-based regulator. He explains what that change will mean in practice over the next few years.

“As a regulator, we’ll obviously still have specific processes we expect people to follow, but we’ll be much more concerned about the outcome that we’re trying to achieve, and we’ll make decisions based on that demand,” he says.

“To make that shift, we need to understand our data. We need to have the right level of automation to explore information, measure outcomes, and deliver those outcomes with industry and other government bodies interested in pensions. We imagine a future world in which information flows between organisations.”

A historian by education, Neville entered the world of business as the internet boom gathered pace in the 1990s. Describing himself as a self-taught digital leader, he developed his skills in the commercial sector at blue-chip companies such as Sky and BT, and with startups and smaller businesses.

His transformation work in larger firms focused on delivering big technology-enabled change programmes, centred on boosting customer experiences. Mid-career, he decided to apply those skills for public benefit and worked as a consultant for two major charities, Marie Curie and Macmillan, helping those organisations to transform digitally.

Neville then turned to the public sector to apply his skills in another for-good area. He worked in digital leadership roles at the London Borough of Waltham Forest, UK Export Finance and Enfield Council, before joining TPR in October 2023. Neville reflects on this final move.

“It was the opportunity to take all of that experience and deliver on a national scale and impact everybody, because almost everyone has a pension, and the opportunity to make that process work for the citizens of this country, and make a difference for people in retirement, is a massive issue,” he says.

“Secondly, the chief executive, my boss, Nausicaa Delfas, was setting up an opportunity to change, not only TPR, but the pensions industry, so the role felt like a chance to be a central part of that journey, because not every CIO gets to sit on the board of an organisation.”

Transforming processes

Neville reflects on the transformation journey at TPR, saying it’s been an exciting ride: “Everyone on the executive board is aligned on the fact that digital, data and technology are the key enablers for helping us change as an organisation, and also helping the pensions industry transform.”

Late last year, Neville launched a digital, data and technology strategy, a set of missions over a five-year plan to renew TPR’s capabilities, embracing new ways of working, driving efficiency, automation and innovation. In March this year, he launched the data component of the strategy, which establishes a collaborative plan to drive adoption of new data technologies and standards.

“I am proud of that strategic work,” he says. “That effort includes strengthening our technology foundations, improving our capability in terms of automation, and making sure we have the skills in my team to develop the future. We’ve hired quite a lot of people and also consolidated similar skills across the organisation, and that’s enabled us to deliver more and save money on suppliers, because we’ve done a lot in-house.”

Neville says the projects his team has worked on include delivering artificial intelligence (AI) tools that help increase automation. They’ve also focused on improving cyber security and data governance to ensure safe and secure access to high-quality internal information.

The team also recently launched an innovation service to foster conversations with industry stakeholders. Neville says TPR is encouraging and enabling people and organisations to think differently about the services they deliver to their customers and the benefits they provide.

“That’s just a small selection of the things we’ve done so far,” he says. “We’ve got just under four years left of the plan. There’s a lot more we want to do, but we have built the confidence, both internally and externally, that we are a different TPR and we can deliver. That encourages everyone in our industry to think differently as well.”

Building foundations

Neville says the transformation work enabled through the strategy so far is focused on building the right technological foundations at TPR.

In addition to cyber security and data governance projects, his team has focused on service management initiatives that help TPR rationalise its application estate. The organisation has adopted an agile, product-based approach to deliver reusable capabilities for flexible services in key areas related to pensions governance within the organisation and externally.

TPR is also making progress on automation, including in case management. He inherited a situation where cases were often managed on spreadsheets or via one-off technology solutions. In short, nothing was joined up. Neville is using automation, via Microsoft Dynamics 365, to take a different approach.

“Everyone on the executive board is aligned on the fact that digital, data and technology are the key enablers for helping us change as an organisation, and also helping the pensions industry transform”

Paul Neville, The Pensions Regulator

“We’re delivering a single case management system,” he says. “We are working to make sure the process is streamlined, so we’re thinking about the business process first. By taking that approach, we can deliver in an agile and iterative way. Where we’ve already rolled that technology out, we’ve delivered productivity savings of around 60%.”

Neville expects the progress made through case management automation to be repeated in other areas. As automation takes hold in the organisation, he anticipates people will spend less time on paperwork and more time delivering better services.

Given the developments in the technology sector during the past few years, AI is playing a key role.

“We are deploying AI to specific use cases,” he says. “I’ve got a fantastic data science team, who are developing lots of very clever tools for us.”

Embracing AI

Neville says the next two years will be spent honing these technology initiatives and delivering tangible results.

Critical projects include implementing organisation-wide access to data via Dynamics 365 services and completing transformation projects in core areas, such as cyber security and data governance. It’s these foundations and the application of emerging technology that will help TPR transform from a compliance-based to a risk-based regulator.

Two years from now, Neville expects all foundational work, from case management to customer relationship management (CRM) systems, will be embedded within the organisation. On these foundations, employees will use AI-enabled tools to boost their working processes.

“That preparatory work will enable us in the future to create more customer-facing digital capabilities,” he says.

One example of where TPR is applying AI is analysing online news sites to scan for potential risks in pension schemes. Neville saw AI could provide a helping hand to what is currently a manually intensive process.

“That’s a great example, because many pension schemes don’t have the same name as the provider,” he says. “The technology does quite a lot of joining up behind the scenes to make that process work.”
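
Computer Weekly does not describe TPR's implementation, but the "joining up" Neville mentions is essentially fuzzy entity matching: linking an organisation named in an article to scheme records whose registered names differ. A minimal sketch using Python's standard difflib, with all scheme and provider names invented:

```python
from difflib import SequenceMatcher

# Illustrative records only: a scheme's registered name often differs
# from the provider or employer name that appears in news coverage.
SCHEMES = [
    ("Acme Group Retirement Benefits Plan", "Acme Holdings Ltd"),
    ("Northern Widgets Staff Pension Scheme", "Northern Widgets plc"),
]

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_mention(mention: str, threshold: float = 0.6) -> list[tuple[str, float]]:
    """Link an organisation mentioned in an article to candidate schemes,
    scoring against both the scheme name and the provider name."""
    candidates = []
    for scheme, provider in SCHEMES:
        score = max(similarity(mention, scheme), similarity(mention, provider))
        if score >= threshold:
            candidates.append((scheme, round(score, 2)))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# The provider name, not the scheme name, appears in the article text.
print(match_mention("Acme Holdings"))
```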

Another example is using AI to analyse Task Force on Climate-Related Financial Disclosures (TCFD) statements, which organisations must submit to comply with legislation. Once again, generative technology – in the form of OpenAI and Microsoft Azure technology – is helping TPR staff summarise lengthy prose and create insights as a basis for intervention when required.
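
Again, the article does not detail the pipeline. As a minimal sketch of what a summarisation call of this kind can look like, assuming the openai Python package (v1+) and a hypothetical Azure OpenAI deployment (the endpoint, key variable and deployment name below are placeholders, not TPR's setup):

```python
import os
from openai import AzureOpenAI

# Endpoint, API version and deployment name are illustrative placeholders.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",
)

def summarise_tcfd(statement_text: str) -> str:
    """Condense a lengthy TCFD statement into reviewer-ready points."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the Azure *deployment* name, assumed here
        messages=[
            {"role": "system",
             "content": "You summarise TCFD climate disclosures for a "
                        "pensions regulator. List the key risks, governance "
                        "arrangements and any apparent gaps against the "
                        "reporting requirements."},
            {"role": "user", "content": statement_text},
        ],
    )
    return response.choices[0].message.content
```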

“Those are just two examples,” says Neville. “We’ve got other risk tools that we’re using. We are also rolling out Copilot internally, and we’re in the middle of our plan for that technology. We’re trialling GitHub Copilot for our developers, and they’re starting to write test scripts, which is fun. We’re still at the beginning of this work, as are lots of people, but these projects are a taster of what we want to achieve.”

Solving challenges

Neville says the result of this work will be that the future TPR will have an operating environment that differs greatly from its traditional, manually intensive processes. Today, the organisation maintains a digital portal, where people send, for example, pension scheme returns as part of a large, intensive data upload. Neville foresees a better approach.

We need to understand our data, and so does the industry. The firms need to provide better customer experiences for people, like you and me, who have pensions
Paul Neville, The Pensions Regulator

“There won’t necessarily be a scheme return like you see today, because we will have the information we need, and organisations across the industry will be more digitally enabled, so they’re able to drive the kind of innovation and competition in the market that will benefit savers, people with pensions and employers that offer pensions,” he says.

This new level of digital interaction will make it easier for TPR and organisations in the pensions sector to tackle some of the thorny issues of the day. One of these issues is adequacy, or the extent to which people save enough money in pension schemes for their retirement.

“We need to understand our data, and so does the industry. The firms need to provide better customer experiences for people, like you and me, who have pensions. By driving a customer focus, we think the industry will perform better,” he says.

“We may even feel a bit like a fintech as an organisation, because we’ll be enabling innovation. Technology will produce the insights we need to work with the industry. So, we could be operating in a completely different world, which drives innovation and change for everyone.”

Neville continues to seek ways to push transformation forward. He recently helped launch the Pensions Data and Digital Working Group, which will help ensure TPR and the pensions industry work together to embrace digital, data and technology and achieve the digitalisation and automation aims outlined in the five-year strategy.

“The working group has 15 members,” he says. “It represents a cross-section of people from different parts of industry, so trustees, actuaries, lawyers, but also people from more technical backgrounds as well. It’s about getting all kinds of people involved to help solve the problems and move to this new world.”




Heading to the Sauna? You Only Need 20 Minutes


Like cold plunging, sauna use isn’t suitable for everyone, however. If you have any heart, kidney, blood pressure, or respiratory concerns or are pregnant, you should avoid the sauna, for example. If you are unsure, you should always consult your doctor before use. And regardless of your level of sauna experience, if you feel lightheaded, nauseous, or uncomfortable in any way, you must leave the sauna immediately to avoid overheating or dehydration.

Traditional Sauna Vs. Infrared Sauna

How long you spend in a sauna also depends on what type of sauna you have, be it a traditional dry sauna, infrared sauna, or perhaps a steam sauna. The temperature of your sauna also matters, as the higher the temperature or humidity, the less time you can safely stay inside.

The first of the two most popular options is the traditional Finnish-style dry sauna, which runs on high heat with low humidity at around 160 to 200 degrees Fahrenheit (70 to 100 Celsius). A typical session can last around eight to 10 minutes and is widely recommended three to four times a week for general health and relaxation. Pure Saunas suggests capping your sauna session at 20 minutes. Longer than that can lead to dehydration or overheating.

Meanwhile, an infrared sauna uses infrared light to heat the body at lower temperatures between 120 and 150 Fahrenheit (50-65 C). As the heat feels milder, Pure Saunas suggests a time range between 20 and 30 minutes. While experienced sauna users may be able to go to 30 minutes, it's safer to keep sessions under 20 minutes.

The Benefits of Heat and Movement

Aside from counting down the minutes on the sand timer, there's another way to "be" while in a sauna. Space may be limited, but gentle, intentional stretching in the sauna not only feels great but can be beneficial. A study by Harvard Medical School found that a hot yoga flow may even ease depression, for example, which is an indication of how well heat and movement go together.

“Learning to move and breathe calmly in heat teaches you to self-regulate and to stay centered when things feel intense,” says Nick Higgins from Hotpod Yoga. “It also elevates the heart rate and circulation, giving a gentle cardiovascular boost even during slower, more mindful flows. Whether you’re flowing through yoga or sitting, that mindful relationship with heat can be both grounding and transformative. Warmth encourages muscles to soften and lengthen, supporting flexibility and joint mobility while reducing the risk of strain.”

Your fellow sauna buddies may not appreciate you attempting a full-on sun salutation in such a tight space, but there are a few subtle yoga poses you can try.

“Certain stretches feel more accessible when the muscles are warm and supple, such as hip openers like Pigeon Pose, gentle backbends like Cobra or Bridge, and hamstring stretches like Forward Fold,” says Higgins. “The heat helps you ease deeper into those postures with control rather than force, which is key to safe, sustainable flexibility.”




Robots that spare warehouse workers the heavy lifting



There are some jobs human bodies just weren’t meant to do. Unloading trucks and shipping containers is a repetitive, grueling task — and a big reason warehouse injury rates are more than twice the national average.

The Pickle Robot Company wants its machines to do the heavy lifting. The company’s one-armed robots autonomously unload trailers, picking up boxes weighing up to 50 pounds and placing them onto onboard conveyor belts for warehouses of all types.

The company name, an homage to The Apple Computer Company, hints at the ambitions of founders AJ Meyer ’09, Ariana Eisenstein ’15, SM ’16, and Dan Paluska ’97, SM ’00. The founders want to make the company the technology leader for supply chain automation.

The company’s unloading robots combine generative AI and machine-learning algorithms with sensors, cameras, and machine-vision software to navigate new environments on day one and improve performance over time. Much of the company’s hardware is adapted from industrial partners. You may recognize the arm, for instance, from car manufacturing lines — though you may not have seen it in bright pickle-green.

The company is already working with customers like UPS, Ryobi Tools, and Yusen Logistics to take a load off warehouse workers, freeing them to solve other supply chain bottlenecks in the process.

“Humans are really good edge-case problem solvers, and robots are not,” Paluska says. “How can the robot, which is really good at the brute force, repetitive tasks, interact with humans to solve more problems? Human bodies and minds are so adaptable, the way we sense and respond to the environment is so adaptable, and robots aren’t going to replace that anytime soon. But there’s so much drudgery we can get rid of.”

Finding problems for robots

Meyer and Eisenstein majored in computer science and electrical engineering at MIT, but they didn’t work together until after graduation, when Meyer started the technology consultancy Leaf Labs, which specializes in building embedded computer systems for things like robots, cars, and satellites.

“A bunch of friends from MIT ran that shop,” Meyer recalls, noting it’s still running today. “Ari worked there, Dan consulted there, and we worked on some big projects. We were the primary software and digital design team behind Project Ara, a smartphone for Google, and we worked on a bunch of interesting government projects. It was really a lifestyle company for MIT kids. But 10 years go by, and we thought, ‘We didn’t get into this to do consulting. We got into this to do robots.’”

When Meyer graduated in 2009, problems like robot dexterity seemed insurmountable. By 2018, the rise of algorithmic approaches like neural networks had brought huge advances to robotic manipulation and navigation.

To figure out what problem to solve with robots, the founders talked to people in industries as diverse as agriculture, food prep, and hospitality. At some point, they started visiting logistics warehouses, bringing a stopwatch to see how long it took workers to complete different tasks.

“In 2018, we went to a UPS warehouse and watched 15 guys unloading trucks during a winter night shift,” Meyer recalls. “We spoke to everyone, and not a single person had worked there for more than 90 days. We asked, ‘Why not?’ They laughed at us. They said, ‘Have you tried to do this job before?’”

It turns out warehouse turnover is one of the industry’s biggest problems, limiting productivity as managers constantly grapple with hiring, onboarding, and training.

The founders raised a seed funding round and built robots that could sort boxes because it was an easier problem that allowed them to work with technology like grippers and barcode scanners. Their robots eventually worked, but the company wasn’t growing fast enough to be profitable. Worse yet, the founders were having trouble raising money.

“We were desperately low on funds,” Meyer recalls. “So we thought, ‘Why spend our last dollar on a warm-up task?’”

With money dwindling, the founders built a proof-of-concept robot that could unload trucks reliably for about 20 seconds at a time and posted a video of it on YouTube. Hundreds of potential customers reached out. The interest was enough to get investors back on board to keep the company alive.

The company piloted its first unloading system for a year with a customer in the desert of California, sparing human workers from unloading shipping containers that can reach temperatures up to 130 degrees in the summer. It has since scaled deployments with multiple customers and gained traction among third-party logistics centers across the U.S.

The company’s robotic arm is made by the German industrial robotics giant KUKA. The robots are mounted on a custom mobile base with an onboard computing system so they can navigate to docks and adjust their positions inside trailers autonomously while lifting. The end of each arm features a suction gripper that clings to packages and moves them to the onboard conveyor belt.

The company’s robots can pick up boxes ranging in size from 5-inch cubes to 24-by-30-inch boxes. The robots can unload anywhere from 400 to 1,500 cases per hour depending on size and weight. The company fine-tunes pre-trained generative AI models and uses a number of smaller models to ensure the robot runs smoothly in every setting.

The company is also developing a software platform it can integrate with third-party hardware, from humanoid robots to autonomous forklifts.

“Our immediate product roadmap is load and unload,” Meyer says. “But we’re also hoping to connect these third-party platforms. Other companies are also trying to connect robots. What does it mean for the robot unloading a truck to talk to the robot palletizing, or for the forklift to talk to the inventory drone? Can they do the job faster? I think there’s a big network coming in which we need to orchestrate the robots and the automation across the entire supply chain, from the mines to the factories to your front door.”

“Why not us?”

The Pickle Robot Company employs about 130 people in its office in Charlestown, Massachusetts, where a standard — if green — office gives way to a warehouse where its robots can be seen loading boxes onto conveyor belts alongside human workers and manufacturing lines.

This summer, Pickle will be ramping up production of a new version of its system, with further plans to begin designing a two-armed robot sometime after that.

“My supervisor at Leaf Labs once told me ‘No one knows what they’re doing, so why not us?’” Eisenstein says. “I carry that with me all the time. I’ve been very lucky to be able to work with so many talented, experienced people in my career. They all bring their own skill sets and understanding. That’s a massive opportunity — and it’s the only way something as hard as what we’re doing is going to work.”

Moving forward, the company sees many other robot-shaped problems for its machines.

“We didn’t start out by saying, ‘Let’s load and unload a truck,’” Meyer says. “We said, ‘What does it take to make a great robot business?’ Unloading trucks is the first chapter. Now we’ve built a platform to make the next robot that helps with more jobs, starting in logistics but then ultimately in manufacturing, retail, and hopefully the entire supply chain.”




Ryder Cup takes its best network shot | Computer Weekly


The sporting calendar is full of big events, but some end up being unforgettable. And this year’s Ryder Cup golf tournament at Bethpage Black golf course in New York was one of them, not just for what happened on the course, but also off it.

But as the action and drama unfolded on and off the golf course, underneath it all was an element that kept everything running: the performance of the network on which the event’s management relied. In networking, that is very much job done.

And this job was more complex than ever before. It involved supporting multi-faceted demands from all stakeholders involved in the event, spanning retail, commercial, operations, security, safety and broadcast.

First contested in 1927 at the Worcester Country Club in the US, and named after its founder Samuel Ryder, the Ryder Cup has become one of the world’s leading sporting events. Every two years, 24 of the best players from Europe and the US go head-to-head in match play golf. It is set to celebrate its centenary in County Limerick in Ireland.

For this year’s competition, the sporting and organisational bar was set higher than ever, and getting things right depended on what the organisers described as a connected intelligence platform that had the flexibility to ingest, unify and analyse tournament data from a variety of sources to gain real-time insights and intelligence. These sources included crowd locations, drink and merchandise point of sale devices, ticket scans, weather information, the location of golf carts, historical footage of past Ryder Cup events, site plans and operational manuals.
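
The organisers have not published the platform’s internals, but a common way to “ingest and unify” such varied feeds is to normalise every message into a single timestamped event shape before analysis. A minimal sketch, with all field names invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TournamentEvent:
    """One normalised record, whatever the originating feed."""
    source: str          # e.g. "ticket_scan", "pos", "weather", "cart_gps"
    timestamp: datetime
    location: str        # gate, pavilion or course zone
    payload: dict        # source-specific fields kept for later analysis

def from_ticket_scan(raw: dict) -> TournamentEvent:
    """Map one raw ticket-scan message into the unified shape."""
    return TournamentEvent(
        source="ticket_scan",
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        location=raw["gate"],
        payload={"ticket_id": raw["id"]},
    )

# Analogous adapters for PoS, weather and cart feeds would emit the same
# shape, so downstream analytics can query one stream.
event = from_ticket_scan({"ts": 1759000000, "gate": "Gate 3", "id": "T-0001"})
```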

From these sources, the event organisers wanted to achieve a defined series of outcomes, including real-time operational management and decision-making; operational intelligence to inform future event planning to optimise revenue, costs and guest experiences; enhanced tools and resources to assist event staff; immediate value realisation; and utilisation of tournament content.

But the bottom line was that to gain these outcomes, the Ryder Cup team needed a scalable, reliable and reusable data infrastructure. To achieve this, the Bethpage Black event’s underlying communications infrastructure was based on technology from HPE.

A history of connectivity

The two parties first began working together in 2018, and the relationship accelerated in 2023 for the Rome edition of the Ryder Cup, when HPE provided greatly expanded coverage, alongside enhanced fan and staff experiences, based on an integrated private 5G and Wi-Fi network. The infrastructure combined the private 5G technology of Athonet, which HPE had recently acquired at the time, and HPE Aruba Networking’s Wi-Fi technology.

For 2023, the basic job was to support “the most digitally engaging” Ryder Cup, with network innovation designed to deliver enhanced fan experiences. That meant not only addressing spectators’ demands for constant connectivity, but also providing a personalised, immersive experience driven by rich content. Fans were able to use the network in Rome to navigate around the course virtually, jump the queues for merchandise and food, and track player locations. Similarly, operations staff were able to monitor fan behaviour, assign more staff during peak periods, and provide fan activations whenever and wherever needed.

Wi-Fi 6/6E networks provided the mainstay of the connectivity required for thousands of fans congregating in popular areas, while private 5G was the base for wide-area coverage to more remote parts of the golf course, as well as a secure private network dedicated to critical operations staff.

In reality, the course presented unique connectivity challenges, not only to the organisers, but to players and fans alike. It covered 370 acres of archaeologically protected countryside, meaning HPE could not dig ducting for fibre optic cable across all of the course. Furthermore, the fixed cables attracted the unwanted attention of fibre-hungry rats running around the site, meaning integrated Wi-Fi and private 5G wireless options were imperative.

Despite still having to deal with some unwelcome creatures around the course, at least there were none of the four-legged variety at Bethpage Black. But there were plenty of people to deal with – the network had to cope with the demands of around 250,000 spectators plus staff over the weekend, making the job equivalent to servicing the connectivity needs of a small city.

Building blocks of modern IT

For James Robertson, HPE vice-president and CTO of industrial strategy, and CTO of media, entertainment, hospitality and sports, what was delivered in 2025 was probably one of the largest temporary networks in the world, and reflective of what HPE describes as the “essential building blocks” for modern IT: AI, hybrid cloud and networking. Any idea of a reliable, never mind elevated, experience could not have been considered without the use of AI.

“It takes these three key building blocks to make it work, to be able to gather the data right – whether it be IoT telemetry, network telemetry or other environmental telemetry – to bring it together and to process it with AI,” said Robertson. “It’s all data. And we have to make sure that we’re collecting, curating and collating the data in the most appropriate way based on the event. One of the things that we’re really proud of is how we’ve gone about this before the Ryder Cup this time around.”

In terms of basic core networking capabilities, the network comprised 170 CX switches out on the course in different locations, 650 Wi-Fi 6E access points (APs), 25 user experience insight sensors, a three-node private 5G network, 67 AI-enabled cameras and the HPE Aruba Networking Central management system.

It takes these three key building blocks to make it work, to be able to gather the data right, to bring it together and to process it with AI
James Robertson, HPE

There was also an HPE Private Cloud AI colocated with the network core in an air-cooled trailer on site, with multiple telemetry feeds generated via APIs. These gave notice of weather alerts, along with ticket scans, merchandising orchestration, food and beverage sales, and staff cart locations, as well as guest counting and queue management in real time. It also supported video search and statistics for the hundreds of accredited media at the event.

Yet Robertson feels that in such deployments as the Ryder Cup, simply building a network is table stakes: “We know how to do it. [We had] 1,500 acres of a golf course that we covered from end to end, edge to edge with coverage for Wi-Fi. [But also] we said if networking is table stakes, how do we go to that next level? What can we do to elevate the experience even more?”

The user experience sensors were of particular importance, he added: “Think of [them] as being a synthetic user where you can programme different tests, different things that you want to [observe such as] whether it’s accessing a website or whether it’s basic network, up, down, can I get a DHCP address, can I do something on a website? Am I getting the right response from the website?

“So, these synthetic users sitting around the course are basically giving us a view of what the people walking around the course are seeing in real time, but we’re doing it from a telemetry standpoint. We’re gathering that data, which is fed into the equation about how the infrastructure is behaving, even at the far flung reaches of the ninth hole all the way back.”
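
HPE’s sensors are proprietary, but the “synthetic user” idea itself is simple to illustrate: a probe that fetches a page the way a spectator’s phone would, and records the outcome and latency. A bare-bones sketch (the URL and timeout are placeholders):

```python
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    """Act as a synthetic user: fetch a page and record how it went."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False  # DNS failure, timeout, connection refused, HTTP error
    return {
        "url": url,
        "ok": ok,
        "latency_ms": round((time.monotonic() - started) * 1000, 1),
        "at": time.time(),
    }

# Run from fixed points around the course and ship results to telemetry.
print(probe("https://example.com"))
```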

Key to reaching these far-flung places was a three-node private 5G network using the CBRS spectrum as deployed in the US. Used for operational comms and for backhaul, it also plugged last-minute coverage gaps in New York – for example, for a pop-up business that needed a mobile payment processing system.

Real-time insights

Yet even for such advanced tech, there was a large human element to the sensors and their positioning. HPE engineers placed the user experience sensors at human height: because they were supposed to take in data from the same perspective as people, they were set at the height of an average person performing typical tasks, giving a better indication of the health of the infrastructure than sensors embedded in the AP hardware.

And similarly, in running tests on network health, HPE examined “normal” behaviour, such as how well people could access social media sites to post and stream content. It addressed questions such as whether the network was seeing latency changes going to those sites, and how much load there was on specific APs when people moved en masse across the course to a location where golf was taking place.
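
Turning such measurements into an early warning can be as simple as comparing each new sample against a probe’s own rolling baseline. A sketch using only the Python standard library; the window size and sigma threshold are arbitrary illustrative choices, not HPE’s:

```python
from collections import deque
from statistics import mean, stdev

class LatencyWatch:
    """Flag a probe's latency when it drifts well above its own baseline."""

    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.history = deque(maxlen=window)  # recent samples in ms
        self.sigmas = sigmas

    def observe(self, latency_ms: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            baseline = mean(self.history)
            spread = stdev(self.history)
            anomalous = latency_ms > baseline + self.sigmas * max(spread, 1.0)
        self.history.append(latency_ms)
        return anomalous

watch = LatencyWatch()
for sample in [42, 40, 45, 41, 43, 39, 44, 42, 40, 41, 180]:
    if watch.observe(sample):
        print(f"latency spike: {sample} ms")  # fires on the 180 ms sample
```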

This was done, noted Robertson, precisely to give the HPE on-site engineers an early warning system to ascertain how reliably the network was functioning. The key, again, was the telemetry data enabling understanding of the infrastructure.

HPE built for the Ryder Cup team a real-time operational intelligence dashboard that tracked essential items during operations, such as flow, wait times, sales and weather, while monitoring live data feeds from HPE networks, including video streams. It could, for example, give indications of crowd surges near pavilions due to weather events, and increases in merchandise as well as food and beverage sales. Where these peaks aligned with heavier network usage in an area, the organisers could assign extra staff, increase merchandising, scale F&B and increase bandwidth to accommodate users.
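
The dashboard’s actual rules are not public. As an illustration of how crowd, sales and network signals might be joined into a staffing decision, a toy rule could look like this (all thresholds invented):

```python
def staffing_alert(zone: str, people: int, sales_per_min: float,
                   ap_utilisation: float) -> str | None:
    """Toy decision rule joining crowd, sales and network signals for a zone.
    Thresholds are invented for illustration, not drawn from the event."""
    surging = people > 4000 or sales_per_min > 50
    network_hot = ap_utilisation > 0.8  # fraction of AP capacity in use
    if surging and network_hot:
        return f"{zone}: add staff, scale F&B, raise bandwidth allocation"
    if surging:
        return f"{zone}: add staff"
    return None

print(staffing_alert("Pavilion East", people=5200,
                     sales_per_min=64.0, ap_utilisation=0.87))
```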

Michael Cole, CTO of the European Tour group and Ryder Cup Europe, revealed that the HPE operational intelligence platform could prepare, unify and feed ingested data into advanced analytics tools – such as computer vision crowd analytics and an AI assistant – to provide the required real-time insights.

For Cole, a core deliverable from the investment in the HPE network, and AI in particular, was to realise immediate value from AI use cases, gaining knowledge on network utilisation, bandwidth thresholds and the ability to maximise network performance insights.

For example, spectator behaviour analysis offered entry and exit counting, dwell times, queue duration and people counting. An enterprise data platform enabled sharing of the content and analytics environment for all stakeholders, including secure data access from the network ecosystem.

A video search assistant – using an engine built on an Nvidia blueprint, running on HPE Private Cloud AI – allowed users to seek out any Ryder Cup video instantly, searching by player, hole number or moment. It had an AI tool that found highlights by searching and summarising Ryder Cup footage using natural language processing. Historical footage was indexed and searchable via an AI interface.

Crucially, the AI use cases included network alerting and workflows, showing network utilisation and bandwidth thresholds, enabling network performance insights to be maximised. An essential element of the HPE setup was to use agentic AI capabilities to diagnose problems before they occurred.

“[We needed to see network traffic] data in advance, sorting [a problem] before it happened and basically being in the mindset that – if something really goes wrong – we just swap it out. You don’t have time to diagnose and fix it. We just swap it out and keep going,” said Robertson.

The layout of the network infrastructure

The network fibre backhaul feeding into the trailers housing the network control centre exemplified perfectly the essence of managing an outdoor event of this scale. The fibre cables were run around the course under specially reinforced ducting that could take the weight of fully laden tractors and golf carts, and of the thousands of people walking by. Near the end of the cable run by the control centres, the most practical route was to hang them from the branches of trees and over pathways.

The engineers worked inside the air-cooled trailers used to monitor performance. A trailer housed two server racks side by side, with cables cross-connected into the core switch, the aggregation switches and the uplink to the internet. In the lower half of the racks sat the PC AI system, the GPU worker node and the cluster node.

On a practical note, the server room needed massive cooling through big air conditioners, thanks mainly to the heat generated by the GPU-based PC AI system. The engineers stated they had never put such a PC AI server in a trailer before, adding that they would try not to do so next time due to the noise of the fans.

The HPE engineers described their network insight dashboard as a next-generation control centre, basically allowing them to “find a needle in a haystack”. Traditionally, network management indicated when systems were up or down, but the setup at Bethpage Black could advise where and when things were starting to go wrong or perform in a way that was not offering a good customer experience. In the past, the system used red or green signals to indicate the state of the network; the more fluid and flexible system now displays added colours for the various stages of network issues.

An engineer said: “I’ve got 719 devices total, between APs, switches and wireless gateways. I can look for clients and I can look for alerts. But my key points are all brought up here, where I can see things like overall usage, how the network is operating and what the health insights are.

“From an insight perspective, we’re taking information from the data lake that all this feeds into. We’re using the network as a big sensor to collect telemetry and feed it back, and then we’ll run it through machine learning engines or AI processes to go back and look for things. This helps us figure out things like, [how to locate something] in a large, deployed network, and how many switches do we have? And keeping track of things like, what firmware am I on?

“If I want to enable 6 GHz communications on Wi-Fi, I have to use new technology. I have to use things like WPA 3, but not all my clients support that yet. The insight [function] looks at all the client devices that are connecting to the network, not just my network, but any network that is serviced by HPE, to find what the client capabilities are. [It helps me find] the tipping point on an SSID where I go, ‘I have enough critical mass for the types of devices, the types of applications that they’re using, that it makes sense to go to WPA 3’. So, that gives me additional insight to help where I might go.”
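
That “tipping point” is, at heart, a capability ratio. A rough illustration, using entirely hypothetical client records rather than HPE’s telemetry schema:

```python
def wpa3_ready(clients: list[dict], threshold: float = 0.95) -> bool:
    """Decide whether an SSID has reached the 'tipping point' where enough
    observed clients support WPA 3 to require it. Client records and the
    threshold are illustrative, not a real controller's data model."""
    if not clients:
        return False
    capable = sum(1 for c in clients if c.get("supports_wpa3"))
    return capable / len(clients) >= threshold

clients = [
    {"mac": "aa:bb:cc:00:00:01", "supports_wpa3": True},
    {"mac": "aa:bb:cc:00:00:02", "supports_wpa3": False},
]
print(wpa3_ready(clients))  # False: only half the observed clients qualify
```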

Another essential point in the operation was that the system tracked events such as Syslog messages, using the network as a sensor. It was able to show, for example, how long it took an iPhone running the latest version of iOS to connect to the network. If an AP was offline, was it a systemic problem? Was it a power issue? How many times did the AP fail in a given time period? The AI was able to give specific information to say where problems were, the devices affected and the specific ports involved, and suggest an action to remediate or troubleshoot.
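
As a sketch of using logs as that kind of sensor, counting how often a given AP dropped within a time window might look like this; the log format is invented for illustration, and real controller syslog will differ:

```python
import re
from collections import Counter
from datetime import datetime

# Invented log line format for illustration: "<ISO timestamp> AP <name> UP|DOWN"
LINE = re.compile(r"^(?P<ts>\S+) AP (?P<ap>\S+) (?P<event>UP|DOWN)$")

def ap_down_counts(lines: list[str], since: datetime) -> Counter:
    """Count DOWN events per access point after a cut-off time."""
    downs = Counter()
    for line in lines:
        m = LINE.match(line)
        if not m or m["event"] != "DOWN":
            continue
        if datetime.fromisoformat(m["ts"]) >= since:
            downs[m["ap"]] += 1
    return downs

logs = [
    "2025-09-27T14:02:11 AP hole09-east DOWN",
    "2025-09-27T14:03:40 AP hole09-east UP",
]
print(ap_down_counts(logs, datetime(2025, 9, 27)))
# Counter({'hole09-east': 1}); repeated failures would suggest power, not radio
```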

HPE is looking forward to advancing this capability in the future through agentic AI reasoning agents that can correlate all network data, see how many people have been affected by a DHCP server problem, and determine whether an issue was client-specific, for example. The goal is that such actions will become self-healing over time.

On the subject of 6 GHz communications on Wi-Fi, Robertson added that another key issue to deal with was the frequency coordination of the many wireless technologies in use, including TV and broadcast equipment, as well as IT and networking – in particular, the private 5G setup.

“A lot of cameras now use 6 GHz, so we had to work with the camera operators and camera owners and manufacturers to understand what part of that spectrum they were using so we [could] reserve it and make sure that it didn’t conflict,” he said. “That’s frequency coordination: it’s imperative at an event like this because you’ve got everything from two-way radios going on, to communication with TV. The last thing you want is one of those mission critical signals to go down.”

And, in reality, they did not fail. While all was going wild around the course – how many people sent TikTok clips of Rory McIlroy being berated? – nobody mentioned the network in the post-event analysis. All the networking elements passed off without noticeable incident. The golf shots got hit, presidents danced, spectators spectated – and the network ran as required.

Job done – and now onwards to Limerick in 2027.


