Tech
Neos Networks to deliver live broadcast connectivity for Premier League | Computer Weekly

Broadcast infrastructure provider NEP Connect has selected Neos Networks to support the delivery of live English Premier League men’s football for the 2025–26 season.
Formerly known as SIS Live, NEP Connect has a long record in sports broadcasting and in providing critical connectivity services globally. The company’s technology team delivers content to sports viewers worldwide via its satellite and Anylive fibre infrastructure.
As well as supporting broadcasters such as the BBC, Channel 4, ITV and Sky Sports, NEP Connect works with commercial brands like Red Bull, Audi and McDonald’s to deliver experiential live events and social network streaming.
Neos Networks believes the broadcast industry’s shift to remote production has raised the bar for live delivery, with full-HD 1080p now the standard across major events. This demands significantly increased bandwidth and ultra-low latency connections to ensure “flawless” picture quality in real time. Additionally, the firm says the growing use of multiple camera angles – including aerial drone footage, 360-degree views, and on-field close-ups – has enhanced the immersive experience for fans watching at home or on mobile devices.
Meeting these requirements means building network infrastructure that can handle not only larger data volumes, but also the complex routing and synchronisation of multiple high-bandwidth video feeds – capabilities that Neos insists it is uniquely equipped to provide in partnership with NEP Connect.
The agreement extends a seven-year relationship between the two companies that will see Neos tasked with providing high-capacity, low-latency fibre connectivity.
With fibre backhaul already in place across all 20 Premier League stadiums, Neos Networks was well positioned to support NEP’s rapid mobilisation ahead of the upcoming season.
The fibre infrastructure will enable the live transport of high-definition broadcast feeds to NEP’s remote production hubs, reducing the need for on-site trucks and personnel, while maintaining the broadcast quality and resilience demanded by top-tier sport.
Underpinned by Neos Networks’ UK-wide network, the connectivity being provided also includes geographically diverse circuits to ensure maximum uptime, supporting NEP’s commitment to delivering uninterrupted coverage across multiple platforms.
“Live broadcast delivery demands absolute performance – from resilience and latency to geographic reach – and our relationship with NEP has always been built around meeting those challenges head-on,” said Lee Myall, CEO at Neos Networks.
“This latest project builds on a strong foundation of trust and shared understanding, and we’re pleased to continue supporting NEP with reliable, high-quality connectivity as they deliver some of the UK’s most-watched sporting content.”
Vince Russell, NEP Connect managing director, added: “We’re very proud to continue to provide connectivity solutions to Premier League venues via the unrivalled NEP Connect Anylive network, and our partnership with Neos Networks is a key component of this delivery. Our work together is focused on supporting our customers with connectivity that is reliable, scalable to their needs and backed by industry-leading expertise so they can be successful in delivering for their audiences.”
Tech
‘People Are So Proud of This’: How River and Lake Water Is Cooling Buildings

“In the old days, it was more like a luxury project,” says Deo de Klerk, team lead for heating and cooling solutions at the Dutch energy firm Eneco. Today, his company’s clients increasingly ask for district cooling as well as district heating systems. Eneco has 33 heating and cooling projects under construction. In Rotterdam, Netherlands, one of the company’s installations helps to cool buildings, including apartment blocks, police offices, a theater and restaurants, using water from the River Meuse.
It’s not hard to see why cooling technologies are getting more popular. A few years ago, Nayral moved out of Paris. She remembers the heat waves. “My routine during the weekend was to go to the parks,” she says. Nayral would sit there well into the evening—reading Les Misérables, no less—waiting for her apartment to cool down. Recently, she has increasingly found herself spending time in shopping malls, where air-conditioning is plentiful, in order to make it through searing hot French summers. This year, unprecedented heat waves hit France and other countries in Europe.
The city of Paris is now desperate to help its denizens find cool refuges during spells of extreme heat. A key component of Parisian climate adaptation plans is the river-supplied cooling network, the pipes for which currently cover a distance of 100 kilometers, though this is due to expand to 245 km by 2042. While around 800 buildings are served by the network today, those in charge aim to supply 3,000 buildings by that future date.
Systems such as Paris’ do not pump river water around properties. Rather, a loop of pipework brings river water into facilities where it soaks up warmth from a separate, closed loop of water that connects to buildings. That heat transfer is possible thanks to devices called heat exchangers. When cooled water in the separate loop later arrives at buildings, more heat exchangers allow it to cool down fluid in pipes that feed air-conditioning devices in individual rooms. Essentially, heat from, say, a packed conference room or tourist-filled art gallery is gradually transferred—pipe by pipe—to a river or lake.
The efficiency of Paris’ system varies throughout the year, but even at the height of summer, when the Seine is warm, the coefficient of performance (COP)—how many kilowatt-hours of cooling energy you get for every kilowatt-hour of electricity consumed by the system—does not dip much below 4. In the winter, when offices, museums, and hospitals still require some air-conditioning, the COP can be as high as 15, much higher than conventional air-conditioning systems. “It is absolutely magnificent,” boasts Nayral.
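The COP arithmetic above can be sketched in a few lines. The figures below are illustrative assumptions chosen to match the seasonal range the article describes, not measurements from Fraîcheur de Paris:

```python
# Illustrative coefficient-of-performance (COP) arithmetic for a district
# cooling system: kWh of cooling delivered per kWh of electricity consumed.
# The input figures are assumptions for the sketch, not measured data.

def cop(cooling_kwh: float, electricity_kwh: float) -> float:
    """Kilowatt-hours of cooling delivered per kilowatt-hour of electricity."""
    return cooling_kwh / electricity_kwh

# Summer: a warm Seine limits efficiency, but COP stays around 4.
summer = cop(cooling_kwh=4000, electricity_kwh=1000)   # 4.0

# Winter: cold river water does most of the work, so COP can reach 15.
winter = cop(cooling_kwh=15000, electricity_kwh=1000)  # 15.0

# For comparison, a typical standalone air conditioner sits around COP 3,
# so the same kWh of electricity delivers far less cooling.
print(summer, winter)
```

The practical upshot: at COP 15, each unit of electricity moves fifteen units of heat out of a building, which is why the network outperforms conventional air-conditioning most of the year.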
But those summer temperatures are increasingly a concern. This summer, the Seine briefly exceeded 27 degrees Celsius (81 degrees Fahrenheit), says Nayral. How can that cool anything? The answer is chiller devices, which help to provide additional cooling for the water that circulates around buildings. Instead of blowing out hot air, those devices can expel their heat into the Seine via the river loop. The opportunity to keep doing this is narrowing, though—because Fraîcheur de Paris is not allowed to return water to the Seine at temperatures above 30 degrees Celsius, for environmental reasons. At present, that means the river can accommodate only a few additional degrees of heat on the hottest days. Future, stronger heat waves could erode that margin further.
Tech
SLA promises, security realities: Navigating the shared responsibility gap | Computer Weekly

The shared responsibility model (SRM) plays a central role in defining how security and operational duties are split between cloud providers and their customers. However, when this model intersects with service level agreements (SLAs), it introduces layers of complexity.
SLAs typically cover metrics like uptime, support response times and service performance, but often overlook critical elements such as data protection, breach response and regulatory compliance. This creates a responsibility gap, where assumptions about who is accountable can lead to serious blind spots. For instance, a customer might assume that the cloud provider’s SLA guarantees data protection, only to realise that their own misconfigurations or weak identity management practices have led to a data breach.
Organisations may mistakenly believe their provider handles more than it does, increasing the risk of non-compliance, security incidents and operational disruptions. Understanding the nuances between SLA commitments and shared security responsibilities is vital to safely leveraging cloud services without undermining resilience or regulatory obligations.
The reality of the SRM and SLAs
The SRM fundamentally shapes the scope and impact of SLAs in cloud environments. Consider the realities of cloud providers’ SRM:
- Cloud providers secure the infrastructure they manage; you ensure what you deploy.
- Customers are responsible for data, configurations, identities and applications.
- Cloud providers often cite the model to deflect blame during breaches.
- Customers must secure the stack themselves, as cloud doesn’t equal safe by default – visibility, policy and controls are still on you.
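The split described above can be pictured as a responsibility matrix keyed by service model. The exact boundary varies by provider and contract, so the mapping below is an illustrative assumption, not any vendor’s published matrix:

```python
# Illustrative shared responsibility matrix: who owns each layer under
# each service model. The exact split varies by provider and contract;
# this mapping is an assumption for the sketch.
RESPONSIBILITY = {
    "physical datacentre":    {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "host infrastructure":    {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "operating system":       {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "application code":       {"IaaS": "customer", "PaaS": "customer", "SaaS": "provider"},
    "identity and access":    {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "data and configuration": {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
}

def owner(layer: str, model: str) -> str:
    """Who is accountable for a given layer under a given service model."""
    return RESPONSIBILITY[layer][model]

# The same layer changes hands depending on the service model ...
print(owner("operating system", "IaaS"))      # customer
print(owner("operating system", "SaaS"))      # provider
# ... but identity and data stay with the customer in every model.
print(owner("identity and access", "SaaS"))   # customer
```

Note the last line: no matter how much of the stack the provider runs, identity and data remain customer responsibilities – which is exactly where SLA assumptions most often go wrong.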
While an SLA guarantees the cloud provider’s commitment to “the security of the cloud”, ensuring the underlying infrastructure’s uptime, resilience and core security, it explicitly does not cover the customer’s responsibilities for “security in the cloud”. This means that even if a provider’s SLA promises 99.99% uptime for their infrastructure, a customer’s misconfigurations, weak identity management or unpatched applications (all part of their responsibility) can still lead to data breaches or service outages, effectively nullifying the perceived security and uptime benefits of the provider’s SLA. Therefore, the SRM directly determines the actual security and availability experienced by the enterprise, making diligent customer-side security practices crucial for realising the full value of any cloud SLA.
Several controls should be a part of a comprehensive approach to gaining access to innovative cloud technology while safeguarding your enterprise:
- Due diligence, gap analysis and risk quantification: Conduct an exhaustive review of the cloud provider’s security posture beyond just the SLA. Request and scrutinise security whitepapers, independent audit reports (eg FedRAMP, SOC 2 Type 2, ISO 27001) and penetration test summaries. Perform a detailed risk assessment that quantifies the potential impact of any SLA shortfalls on your business operations, data privacy and regulatory obligations. Understand precisely where the provider’s “security of the cloud” ends and your “security in the cloud” responsibilities begin, especially concerning data encryption, access controls and incident response.
- Strategic contract negotiation and custom clauses: Engage in direct negotiation with the cloud provider to tailor the SLA to your infrastructure requirements. For significant contracts, cloud providers should be willing to include custom clauses addressing critical security commitments, data handling procedures, incident notification timelines and audit rights that exceed their standard offerings. Ensure the contract includes indemnification clauses for data breaches or service disruptions directly attributable to the provider’s security failures, and clearly define data portability and destruction protocols for an effective exit strategy.
- Implement robust layered security (defence-in-depth): Recognise that the shared responsibility model necessitates your active participation. In addition to the provider’s native offerings, implement additional security controls covering, among others, identity and access management (IAM), cloud security posture management (CSPM), cloud workload protection (CWP), data loss prevention (DLP) and zero trust network access (ZTNA).
- Enhanced security monitoring and integration: Integrate the cloud service’s logs and security telemetry into your enterprise’s security information and event management (SIEM) and security orchestration, automation and response (SOAR) platforms. This centralised visibility and correlation capability allows your security operations centre (SOC) to detect, analyse and respond to threats across both your on-premises and cloud environments, bridging any potential gaps left by the provider’s default monitoring.
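The integration step above usually starts with normalising provider-specific audit records into the common event schema the SIEM expects. A minimal sketch – the field names on both sides are illustrative assumptions, not any vendor’s real schema:

```python
# Minimal sketch of normalising a cloud audit-log record into a flat,
# SIEM-ready event before shipping it for correlation with on-prem
# telemetry. Field names on both sides are illustrative assumptions.

def normalise(cloud_event: dict) -> dict:
    """Map a hypothetical cloud audit record onto a common event schema."""
    return {
        "timestamp": cloud_event["eventTime"],
        "source": "cloud-audit",
        "actor": cloud_event.get("userIdentity", {}).get("arn", "unknown"),
        "action": cloud_event["eventName"],
        "ip": cloud_event.get("sourceIPAddress"),
        # Flag destructive actions so the SOC's correlation rules can
        # prioritise them alongside on-prem alerts.
        "severity": "high" if cloud_event["eventName"].startswith("Delete") else "info",
    }

record = {
    "eventTime": "2025-09-05T10:00:00Z",
    "eventName": "DeleteBucket",
    "sourceIPAddress": "203.0.113.7",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
}
event = normalise(record)
print(event["severity"], event["actor"])
```

In practice this mapping is what closes the visibility gap: once cloud and on-prem events share one schema, a single correlation rule can catch an attack that spans both environments.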
- Proactive governance, risk and compliance (GRC): Update your internal security policies and procedures to explicitly account for the new cloud service and its specific risk profile. Map the provider’s security controls and your compensating controls directly to relevant regulatory requirements (eg GDPR, HIPAA, PCI DSS). Maintain meticulous documentation of your risk assessments, mitigation strategies and any formal risk acceptance decisions.
By adopting these strategies, IT and IT security leaders can confidently embrace innovative cloud technologies, minimising inherent risks and ensuring a strong compliance posture, even when faced with SLAs that don’t initially meet every desired criterion.
The bottom line
Follow the principle of “own your security posture” by implementing customised security policies rather than relying solely on your cloud provider. Treat security as a core component of your infrastructure, not an add-on. Adopt and deploy unified controls to align security strategies across all environments, strengthening defences against the expanding threat landscape and thereby reducing risk and boosting resilience. Shared responsibility doesn’t mean shared blame; it means shared diligence.
Aditya K Sood is vice president of security engineering and AI strategy at Aryaka.
Tech
Europe’s fastest supercomputer to boost AI drive

Europe’s fastest supercomputer Jupiter is set to be inaugurated Friday in Germany with its operators hoping it can help the continent in everything from climate research to catching up in the artificial intelligence race.
Here is all you need to know about the system, which boasts the power of around one million smartphones.
What is the Jupiter supercomputer?
Based at Juelich Supercomputing Center in western Germany, it is Europe’s first “exascale” supercomputer—meaning it will be able to perform at least one quintillion (or one billion billion) calculations per second.
The United States already has three such computers, all operated by the Department of Energy.
Jupiter is housed in a center covering some 3,600 square meters (38,000 square feet)—about half the size of a football pitch—containing racks of processors, and packed with about 24,000 Nvidia chips, which are favored by the AI industry.
Half the 500 million euros ($580 million) to develop and run the system over the next few years comes from the European Union and the rest from Germany.
Its vast computing power can be accessed by researchers across numerous fields as well as companies for purposes such as training AI models.
“Jupiter is a leap forward in the performance of computing in Europe,” Thomas Lippert, head of the Juelich center, told AFP, adding that it was 20 times more powerful than any other computer in Germany.
How can it help Europe in the AI race?
Lippert said Jupiter is the first supercomputer that could be considered internationally competitive for training AI models in Europe, which has lagged behind the US and China in the sector.
According to a Stanford University report released earlier this year, US-based institutions produced 40 “notable” AI models—meaning those regarded as particularly influential—in 2024, compared to 15 for China and just three for Europe.
“It is the biggest artificial intelligence machine in Europe,” Emmanuel Le Roux, head of advanced computing at Eviden, a subsidiary of French tech giant Atos, told AFP.
A consortium consisting of Eviden and German group ParTec built Jupiter.
Jose Maria Cela, senior researcher at the Barcelona Supercomputing Center, said the new system was “very significant” for efforts to train AI models in Europe.
“The larger the computer, the better the model that you develop with artificial intelligence,” he told AFP.
Large language models (LLMs) are trained on vast amounts of text and used in generative AI chatbots such as OpenAI’s ChatGPT and Google’s Gemini.
Nevertheless, with Jupiter packed full of Nvidia chips, Europe remains heavily reliant on US tech.
The dominance of the US tech sector has become a source of growing concern as US-Europe relations have soured.
What else can the computer be used for?
Jupiter has a wide range of other potential uses beyond training AI models.
Researchers want to use it to create more detailed, long-term climate forecasts that they hope can more accurately predict the likelihood of extreme weather events such as heat waves.
Le Roux said that current models can simulate climate change over the next decade.
“With Jupiter, scientists believe they will be able to forecast up to at least 30 years, and in some models, perhaps even up to 100 years,” he added.
Others hope to simulate processes in the brain more realistically, research that could be useful in areas such as developing drugs to combat diseases like Alzheimer’s.
It can also be used for research related to the energy transition, for instance by simulating air flows around wind turbines to optimize their design.
Does Jupiter consume a lot of energy?
Yes, Jupiter will require on average around 11 megawatts of power, according to estimates—equivalent to the energy used to power thousands of homes or a small industrial plant.
But its operators insist that Jupiter is the most energy-efficient among the fastest computer systems in the world.
It uses the latest, most energy-efficient hardware, has water-cooling systems and the waste heat that it generates will be used to heat nearby buildings, according to the Juelich center.
© 2025 AFP
Citation:
Europe’s fastest supercomputer to boost AI drive (2025, September 5)
retrieved 5 September 2025
from https://techxplore.com/news/2025-09-europe-fastest-supercomputer-boost-ai.html