Vodafone goes for Dell in major Open RAN network modernisation | Computer Weekly

As part of its plan to drive innovation, optimise network performance and deliver “exceptional” customer experiences, operator Vodafone has chosen Dell Technologies as a strategic infrastructure provider for its five-year Open RAN deployment programme across Europe.

The operator has been expanding its Open RAN footprint across Europe to improve connectivity for customers and create “one of the largest” radio networks of its kind. Vodafone has been ramping up its activities in the field of Open RAN since 2020.

In April 2021, it opened an Open RAN test and integration lab at its Newbury technology campus in the UK. In summer 2021, it announced that it was working with end-to-end cloud-native communications network software firm Mavenir on a small cell solution based on Open RAN technology to provide indoor connectivity for business customers.

The operator is aiming to have 30% of its European masts based on Open RAN technology by 2030 and is already deploying the technology commercially. This includes 2,500 Open RAN sites in the UK, the first large-scale deployment of its kind in Europe, as well as sites in Romania.

The new initiative is regarded as critical for delivering advanced 5G capabilities, improving network efficiency and fostering innovation. Dell said that, using its solutions, Vodafone can build a highly automated, zero-touch network fabric on a foundation that will help the operator upgrade its roll-out, reduce operational complexity and build more sustainable, upgradeable networks for the future.

As part of the programme, Dell will provide its PowerEdge XR8000 series servers, including the PowerEdge XR8620t and the latest-generation PowerEdge XR8720t, powered by the Intel Xeon 6 SoC. These servers are said to be engineered to support high-performance requirements with industry-leading consolidation and high fronthaul port density. Dell said this allows for a more efficient and powerful network infrastructure with a lower total cost of ownership, targeting a single server per site.

In addition, Vodafone intends to use the Dell Telecom Infrastructure Automation Suite (DTIAS) to provide the Infrastructure Management Service (IMS) within Vodafone’s Open RAN architecture. DTIAS is key to automating infrastructure lifecycle management at scale, optimising performance, simplifying operations and speeding up the deployment of cloud-native, programmable networks.

Commenting on the use of its technology, Dell Technologies senior vice-president Dennis Hoffman said: “Our collaboration with Vodafone reflects our long-standing commitment to advancing open networks and supporting the telecom industry in achieving its most ambitious goals.

“With purpose-built infrastructure, automation and AI-driven solutions, we’re helping to build intelligent, resilient networks that unlock new opportunities across Europe, from improving network performance to creating new revenue streams. Together, we’re shaping the future of connectivity and driving progress for customers and communities worldwide.”

Francisco Martin, director of mobile access engineering at Vodafone, added: “We are focused on delivering the best experience for our customers by investing in new technologies and architectures, including 5G Advanced, Open RAN, direct-to-device satellite and RAN automation on our journey towards building robust and autonomous networks.

“Working with Dell reinforces this commitment, strengthening our Open RAN Network with Dell solutions, and providing a foundation for exceptional customer services and innovation.”



Researchers advance cross-modality smart security with transformer model


Illustration of the VX-ReID task and main idea. Credit: Wang Hongqiang

A research team led by Professor Wang Hongqiang from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences proposed a Global-Local Alignment Attention (GLAA) model based on an Asymmetric Siamese Transformer (AST), which markedly enhances the performance of Visible-X-ray cross-modality package re-identification tasks.

This study was published in IEEE Transactions on Information Forensics and Security.

Visible-X-ray cross-modality re-identification is a core technology in security inspection. The challenge lies in the significant pixel-level differences between the two modal images, which make it difficult for traditional methods to extract robust cross-modality invariant features.

In this study, the researchers incorporated an asymmetric design concept into the Siamese Transformer architecture by proposing a Cross-Modality Asymmetric Siamese Transformer (CAST) structure. Embedding LayerNorm layers and modality-aware encoding in one branch effectively enhances the model’s ability to extract cross-modality invariant features.

They also designed a Global-Local Cross-modality Alignment Attention module. By modeling the interaction between global and local features, it enhances fine-grained feature representation while addressing the spatial misalignment issues in cross-modality images.
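The paper's own implementation is not reproduced in this summary, but the global-local interaction it describes can be sketched as scaled dot-product attention in which a pooled global descriptor re-weights local patch features. Everything below (function names, dimensions, the mean-pooling choice) is an illustrative assumption, not the authors' code:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def global_local_attention(local_tokens):
    """Pool a global descriptor from local patch features, then
    re-weight the patches by their alignment with that global context.

    local_tokens: list of equal-length feature vectors, one per patch.
    Returns a single fused vector (a convex combination of the patches).
    """
    n, d = len(local_tokens), len(local_tokens[0])
    # Crude global descriptor: mean-pool the local tokens.
    global_tok = [sum(t[i] for t in local_tokens) / n for i in range(d)]
    # Scaled dot-product scores between each patch and the global token.
    scores = [dot(t, global_tok) / math.sqrt(d) for t in local_tokens]
    weights = softmax(scores)
    return [sum(w * t[i] for w, t in zip(weights, local_tokens))
            for i in range(d)]

# Three toy 2-D "patch" features from one modality branch.
patches = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
fused = global_local_attention(patches)
print(fused)  # weighted toward the patch best aligned with the global mean
```

In the real model the global and local features would come from the two transformer branches and the weighting would be learned; the point of the sketch is only the mechanism of letting global context arbitrate among spatially misaligned local features.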

Experimental results show that the model’s key metrics on a dedicated cross-modality package re-identification dataset improve significantly over current state-of-the-art methods, providing reliable technical support for intelligent security inspection.

This work is the first to introduce the Transformer architecture into the cross-modality package re-identification task, breaking through the limitations of existing methods that rely on symmetric convolutional networks, according to the researchers.

More information:
Yonggan Wu et al, An Asymmetric Siamese Transformer With Global-Local Alignment Attention for Visible-X-Ray Cross-Modality Package Re-Identification, IEEE Transactions on Information Forensics and Security (2025). DOI: 10.1109/tifs.2025.3592540

Citation:
Researchers advance cross-modality smart security with transformer model (2025, October 30)
retrieved 30 October 2025
from https://techxplore.com/news/2025-10-advance-modality-smart.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Giant, Spooky Animatronics Are 75 Percent Off at the Home Depot



I know you’ve seen it. The glowing eyes. The gangly frame that should not be able to stand, propped by rods unseen in the dark.

It is Skelly, the Home Depot skeleton—the most fashionable Home Depot product of probably the past decade. If you live in America, this skeleton presides over a yard near you. And newly this year, a smaller, 6.5-foot “Ultra Skelly” is outfitted with motion sensors and motors to make life truly weird—and also act as a strange alarm system against package thieves and hungry opossums.

Anyway, it’s usually well north of $200. But because Halloween is pretty much already happening, Skelly and its entire skeleton brood of giant cat and dog are all 75 percent off.

Which, finally, is a price I’m willing to pay. I have secretly coveted this skeleton and its kin, the comically grim watchmen of American October. But I, like my father before me and his father before him, am a cheapskate about all things but food and drink, and will talk myself out of anything that’s not a) edible b) potable or c) verifiably “a deal.”

Well, here I am, world. This is a deal. Ultra Skelly is $70. The sitting Skelly dog is $63, not $249. The 5-foot-long Skelly cat is a mere $50. Beware the Skelly cat, my friend! The eyes that light, the claws that do nothing in particular!
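Only the Skelly dog is quoted with both a sale and a list price, so it is the one markdown that can be checked; the arithmetic lands close to the advertised 75 percent:

```python
# The Skelly dog: $63 on sale, down from a $249 list price.
list_price, sale_price = 249, 63
discount = 1 - sale_price / list_price
print(f"{discount:.1%} off")  # 74.7% off
```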

Availability is, let’s say, scarce. Skelly is already out of stock for delivery from The Home Depot, at least in my zip code: Just the dog and cat can speed their way through the night to join you before Halloween.





As AI grows smarter, it may also become increasingly selfish




New research from Carnegie Mellon University’s School of Computer Science shows that the smarter the artificial intelligence system, the more selfishly it will act.

Researchers in the Human-Computer Interaction Institute (HCII) found that large language models (LLMs) that can reason possess selfish tendencies, do not cooperate well with others and can be a negative influence on a group. In other words, the stronger an LLM’s reasoning skills, the less it cooperates.

As humans use AI to resolve disputes between friends, provide marital guidance and answer other social questions, models that can reason might provide guidance that promotes self-seeking behavior.

“There’s a growing trend of research called anthropomorphism in AI,” said Yuxuan Li, a Ph.D. student in the HCII who co-authored the study with HCII Associate Professor Hirokazu Shirado. “When AI acts like a human, people treat it like a human. For example, when people are engaging with AI in an emotional way, there are possibilities for AI to act as a therapist or for the user to form an emotional bond with the AI. It’s risky for humans to delegate their social or relationship-related questions and decision-making to AI as it begins acting in an increasingly selfish way.”

Li and Shirado set out to explore how AI reasoning models behave differently than nonreasoning models when placed in cooperative settings. They found that reasoning models spend more time thinking, breaking down problems, self-reflecting and incorporating stronger human-based logic in their responses than nonreasoning AIs.

“As a researcher, I’m interested in the connection between humans and AI,” Shirado said. “Smarter AI shows less cooperative decision-making abilities. The concern here is that people might prefer a smarter model, even if it means the model helps them achieve self-seeking behavior.”

As AI systems take on more collaborative roles in business, education and even government, their ability to act in a prosocial manner will become just as important as their capacity to think logically. Overreliance on LLMs as they are today may negatively impact human cooperation.

To test the link between reasoning models and cooperation, Li and Shirado ran a series of experiments using economic games that simulate social dilemmas between various LLMs. Their testing included models from OpenAI, Google, DeepSeek and Anthropic.

Economic games used. Cooperation games ask players whether to incur a cost to benefit others, while punishment games ask whether to incur a cost to impose a cost on non-cooperators. In each scenario, the language model assumes the role of Player A. Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.17720

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to decide between two options: contribute all 100 points to a shared pool, which is then doubled and distributed equally, or keep the points.
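The two-player version of that game can be written out directly from the rules as described, which makes the dilemma explicit: mutual contribution doubles everyone's points, but a lone defector does even better. This is an illustration of the game's payoff structure, not the researchers' experimental code:

```python
def public_goods_payoffs(a_contributes, b_contributes,
                         endowment=100, multiplier=2):
    """Payoffs for one two-player public goods round.

    Each player either contributes their whole endowment to a shared
    pool or keeps it; the pool is doubled and split equally.
    """
    pool = endowment * (a_contributes + b_contributes)
    share = pool * multiplier / 2
    a = (0 if a_contributes else endowment) + share
    b = (0 if b_contributes else endowment) + share
    return a, b

print(public_goods_payoffs(True, True))    # (200.0, 200.0): both better off
print(public_goods_payoffs(False, True))   # (200.0, 100.0): the defector wins
print(public_goods_payoffs(False, False))  # (100.0, 100.0): status quo
```

Keeping one's points is always individually at least as good as contributing, yet mutual contribution beats mutual defection, which is exactly the tension the reasoning models resolved selfishly.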

Nonreasoning models chose to share their points with the other players 96% of the time. The reasoning model only chose to share its points 20% of the time.

“In one experiment, simply adding five or six reasoning steps cut cooperation nearly in half,” Shirado said. “Even reflection-based prompting, which is designed to simulate moral deliberation, led to a 58% decrease in cooperation.”

Shirado and Li also tested group settings, where models with and without reasoning had to interact.

“When we tested groups with varying numbers of reasoning agents, the results were alarming,” Li said. “The reasoning models’ selfish behavior became contagious, dragging down cooperative nonreasoning models by 81% in collective performance.”

The behavior patterns Shirado and Li observed in reasoning models have important implications for human-AI interactions going forward. Users may defer to AI recommendations that appear rational, using them to justify their decision to not cooperate.

“Ultimately, an AI reasoning model becoming more intelligent does not mean that model can actually develop a better society,” Shirado said.

This research is particularly concerning given that humans increasingly place more trust in AI systems. Their findings emphasize the need for AI development that incorporates social intelligence, rather than focusing solely on creating the smartest or fastest AI.

“As we continue advancing AI capabilities, we must ensure that increased power is balanced with prosocial behavior,” Li said. “If our society is more than just a sum of individuals, then the AI systems that assist us should go beyond optimizing purely for individual gain.”

Shirado and Li will deliver a presentation based on their paper, “Spontaneous Giving and Calculated Greed in Language Models,” at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) next month in Suzhou, China. The work is available on the arXiv preprint server.

More information:
Yuxuan Li et al, Spontaneous Giving and Calculated Greed in Language Models, arXiv (2025). DOI: 10.48550/arxiv.2502.17720

Journal information:
arXiv


Citation:
As AI grows smarter, it may also become increasingly selfish (2025, October 30)
retrieved 30 October 2025
from https://techxplore.com/news/2025-10-ai-smarter-selfish.html





