Tech

Cracking a long-standing weakness in a classic algorithm for programming reconfigurable chips



Credit: Pixabay/CC0 Public Domain

Researchers from EPFL, AMD, and the University of Novi Sad have uncovered a long-standing inefficiency in the algorithm that programs millions of reconfigurable chips used worldwide, a discovery that could reshape how future generations of these chips are designed and programmed.

Many industries, including telecoms, automotive, and aerospace, rely on a special breed of chip called the Field-Programmable Gate Array (FPGA). Unlike traditional chips, FPGAs can be reconfigured almost endlessly, making them invaluable in fast-moving fields where designing a custom chip would take years and cost a fortune. But this flexibility comes with a catch: FPGA efficiency depends heavily on the software used to program them.

Since the late 1990s, an algorithm known as PathFinder has been the backbone of FPGA routing. Its job: connecting thousands of tiny circuit components without creating overlaps.

For decades, it worked so well that it became the standard. However, as circuits grew larger, engineers began encountering frustrating slowdowns and occasional outright failures. Designs that should have worked were often labeled “unroutable.”

Now, with colleagues from the University of Novi Sad and the technology company AMD, researchers from the Parallel Systems Architecture Laboratory (PARSA) in EPFL's School of Computer and Communication Sciences have come one step closer to untangling the inner workings of this classic algorithm.

In their paper, which received the Best Paper Award at the 33rd IEEE International Symposium on Field-Programmable Custom Computing Machines, they revealed why these failures happen and how PathFinder’s limits can be overcome.

Cracks in the algorithm

“In fact, it’s not surprising that PathFinder sometimes fails,” explained Shashwat Shrivastava, Ph.D. student with PARSA and first author of the paper.

“Very early on, researchers showed that the problem behind FPGA routing is extremely hard. Later, the creators of the original algorithm, together with a few collaborators, found cases where PathFinder would never succeed—but they noted such cases wouldn’t appear in practice.”

For decades, it seemed they were correct—PathFinder worked surprisingly well.

“PathFinder worked so well, in fact, that when it failed, people rarely questioned the algorithm. Instead of venturing inside to see what was going on, they tweaked its parameters, modified circuits, or switched to larger FPGAs,” added Stefan Nikolić, an EPFL alumnus and now a professor at the University of Novi Sad.

“Part of the reason for this is that it is rather difficult to understand what PathFinder is actually doing on examples of practical importance. Modern circuits are so large that their signals form veritable on-chip jungles.”

Enter the forest

“So, we really needed to look at the individual trees in that jungle,” continued Shrivastava, “and I really mean trees. Each signal—a connection that carries information between circuit components—must reach multiple destinations without overlapping other signals. FPGA routing is essentially about building one tree for each signal on the chip.”
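The mechanism behind PathFinder is negotiated congestion: every signal is rerouted over several iterations, and routing resources claimed by more than one signal become progressively more expensive until the overlaps dissolve. The following is a toy sketch of that idea, not the paper's implementation: it uses single-sink nets for brevity (real routers build multi-sink trees), and the node names, capacities, and cost constants are illustrative assumptions.

```python
import heapq

def route_net(graph, source, sink, cost):
    """Dijkstra-style search from source to sink; cost(node) includes congestion."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == sink:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + cost(v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if sink not in dist:
        return None
    # Reconstruct the path from sink back to source.
    path, node = [sink], sink
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]

def pathfinder(graph, nets, max_iters=30):
    """Negotiated congestion: reroute every net each iteration, raising the
    cost of overused nodes until no node is shared by two nets."""
    history = {n: 0.0 for n in graph}
    routes = {}
    for it in range(max_iters):
        usage = {n: 0 for n in graph}
        pres_fac = 1.0 + it  # present-congestion penalty grows each iteration
        for name, (src, snk) in nets.items():
            def cost(v, usage=usage):
                # Each node has capacity 1; occupied nodes cost extra.
                return 1.0 + history[v] + pres_fac * usage[v]
            path = route_net(graph, src, snk, cost)
            if path is None:
                return None
            routes[name] = path
            for v in path:
                usage[v] += 1
        shared = [n for n, u in usage.items() if u > 1]
        if not shared:
            return routes  # legal solution: no overlaps remain
        for n in shared:
            history[n] += 1.0  # accumulate history cost on congested nodes
    return None
```

On a small graph where two nets both prefer the same middle node, the first iteration produces an overlap; the rising congestion cost then pushes one net onto a detour, yielding a legal solution.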

While working on another project that relied on PathFinder, the team kept seeing results that defied intuition. At first, they blamed external factors, not the algorithm itself. Eventually, they realized they needed controlled examples: small, tricky cases where a solution definitely existed, and in which PathFinder should succeed.

“We needed real, practical examples, and lots of them, to understand what was really going on,” Shrivastava explains. “So, we built a framework to automatically extract small, hard problems from real circuits. Watching how PathFinder struggled with these helped us uncover issues that had remained hidden for a very long time.”

Power in partnership

“This breakthrough would have been much harder without industry support,” said Mirjana Stojilović, Shrivastava’s Ph.D. advisor. “From the start, we collaborated with Chirag Ravishankar and Dinesh Gaitonde from AMD. They helped us model FPGAs as closely as possible to commercial devices, ensuring our findings had real-world impact.”

Once the framework was ready, things moved quickly. The team found that PathFinder often built routing trees larger than necessary, increasing the risk of overlaps. The problem came from the order in which it created and added new branches to the trees.

“In retrospect, this is intuitive, but somehow it went largely unnoticed for many years,” Shrivastava said. “Our first solution was simple: try different orders and pick the one that results in the smallest tree. Experimentally, it worked surprisingly well.”
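The fix described here, trying different orders in which branches are added and keeping the smallest resulting tree, can be sketched on a toy routing graph. This is an illustration of the idea, not the authors' code: the BFS-based tree-growing routine, the unweighted graph, and the node names are all assumptions made for brevity.

```python
from itertools import permutations
from collections import deque

def path_from_tree(graph, tree_nodes, sink):
    """BFS outward from every node already in the tree until the sink is found."""
    prev = {}
    seen = set(tree_nodes)
    q = deque(tree_nodes)
    while q:
        u = q.popleft()
        if u == sink:
            path = [u]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]  # starts at a tree node, ends at the sink
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def grow_tree(graph, source, sinks):
    """Greedily attach each sink to the tree built so far, in the given order."""
    tree = {source}
    for s in sinks:
        path = path_from_tree(graph, tree, s)
        if path is None:
            return None
        tree.update(path)
    return tree

def smallest_tree(graph, source, sinks):
    """Try every sink order and keep the smallest resulting routing tree."""
    best = None
    for order in permutations(sinks):
        tree = grow_tree(graph, source, list(order))
        if tree is not None and (best is None or len(tree) < len(best)):
            best = tree
    return best
```

Even on a five-node example the order matters: attaching a nearby sink first can let a later, distant sink reuse its branch, producing a smaller tree than the reverse order would.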

The team is now exploring more scalable solutions. “I am especially proud that Summer@EPFL interns have been contributing significantly. One of them, Sun Tanaka, is also a co-author of the paper,” added Stojilović.

“Our discovery could reshape how millions of FPGAs are programmed and influence the design of future generations of these reconfigurable chips.”

More information:
Shashwat Shrivastava et al, Guaranteed Yet Hard to Find: Uncovering FPGA Routing Convergence Paradox, 2025 IEEE 33rd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (2025). DOI: 10.1109/fccm62733.2025.00060

Citation:
Cracking a long-standing weakness in a classic algorithm for programming reconfigurable chips (2025, October 3)
retrieved 3 October 2025
from https://techxplore.com/news/2025-10-weakness-classic-algorithm-reconfigurable-chips.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.






TAG Heuer Has Dropped New Polylight-Powered F1s



No doubt looking to find some breathing space after the hubbub of Watches and Wonders last week, TAG Heuer has dropped an update to its 2025 revamped collection of the brand’s iconic plastic-cased 1980s watch, the “Formula 1.”

The five new pieces are called the “pastel collection” by TAG, and all are built on the same solar-powered Formula 1 Solargraph 38 mm that launched in March last year. Two models feature a sandblasted stainless steel case, while the remaining three have cases made from TAG’s proprietary bio-polyamide plastic, Polylight.

It’s these Polylight versions that, for WIRED, are the stars of the new mini collection. Coming in pastel blue, beige, and pink, and sporting case-matching rubber straps and bidirectional-rotating Polylight bezels, they reference classic F1 designs that made the line iconic in the first place.

The new Polylight beige.

Courtesy of TAG Heuer


The “pastel green” steel F1 Solargraphs.

Courtesy of TAG Heuer

The stainless steel models have a 3-link sandblasted steel bracelet and either a “pastel green” or “lavender blue” dial with matching Polylight bezels. The dials on both watches also see eight diamonds replace the circular hour markers. TAG says these models add “a touch of refinement for those seeking sophistication,” but considering these “luxury” F1s will retail at $2,800, as opposed to the already punchy $1,950 full Polylight versions, our pick is most definitely the plastic pieces.

Not only do these blue, beige, and pink versions pleasingly hark back to vintage F1 designs—though now 38 mm in size instead of the original 35 mm—but also, just like all F1 Solargraphs, they come equipped with screw-down crowns and casebacks, making for 100 meters of water resistance and ensuring these will serve well as dive and sports watches. My recommendation? Go for the pink, it looks superb on the wrist. The beige is a very close second.


Pretty in pink: The new Polylight pink F1 is limited to 1,110 pieces for the 110th anniversary of the Indy 500.

Photograph: Jeremy White




I’ve Tested Gaming Laptops for Over a Decade. This Is What I Think You Should Buy



Lenovo Legion 7i Gen 10 (16 Inch, Intel)

Now, there’s another class of high-end gaming laptop that focuses more on performance than being thin or portable. The Lenovo Legion 7i Gen 10 is one of my favorites in this class, featuring a beautiful white chassis and glossy OLED display. Unlike some OLED displays, the Legion 7i’s screen can be cranked up to over 1,000 nits of brightness. The result is some really splendid HDR performance that brings games to life. HDR is a powerful way of improving the visuals of your games without a performance cost. The Legion 7i Gen 10 is one of the very best in this regard.

It’s still fairly thin at 0.7 inches thick too, while a lot of the ports are found on the back. It’s the definition of a “clean” gaming laptop. It’s no slouch when it comes to performance either, offering either the RTX 5070 Ti or RTX 5080 for graphics.

Cheap Gaming Laptops That Are Worth It

No gaming laptops worth buying are actually cheap. High-refresh rate displays and discrete graphics will always make them more expensive than standard laptops. But as you get closer to $1,000, there is one laptop I always come back to: the Lenovo LOQ 15. Pronounced “Lock,” this Lenovo subbrand is known for cutting the fluff and focusing on giving gamers the performance they need at an affordable price. No laptop does that better than the LOQ 15. Many laptop manufacturers sell their RTX 5060 configurations for hundreds of dollars more. In reality, if you’re shopping around $1,000, there’s no reason not to buy the LOQ 15. Just do it.

If you do want to save some extra cash, there is another option that is cheaper than the LOQ 15 with a few compromises in key areas. The Acer Nitro V 16 is that laptop, which comes with an RTX 5050. This was as affordable as $600 at one point last year—before laptop prices rose due to the ongoing memory shortage—but it remains the only laptop cheaper than the Lenovo LOQ 15 that’s actually worth it. It’s fairly powerful for the RTX 5050, and while the screen is pretty shoddy, it’s not a bad-looking laptop. The one big caveat is that the 135-watt power supply it comes with doesn’t deliver quite enough power to keep it charged in Performance mode. Read more about this issue in my review, as it’s important to know about if you’re planning to buy it.

There are other cheap gaming laptops out there I’ve tested, such as the MSI Cyborg A15, but either the Acer Nitro V 16 or Lenovo LOQ 15 are better, cheaper options. You will also find lots of gaming laptops under $1,000 that use older graphics cards, such as the RTX 4050 or 3050. In general, I’d recommend staying away from these. They’re only one or two generations back, but remember: Nvidia only releases new laptop graphics cards every couple of years. So, an RTX 4050 laptop may be well over two years old already, and an RTX 3050 is over five years old. Not only do you get worse graphics performance, but these laptops are also much more likely to need replacing sooner.

Experimental Stuff

One of the exciting things about the world of gaming laptops right now is the experimentation. While clamshell gaming laptops with a conventional Nvidia GPU are the most standard way to go, there are a few different ways to take your PC games on the go that stretch the boundaries. You might consider a gaming handheld, for example, like the Steam Deck or Xbox Ally X. These handhelds have their fans, and while you can’t exactly do your homework on them, they’re great on couches, trains, and planes.




Sans Institute preps live systems for Nato cyber exercise | Computer Weekly



The Sans Institute, one of the world’s pre-eminent cyber security certification and training bodies, is to play a key role in the annual Nato Cooperative Cyber Defence Centre of Excellence (CCDCOE) Locked Shields exercise, held in Tallinn, Estonia, through the provision of a fully functional power generation system that participating teams will attempt to defend during the game.

This year marks the 16th running of the Locked Shields live fire security defence exercise, which unites blue teams from across Nato’s 32 member states, as well as other allies and observers.

This year, however, Sans has been entrusted with the task of building a genuine, operational cyber range, as opposed to creating a simulation. It is using real industrial control systems (ICSs) and physical equipment that 16 teams of defenders will have to protect while under live cyber attack, with the decisions they make having an immediate physical impact on a national-scale power grid.

Nato and Sans said the aim of the game is to close the gap between sandboxed, classroom-based cyber security training and real-world operational readiness, which, amid the cyber dimension to the energy crisis precipitated by the war in Iran and spillover from the ongoing war in Ukraine, has never been more important.

“We are putting teams in an environment where cyber decisions directly impact physical operations,” said Felix Schallock, who leads the initiative at the Sans Institute. “If you lose visibility, if you lose control, the power generation can be affected. That’s the reality operators face every day. That’s what we’re training for.”

Nato CCDCOE director Tõnis Saar added: “Locked Shields is a technically advanced exercise that challenges participants to defend the critical infrastructure systems modern societies depend on. As much of this critical infrastructure is owned and operated by the private sector, strong public-private collaboration is essential. Industry partners such as Sans Institute play a vital role in making the exercise as realistic and impactful as possible.”

Hybrid architecture

The Sans Institute’s cyber range comprises close to 70 physical ICS devices, with programmable logic controllers (PLCs), human-machine interfaces (HMIs), operator and engineering workstations, 100 virtual machines (VMs) and interconnected systems within the wider CCDCOE environment, all supported by live network infrastructure, the whole forming a hybrid information and operational technology (IT/OT) architecture.

During the exercise, blue teamers will be set the task of defending the “energy provider” while coming under sustained attack from opposing red teams.

The goal is to demonstrate that maintaining a reliable generation system isn’t some metric on a scorecard, but rather the core mission. Success will therefore entail more than just spotting and arresting threats – it will also demand operational discipline: maintaining uninterrupted power generation, preserving comms between IT and OT networks, guaranteeing visibility and control of ICS technology, and avoiding any destabilising disruptions.


Actions will be visible, rippling through the systems in real time, so participants won’t just see alerts, they will see turbines being throttled, breakers being opened or closed, and generation capacity being affected. As such, failure will be immediate and visible – missteps will degrade system performance, disrupt or halt power generation, or simulate national-level consequences.

Tim Conway, Sans Institute fellow and ICS curriculum lead, explained: “We’re showing teams how to defend infrastructure that can’t simply be rebooted or patched on the fly. You have to think like an operator, not just a defender. That mindset shift is what makes this environment so powerful.”

Sans Institute CEO James Lyne expressed great pride in what the Sans team has built for Locked Shields this year. “The scenarios these critical initiatives prepare for are playing out in the world – national espionage, cyber integrated to kinetic attacks and warfare, and retaliation attacks,” he said.

“Throw in AI or machine speed attackers and the need for defenders to adapt, and you have the most disruptive period in cyber security in 20 years. We are privileged to help our allies be ready and continuously improving to secure the future. The people defending our critical infrastructure deserve training that takes the threat as seriously as they do,” he added.

Schallock said the exercise was about preparing teams for protecting the systems that matter most. “Cyber security training must reflect the environment defenders are protecting. We’re not just teaching cyber security, we’re showing how to defend a nation’s infrastructure when it counts.”


