Tech

AI method reconstructs 3D scene details from simulated images using inverse rendering

Layout generation. a, Images for two scenes observed by a single camera. b, Test-time-optimized inverse-rendered objects. c, Bird's-eye-view (BEV) layouts of the scenes. In the BEV layout (a common representation for autonomous driving tasks), black boxes represent the ground truth and colored boxes represent predicted BEV boxes. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x

Over the past decades, computer scientists have developed many computational tools that can analyze and interpret images. These tools have proved useful for a broad range of applications, including robotics, autonomous driving, health care, manufacturing and even entertainment.

Most of the best-performing computer vision approaches employed to date rely on so-called feed-forward neural networks: computational models that process input images step by step, ultimately making predictions about them.

While some of these models perform well when tested on data resembling what they saw during training, they often do not generalize well to new images and different scenarios. In addition, their predictions and the patterns they extract from images can be difficult to interpret.

Researchers at Princeton University recently developed a new inverse rendering approach that is more transparent and could also interpret a wide range of images more reliably. The new approach, introduced in a paper published in Nature Machine Intelligence, relies on a generative artificial intelligence (AI)-based method that simulates the process of image creation and optimizes that simulation by gradually adjusting the model's inputs.

“Generative AI and neural rendering have transformed the field in recent years for creating novel content: producing images or videos from scene descriptions,” Felix Heide, senior author of the paper, told Tech Xplore. “We investigate whether we can flip this around and use these generative models for extracting the scene descriptions from images.”







Video of tracking results of the team's method. A demonstration of the performance of our proposed tracking method based on inverse neural rendering for a sample of diverse scenes from the nuScenes dataset and the Waymo Open Dataset. We overlay the observed image with the rendered objects through alpha blending with a weight of 0.4. Object renderings are defined by the averaged latent embeddings z_k,EMA and the tracked object state y_k. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x

The new approach developed by Heide and his colleagues relies on a so-called differentiable rendering pipeline: a simulated image-formation process that operates on compressed scene representations produced by generative AI models.

“We developed an analysis-by-synthesis approach that allows us to solve vision tasks, such as tracking, as test-time optimization problems,” explained Heide. “We found that this method generalizes across datasets, and in contrast to existing supervised learning methods, does not need to be trained on new datasets.”

Essentially, the researchers' method works by placing 3D object models in a virtual scene that depicts a real-world setting. These object models are produced by a generative AI model from randomly sampled 3D scene parameters.

“We then render all these objects back together into a 2D image,” said Heide. “Next, we compare this rendered image with the real observed image. Based on how different they are, we backpropagate the difference through both the differentiable rendering function and the 3D generation model to update its inputs. In just a few steps, we optimize these inputs to make the rendered match the observed images better.”
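To make this loop concrete, below is a minimal, self-contained PyTorch sketch of the analysis-by-synthesis procedure Heide describes. The `generator` and `render` functions here are toy stand-ins for the paper's generative object model and differentiable renderer (illustrative assumptions, not the authors' code); the point is that the gradient of the image difference flows back to the scene parameters, which are the only quantities updated at test time.

```python
import torch

# Toy stand-ins (illustrative assumptions, not the paper's actual models):
# - generator: maps a latent code z to a flattened "object" representation
# - render:    differentiably turns that representation into a 2D image
generator = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 32 * 32)
)

def render(obj_flat, pose):
    # Placeholder differentiable renderer: reshape and apply a crude scalar "pose".
    return torch.sigmoid(obj_flat.view(32, 32) + pose)

observed = torch.rand(32, 32)  # stand-in for the real observed camera image

# Test-time optimization: refine the *inputs* (latent code and object state),
# not the network weights.
z = torch.randn(64, requires_grad=True)    # randomly sampled scene parameters
pose = torch.zeros(1, requires_grad=True)  # object state (toy one-number pose)
optimizer = torch.optim.Adam([z, pose], lr=0.05)

for step in range(100):
    rendered = render(generator(z), pose)                    # synthesis
    loss = torch.nn.functional.mse_loss(rendered, observed)  # compare to image
    optimizer.zero_grad()
    loss.backward()   # backpropagate through renderer AND generator to the inputs
    optimizer.step()  # nudge z and pose so the rendering matches the image better
```

In the actual system, the generator produces full 3D objects, the renderer is a differentiable neural renderer, and multiple objects are optimized jointly against each observed frame, but the control flow is the same: a few gradient steps per image.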

Optimizing 3D models through inverse neural rendering. From left to right: the observed image, initial random 3D generations, and three optimization steps that refine these to better match the observed image. The observed images are faded to show the rendered objects clearly. The method effectively refines object appearance and position, all done at test time with inverse neural rendering. Credit: Ost et al.

Generalization of 3D multi-object tracking with inverse neural rendering. The method generalizes directly across datasets such as the nuScenes and Waymo Open Dataset benchmarks without additional fine-tuning, despite being trained on synthetic 3D models only. The observed images are overlaid with the closest generated object and tracked 3D bounding boxes. Credit: Ost et al.

A notable advantage of the newly proposed approach is that it allows generic 3D object generation models trained purely on synthetic data to perform well across a wide range of datasets of images captured in real-world settings. In addition, its outputs are far more explainable than the predictions of conventional feed-forward machine learning models.

“Our inverse rendering approach for tracking works just as well as learned feed-forward approaches, but it provides us with explicit 3D explanations of its perceived world,” said Heide.

“The other interesting aspect is the generalization capabilities. Without changing the 3D generation model or training it on new data, our 3D multi-object tracking through Inverse Neural Rendering works well across different autonomous driving datasets and object types. This can significantly reduce the cost of fine-tuning on new data or at least work as an auto-labeling pipeline.”

This recent study could soon help to advance AI models for computer vision, improving their performance in real-world settings while also increasing their transparency. The researchers now plan to continue improving their method and start testing it on more computer vision-related tasks.

“A logical next step is the expansion of the proposed approach to other perception tasks, such as 3D detection and 3D segmentation,” added Heide. “Ultimately, we want to explore if inverse rendering can even be used to infer the whole 3D scene, and not just individual objects. This would allow our future robots to reason and continuously optimize a three-dimensional model of the world, which comes with built-in explainability.”

Written by Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.

More information:
Julian Ost et al, Towards generalizable and interpretable three-dimensional tracking with inverse neural rendering, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x.


Tech

Prague’s City Center Sparkles, Buzzes, and Burns at the Signal Festival

Thanks to a mention in Dan Brown's new novel, The Secret of Secrets, the festival has gained even more global recognition. Just a few weeks after the release of Brown's bestseller, set in contemporary Prague, visitors were able to see for themselves what drew the popular writer to the festival, the largest showcase of digital art in the Czech Republic and Central Europe. The Signal Festival has a cameo in one passage, when the novel's protagonist recalls attending an event at the 2024 edition.

“We’re happy about it,” festival director Martin Pošta says about the mention. “It’s a kind of recognition.” Not that the event needed promotion, even in one of the most anticipated novels of recent years. The organizers have yet to share the number of visitors to the festival this year, but the four-day event typically attracts half a million visitors.

On the final day, there was a long queue in front of the monumental installation Tristan’s Ascension by American video art pioneer Bill Viola before it opened for the evening, even though it was a ticketed event. In the Church of St. Salvator in the Convent of St. Agnes, visitors could watch a Christ-like figure rise upwards, streams of water defying gravity along with him, all projected on a huge screen.

The festival premiere took place on the Vltava River near the Dvořák Embankment, where Taiwan's Peppercorns Interactive Media Art presented Tzolk'in Light, a projection on a cloud of mist. While creators of other light installations have to contend with the challenges of buildings—their irregular surfaces, decorative details, and awkward cornices—projecting onto water droplets poses a challenge of a different kind: the artists must give up control over the resulting image. The shape and depth of the Peppercorns' work depended on the wind at any given moment, which determined how much of the scene was revealed to viewers and how much was simply blown away. The reward, however, was an extraordinary 3D spectacle reminiscent of a hologram—something that can't be achieved with video projections on static, flat buildings.

Another premiere event was a projection on the tower of the Old Town Hall, created for the festival by the Italian studio mammasONica. It transformed the 230-foot structure into a kaleidoscope of blue, green, red, and white surfaces. A short distance away, on Republic Square, Peppercorns had another installation. On a circular LED installation, they projected a work entitled Between Mountains and Seas, which recounted the history of Taiwan.






Tech

A framework for software development self-service | Computer Weekly



Software development is associated with the idea of not reinventing the wheel, which means developers often select components or software libraries with pre-built functionality, rather than write code to achieve the same result.

There are many benefits of this approach. For example, a software component that is widely deployed is likely to have undergone extensive testing and debugging. It is considered tried and trusted, mature technology, unlike brand-new code, which has not been thoroughly debugged and may inadvertently introduce unknown cyber security issues into the business.

The Lego analogy is often used to describe how these components can be put together to build enterprise applications. Developers can draw on functionality made available through application programming interfaces (APIs), which provide programmatic access to software libraries and components.

Increasingly, in the age of data-driven applications and greater use of artificial intelligence (AI), API access to data sources is another Lego brick that developers can use to create new software applications. And just as with a set of old-school Lego bricks, constructing the application from the numerous software components available is left to the creativity of the software developer.

A Lego template for application development

To take the Lego analogy a bit further, there are instructions, templates and pathways developers can be encouraged to follow to build enterprise software that complies with corporate policies.


Roy Illsley, chief analyst, IT operations, at Omdia, defines an internal developer platform (IDP) as a developer self-service portal to access the tools and environments that the IT strategy has defined the organisation should standardise on. “A developer self-service platform provides a way for organisations to offer their developers almost pre-authorised assets, artefacts and tools that they can use to develop code,” he says.

The basic idea is to provide a governance framework with a suite of compliant tools. Bola Rotibi, chief of enterprise research at CCS Insight, says: “A developer self-service platform is really about trying to get a governance path.”

Rotibi regards the platform as “a golden path”, which gives less experienced developers a way to fast-track their work within a governance structure that still allows them a degree of flexibility and creativity.

Why offering flexibility to developers matters comes under the umbrella of developer experience and productivity. SnapLogic, which effectively provides modern middleware, is used in digital transformation projects to connect disparate systems and is now being repositioned for the age of agentic AI.

SnapLogic’s chief technology officer, Jeremiah Stone, says quite a few of the companies it has spoken to that identify as leaders in business transformation regard a developer portal offering self-service as something that goes hand-in-hand with digital infrastructure and AI-powered initiatives.

SnapLogic’s platform offers API management and service management, handling the lifecycle of services, version control and documentation through a developer portal called the Dev Hub.

Stone says the capabilities of this platform extend from software developers to business technologists, and now AI users, who, he says, may be looking for a Model Context Protocol (MCP) endpoint.

Such know-how captured in a self-service developer portal enables users – whether they are software developers, or business users using low-code or no-code tooling – to connect AI with existing enterprise IT systems.
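As an illustration of what such an AI-facing endpoint involves, here is a minimal sketch using the open-source MCP Python SDK. The endpoint URL and the tool name are hypothetical placeholders, and this is not SnapLogic's Dev Hub interface; it simply shows how an AI client discovers and calls tools exposed over MCP.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Hypothetical MCP endpoint exposed by an enterprise integration platform.
    endpoint = "https://integration.example.com/mcp/sse"
    async with sse_client(endpoint) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover the tools this endpoint exposes to AI clients.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Call a hypothetical tool wrapping an internal enterprise system.
            result = await session.call_tool("lookup_customer", {"customer_id": "42"})
            print(result.content)

asyncio.run(main())
```

The point for a self-service platform is that the same governed catalogue of services can be surfaced to human developers through a portal and to AI agents through endpoints like this.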

Enter Backstage

One platform that seems to have captured the minds of the developer community when it comes to developer self-service is Backstage. Having begun life internally at audio streaming site Spotify, Backstage is now an open source project managed by the Cloud Native Computing Foundation (CNCF).


Pia Nilsson, senior director of engineering at the streaming service, says: “At Spotify, we’ve learned that enabling developer self-service begins with standardisation. Traditional centralised processes create bottlenecks, but complete decentralisation can lead to chaos. The key is finding the middle ground – standardisation through design, where automation and clear workflows replace manual oversight.”

Used by two million developers, Backstage is an open source framework for building internal developer portals. Nilsson says Backstage provides a single, consistent entry point for all development activities – tools, services, documentation and data. She says this means “developers can move quickly while staying aligned with organisational standards”.
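To give a flavour of that single entry point, the short sketch below lists the components registered in a Backstage software catalog via the catalog's REST API. The instance URL and token are hypothetical placeholders, and real deployments differ in how they handle authentication.

```python
import requests

# Hypothetical Backstage instance and token (placeholders for illustration).
BASE_URL = "https://backstage.example.com"
TOKEN = "replace-with-a-real-token"

# Backstage's software catalog exposes registered entities over a REST API.
response = requests.get(
    f"{BASE_URL}/api/catalog/entities",
    params={"filter": "kind=component"},  # only Component entities
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()

# Each entity carries standard metadata: name, description, owner and so on.
for entity in response.json():
    name = entity["metadata"]["name"]
    owner = entity.get("spec", {}).get("owner", "unknown")
    print(f"{name}: owned by {owner}")
```

Having every component and its owner visible in one place is also what makes duplication easy to spot, a point Nilsson returns to below.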

Nilsson points out that standardising the fleet of components that comprise an enterprise technology stack is sometimes regarded as a large migration effort, moving everyone onto a single version or consolidating products into one. However, she says: “While that’s a critical part of standardising the fleet, it’s even more important to figure out the intrinsic motivator for the organisation to keep it streamlined and learn to ‘self-heal’ tech fragmentation.”

According to Nilsson, this is why it is important to integrate all in-house-built tools, as well as all the developer tools the business has purchased, in the same IDP. Doing so, she notes, makes it very easy to spot duplication. “Engineers will only use what they enjoy using, and we usually enjoy using the stuff we built ourselves because it’s exactly what we need,” she says.

The fact that Backstage is a framework is something IT leaders need to consider. In a recent blog post, Forrester analysts Christopher Condo and Lauren Alexander warned that most IDPs are frameworks that require assembly: “While many teams that implemented Backstage assumed that it would be an easy, free addition to their DevOps practices, that isn’t always the case. Backstage can be complex and requires engineering expertise to assemble, build and deploy.”

However, Forrester also notes that commercial IDP options are now available that include an orchestration layer on top of Backstage. These offer another option that may be a better fit for some organisations.

AI in an IDP

Beyond the assembly work organisations face if they do not buy a commercial IDP, AI is revolutionising software development, and its impact needs to be taken into account in any decision made around developer self-service and IDPs.

Spotify’s Nilsson believes it is important for IT leaders to figure out how to support AI tooling usage in the most impactful way for their company.

“Today, there is both a risk to not leveraging enough AI tools or having it very unevenly spread across the company, as well as the risk that some teams give in to the vibes and release low-quality code to production,” she says.

According to Nilsson, this is why the IT team responsible for the IDP needs to drive up the adoption of these tools and evaluate the impact over time. “At Spotify, we drive broad AI adoption through education and hack weeks, which we promote through our product Skill Exchange. We also help engineers use context-aware agentic tools,” she adds.

Looking ahead

In terms of AI tooling, an example of how developer self-service could evolve is the direction of travel SAP looks to be taking with its Joule AI copilot tool.


CCS Insight’s Rotibi believes the trend to integrate AI into developer tools and platforms is an area of opportunity for developer self-service platforms. Among the topics that caught her eye at the recent SAP TechEd conference in Berlin was the use of AI in SAP Joule.

SAP announced new AI assistants in Joule, which it said are able to coordinate multiple agents across workflows, departments and applications. According to SAP, these assistants plan, initiate and complete complex tasks spanning finance, supply chain, HR and beyond. 

“SAP Joule is an AI interface. It’s a bit more than just a chatbot. It is also a workbench,” says Rotibi. Given that Joule has access to the SAP product suite, she notes that, as well as providing access, Joule understands the products. “It knows all the features and functions SAP has worked on, and, behind the scenes, uses the best data model to get the data points the user wants,” she says.

Recognising that enterprise software developers will want to build their own applications and create their own integration between different pieces of software, she says SAP Joule effectively plays the role of a developer self-service portal for the SAP product suite.

Beyond what comes next with AI-powered functionality, there are numerous benefits to offering developer self-service to improve the overall developer experience, but structure and standards are essential.

Nilsson says: “When structure, automation and visibility are built into the developer experience, you replace bottlenecks with flow and create an environment where teams can innovate quickly, confidently and responsibly.”




Tech

Every Model of this New Snoopy MoonSwatch Is Different—And You Can Only Get One When It Snows



First a confession: I own more MoonSwatches than I care to admit. Never let it be said that WIRED does not walk the walk when it comes to recommending products—Swatch has assiduously extracted a considerable amount of cash from me, all in $285 increments. This was no doubt the Swiss company’s dastardly plan all along, to lure us in, then, oh so gently, get watch fans hooked. The horological equivalent of boiling a frog. It’s worked, too—Swatch has, so far, netted hundreds of millions of dollars from MoonSwatch sales.

But while I’ve been a fan of the Omega X Swatch mashup since we reported on exactly how the hugely lucrative collaboration came to be in the first place, I have never liked the iterative Moonshine Gold versions. Employing a sliver of Omega’s exclusive 18K pale yellow gold alloy in marginally different ways on each design, they seemed almost cynical—a way of milking the MoonSwatch superfans on the hunt to complete the set.

A hidden Snoopy message on the Cold Moon’s dial is revealed under UV light.

Photograph: Courtesy of Swatch


The MoonSwatch comes with a rubber strap upgrade over the original launch models.

Photograph: Courtesy of Swatch

Now, though, just when I thought I was done with MoonSwatch—having gone as far as to upgrade all of mine with official $45 color-matching rubber straps—Swatch has managed to ensnare me once again, and with a Moonshine Gold model: the new MoonSwatch Mission To Earthphase Moonshine Gold Cold Moon.

Clumsy moniker aside, this version takes the all-white 2024 Snoopy model (WIRED’s top pick of the entire collection), mixes it with the Earthphase MoonSwatches, and swaps the inferior original strap for a superior white-and-blue Swatch rubber velcro one. Aesthetically, it’s definitely a win, but that is not the Cold Moon’s party trick.

On each $450 Cold Moon MoonSwatch, a snowflake is lasered onto its Moonshine Gold moon phase indicator—and, just like a real snowflake, Swatch claims each one will be completely unique. When you consider the volumes of MoonSwatches Swatch produces each year, this is no mean feat.


The unique golden snowflakes appear on the moon phase dial of the Cold Moon.

Photograph: Courtesy of Swatch


