
AI method reconstructs 3D scene details from simulated images using inverse rendering

Layout generation. a, Images for two scenes observed by a single camera. b, Test-time optimized inverse rendered objects. c, BEV layouts of the scenes. In the BEV layout (a common representation for autonomous driving tasks), black boxes represent the ground truth and colored boxes represent predicted BEV boxes. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x

Over the past decades, computer scientists have developed many computational tools that can analyze and interpret images. These tools have proved useful for a broad range of applications, including robotics, autonomous driving, health care, manufacturing and even entertainment.

Most of the best-performing computer vision approaches employed to date rely on so-called feed-forward neural networks: computational models that process input images step by step, ultimately making predictions about them.

While some of these models perform well when tested on data resembling what they analyzed during training, they often do not generalize well to new images and different scenarios. In addition, their predictions and the patterns they extract from images can be difficult to interpret.

Researchers at Princeton University recently developed a new inverse rendering approach that is more transparent and could also interpret a wide range of images more reliably. The new approach, introduced in a paper published in Nature Machine Intelligence, relies on a generative artificial intelligence (AI) model to simulate the process of image creation, then inverts that process by gradually adjusting the model’s inputs until the simulated image matches the observation.

“Generative AI and neural rendering have transformed the field in recent years for creating novel content: producing images or videos from scene descriptions,” Felix Heide, senior author of the paper, told Tech Xplore. “We investigate whether we can flip this around and use these generative models for extracting the scene descriptions from images.”







Video of the team’s tracking results: a demonstration of the proposed inverse neural rendering-based tracking method on a sample of diverse scenes from the nuScenes dataset and the Waymo Open Dataset. The observed image is overlaid with the rendered objects through alpha blending with a weight of 0.4. Object renderings are defined by the averaged latent embeddings z_k,EMA and the tracked object state y_k. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x

The new approach developed by Heide and his colleagues relies on a so-called differentiable rendering pipeline: a simulation of the image-formation process that operates on compressed scene representations produced by generative AI models.

“We developed an analysis-by-synthesis approach that allows us to solve vision tasks, such as tracking, as test-time optimization problems,” explained Heide. “We found that this method generalizes across datasets, and in contrast to existing supervised learning methods, does not need to be trained on new datasets.”

Essentially, the method developed by the researchers works by placing models of 3D objects in a virtual scene depicting a real-world setting. These object models are produced by a generative AI model from randomly sampled 3D scene parameters.

“We then render all these objects back together into a 2D image,” said Heide. “Next, we compare this rendered image with the real observed image. Based on how different they are, we backpropagate the difference through both the differentiable rendering function and the 3D generation model to update its inputs. In just a few steps, we optimize these inputs to make the rendered image match the observed image better.”
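To make the loop Heide describes concrete, here is a minimal analysis-by-synthesis sketch. It is not the authors’ implementation: generator, render, the latent code and the pose variable are hypothetical stand-ins for the paper’s pretrained 3D object generator, differentiable renderer and per-object state, and a plain pixel-wise loss stands in for whatever image comparison the method actually uses.

```python
# Minimal sketch of test-time inverse rendering (hypothetical stand-ins, not the
# authors' code). A frozen generative object model maps a latent code to object
# features; a differentiable renderer turns those features plus a 3D pose into an
# image; gradients of an image loss update only the latent code and the pose.
import torch
import torch.nn.functional as F

def generator(z):
    # Stand-in for a pretrained, frozen 3D object generator (latent -> object features).
    return torch.tanh(z @ torch.ones(z.shape[-1], 64))

def render(obj_features, pose, image_shape=(3, 64, 64)):
    # Stand-in for a differentiable renderer (object features + pose -> RGB image).
    brightness = obj_features.mean() + 0.01 * pose.sum()
    return brightness.expand(*image_shape)

observed = torch.rand(3, 64, 64)              # the real camera image
z = torch.randn(1, 16, requires_grad=True)    # randomly initialized latent code
pose = torch.zeros(6, requires_grad=True)     # 3D position and orientation, also optimized

optimizer = torch.optim.Adam([z, pose], lr=0.05)
for step in range(100):
    rendered = render(generator(z), pose)     # synthesize an image from the current guess
    loss = F.mse_loss(rendered, observed)     # compare rendered vs. observed image
    optimizer.zero_grad()
    loss.backward()                           # backpropagate through renderer and generator
    optimizer.step()                          # refine the latent code and pose
```

In the actual method the generator and renderer are far richer, and the comparison drives both an object’s appearance (through its latent embedding) and its 3D bounding-box state toward the observation; the sketch only illustrates the direction of gradient flow.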

Optimizing 3D models through inverse neural rendering. From left to right: the observed image, initial random 3D generations, and three optimization steps that refine these to better match the observed image. The observed images are faded to show the rendered objects clearly. The method effectively refines object appearance and position, all done at test time with inverse neural rendering. Credit: Ost et al.

Generalization of 3D multi-object tracking with Inverse Neural Rendering. The method directly generalizes across datasets such as the nuScenes and Waymo Open Dataset benchmarks without additional fine-tuning and is trained on synthetic 3D models only. The observed images are overlaid with the closest generated object and tracked 3D bounding boxes. Credit: Ost et al.

A notable advantage of the team’s newly proposed approach is that it allows very generic 3D object generation models trained on synthetic data to perform well across a wide range of datasets containing images captured in real-world settings. In addition, the renderings it produces are far more explainable than the predictions of conventional feed-forward machine learning models.

“Our inverse rendering approach for tracking works just as well as learned feed-forward approaches, but it provides us with explicit 3D explanations of its perceived world,” said Heide.

“The other interesting aspect is the generalization capabilities. Without changing the 3D generation model or training it on new data, our 3D multi-object tracking through Inverse Neural Rendering works well across different autonomous driving datasets and object types. This can significantly reduce the cost of fine-tuning on new data or at least work as an auto-labeling pipeline.”

This recent study could soon help to advance AI models for computer vision, improving their performance in real-world settings while also increasing their transparency. The researchers now plan to continue improving their method and start testing it on more computer vision-related tasks.

“A logical next step is the expansion of the proposed approach to other perception tasks, such as 3D detection and 3D segmentation,” added Heide. “Ultimately, we want to explore if inverse rendering can even be used to infer the whole 3D scene, and not just individual objects. This would allow our future robots to reason and continuously optimize a three-dimensional model of the world, which comes with built-in explainability.”

Written by Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.

More information:
Julian Ost et al, Towards generalizable and interpretable three-dimensional tracking with inverse neural rendering, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01083-x.

© 2025 Science X Network

Citation:
AI method reconstructs 3D scene details from simulated images using inverse rendering (2025, August 23)
retrieved 23 August 2025
from https://techxplore.com/news/2025-08-ai-method-reconstructs-3d-scene.html



The Best Cyber Monday Streaming Deals With a Convenient Roommate’s Email Address


HBO knows you’re bored and cold. It wants you to Max and chill with Noah Wyle in scrubs. The company offers some of the best Cyber Monday streaming deals with a ridiculously low-priced $3/month offer for basic HBO Max (it’s the version with ads and 2K streaming, but still, super-cheap). Disney Plus and Hulu deals are bundled up for $5/month. Apple TV wants back in your life for $6.

Of course, this deal is only meant for new customers. Not boring ol’ existing customers. If you already have basic HBO Max, you’re already paying $11 for the same service, and HBO would like you to keep doing that. Streaming apps are banking on you being complacent and happy in your streaming life. Maybe they’re even taking you for granted.

Sometimes you can get the current deal just by threatening to cancel, or actually canceling, your account. Suddenly, you’re an exciting new customer again! Another method is to sign up as a new customer with an alternate email account (perhaps your spouse’s or roommate’s?) and alternate payment information. If you do use a burner email (you did not hear this from me), check your favorite app’s terms of service to make sure you’re not in violation by re-enrolling with different emails. I’ll also issue the caveat that you lose all your viewing data and tailored suggestions if you sign up anew.

But times and wallets are tight! And $3 HBO Max sounds pretty good. After all, every middle-aged American man needs to rewatch The Wire once every five years or so—assuming he’s not the kind of middle-aged man who rewatches The Sopranos instead. Here are the current best streaming deals for Cyber Monday 2025.


Devon Maloney




SAP user group chair warns of AI low-hanging fruit risks | Computer Weekly


The UK and Ireland SAP User Group (UKISUG) Connect 25 conference has opened in Birmingham with a keynote session recognising the challenges businesses face.

The user group itself has adapted to changes in the technology market, such as the advent of artificial intelligence (AI) in business applications, and to an economic climate that has a profound effect on its members’ ability to deliver value with enterprise technology.

In his keynote presentation, Conor Riordan, chair of UKISUG, said: “As an organisation, we have to change, to position ourselves as we move from the old to the new.”

The user group has a 2030 plan recognising the shifts in enterprise software. For instance, there is the shift to no-code and low-code tooling, which has implications for the agility of enterprise software development. Riordan noted that the current business climate and geopolitical volatility mean there is huge pressure to reduce costs, leading to cuts in training budgets and the challenge of delivering more with less, adding: “We need to have process change.”

Looking ahead to a future where organisations use data to make more dependable decisions, Riordan noted that SAP is moving to a dynamic ecosystem of applications and AI, but the challenge is how quickly businesses can start taking advantage of the AI now available in their business applications. “We see members say SAP AI will help them,” Riordan said.

But many are concerned about how the new technology now available will deliver a return on investment (ROI). For Riordan, IT decision-makers need to be wary of tackling the so-called low-hanging fruit, the use cases that the industry sells to the executive team: “It is really complex work, and the low-hanging fruit is not that low hanging. It will take years, not months, to deliver value.”

A poll of delegates at the conference found that 78% of respondents are just getting started with AI, while 29% say their AI initiatives have under-delivered.

“This stuff is not easy,” Riordan said, adding that the challenge is one of process re-engineering and culture change, and that he believes humans need to be at the centre of decision-making. “We ask partners to be reasonable in their productivity claims so we can all succeed together.”

The Value of AI in the UK: Growth, people & data, a report from SAP and Oxford Economics published in October 2025, notes that customers are investing £16m in AI on average this year. The report’s authors predict this will increase by 40% within the next two years. However, the theme coming out of the keynote session at Connect 25 is that few companies are really using AI.

Another big topic covered during the keynote is the end of support for SAP products. With SAP’s 2027 maintenance deadline for SAP ECC 6.0 fast approaching, many organisations are now embarking on their migration journey to SAP S/4Hana. More than half (54%) of respondents said that gaining access to SAP’s AI offerings will influence their future deployment of SAP.

Among attendees of Connect 25, 49% said they are working towards the 2027 deadline. Riordan called on SAP to help customers move to the cloud and build a tangible business case.

During her keynote speech, Leila Romane, managing director of SAP UK & Ireland, spoke about the AI opportunity, saying: “We are helping customers unleash new value with business AI.”

SAP’s strategy is to drive business value through the power of AI, data and its enterprise applications, with the SAP Cloud integral to delivering AI enablement across its enterprise software suite. Romane said SAP recognised that its customers were all at different stages of their cloud journey, adding: “Our commitment is to help you move.”




Hong Kong FWA services market set for 9.6% growth | Computer Weekly


Analysis from GlobalData forecasts that fixed wireless access (FWA) service revenue in Hong Kong will increase at a “healthy” compound annual growth rate (CAGR) of 9.6% between 2025 and 2030.
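For readers who want to unpack that figure, the snippet below applies the standard compound-growth formula; the starting revenue is a placeholder, since the article does not quote GlobalData’s absolute numbers.

```python
# Compound annual growth: revenue_2030 = revenue_2025 * (1 + CAGR) ** years.
# The 9.6% CAGR comes from the article; any base revenue is an arbitrary placeholder.
cagr = 0.096
years = 2030 - 2025
multiplier = (1 + cagr) ** years
print(f"A {cagr:.1%} CAGR over {years} years multiplies revenue by about {multiplier:.2f}x")
# Roughly 1.58x, i.e. about 58% total growth between 2025 and 2030.
```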

The latest Hong Kong Total Fixed Communications Forecast set out to quantify current and future demand and spending on communications services in the special administrative region of China. It noted that growth is being driven by Hong Kong’s extensive 5G network coverage, as well as by local operators’ efforts to expand FWA services and position them as an alternative to traditional fibre broadband for both residential and commercial customers, meeting growing demand for high-speed connectivity in areas where extending fibre lines is challenging.

“High-density urban and suburban centres of Hong Kong create a strong business case for FWA services due to their cost-effective and rapid deployments without the complex infrastructure and civil work required for extending fibre-optic lines to such locations,” said Neha Misra, senior analyst at GlobalData.

“Competitive, feature-rich plans from the operators will also help drive its adoption over the forecast period. For instance, HKBN’s 5G Home Broadband Plan provides unlimited 5G broadband data (subject to a 300GB fair-usage policy) for HKD118 per month on a 24-month contract, along with a seven-day trial guarantee. The plan also includes a waiver of the HKD28 monthly administration fee and complimentary access to the basic HomeShield security plan.”

In addition to HKBN, the study noted that operators such as 3 Hong Kong and HKT are also using their extensive 5G networks to offer home broadband services, particularly in areas with limited fibre infrastructure. It cited HKT as recently having successfully deployed mmWave-based FWA to deliver ultra-high-speed internet to rural areas and outlying islands.

“Growing demand for FWA provides operators a strong revenue opportunity by expanding home and SME broadband without the high capital intensity of fibre roll-out,” Misra added. “By leveraging nationwide 5G coverage, introducing competitively priced service plans and bundling digital home services, operators can unlock higher ARPU [average revenue per user], accelerate market penetration in underserved areas and diversify beyond traditional revenues.”

GlobalData believes the Hong Kong government’s smart city initiatives will also open new opportunities for FWA, especially 5G FWA, which can deliver high-speed internet to power applications such as the digital economy, digital governance and e-health services, while supporting the city’s dense urban environment and digital transformation goals under the Smart City Blueprint 2.0.

The original blueprint was set out in December 2017, outlining 76 initiatives under six smart areas, namely Smart Mobility, Smart Living, Smart Environment, Smart People, Smart Government and Smart Economy. Blueprint 2.0 puts forth more than 130 initiatives that continue to enhance and expand existing city management measures and services. The new initiatives aim to bring convenience to the public so that residents can better perceive the benefits of smart city innovation and technology.


