How to reduce greenhouse gas emissions from ammonia production

Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer but also in the production of some plastics, textiles, and other products. Its production, through processes that require high heat and pressure, accounts for up to 20% of the greenhouse gas emissions from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.
Now, researchers at MIT have come up with a clever way of combining two different production methods so that waste products are minimized. The combination, together with some other simple upgrades, could reduce greenhouse gas emissions from production by as much as 63% compared with the leading “low-emissions” approach used today.
The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two colleagues.
“Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering.
“It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.
Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile. But that source was beginning to run out, and there were predictions that the world would soon be unable to grow enough food to sustain its population. Then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to produce ammonia from nitrogen in the air and hydrogen derived mostly from methane. But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.
To address this, two newer variations of ammonia production have been developed: so-called blue ammonia, in which the greenhouse gases are captured right at the factory and then sequestered deep underground, and green ammonia, produced by a different chemical pathway that uses electricity instead of fossil fuels to electrolyze water to make hydrogen.
Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.
But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.
The problem is growing, because as the population grows, and as wealth increases, there will be ever-increasing demand for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.
“It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says.
“People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.”
So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.
The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.
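The chemistry behind that synergy can be sketched with textbook reactions (these are the standard stoichiometries, not figures taken from the MIT study): electrolysis in the green plant yields oxygen alongside hydrogen, autothermal reforming in the blue plant consumes oxygen to turn methane into hydrogen, and both hydrogen streams feed Haber-Bosch synthesis.

```latex
% Green plant: electrolysis splits water, venting O2 unless captured
2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}

% Blue plant: autothermal reforming burns some methane in pure O2
% (partial-oxidation step shown; steam reforming proceeds alongside it)
2\,\mathrm{CH_4} + \mathrm{O_2} \longrightarrow 2\,\mathrm{CO} + 4\,\mathrm{H_2}

% Either plant: Haber-Bosch synthesis converts hydrogen to ammonia
\mathrm{N_2} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{NH_3}
```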
“Putting them next to each other turns out to have significant economic value,” Green says.
This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.
“It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.”
But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.
“If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got.” This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.
Although the team did a detailed study of both the technology and the economics that showed the system has great promise, Green points out, “No one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process.
“I would say there’s plenty of additional work to do to make it a real industry.”
But the results of this study, which show the costs to be much more affordable than existing blue or green plants in isolation, “definitely encourage the possibility of people making the big investments that would be needed to really make this industry feasible.”
This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research.
“The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”
He adds, “Given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”
The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang.
More information:
Sayandeep Biswas et al., A Comprehensive Costing and Emissions Analysis of Blue, Green, and Combined Blue-Green Ammonia Production, Energy & Fuels (2025). DOI: 10.1021/acs.energyfuels.5c03111
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
Sony’s Thrilling Bravia Surround System Is $200 Off Right Now

It’s Prime Day time, and one of my favorite sonic surprises of 2025 is getting a sweet discount. The Sony Bravia Theater System 6 is one of the best soundbar setups I’ve tested this year, offering a unique mix of components that serve up some of the most thrilling and musical surround sound you can get for the money. There aren’t a ton of fancy features here, but you’ll get everything you need in one box to take your TV setup from boring to bodacious.
For plenty more deals on all sorts of gear, make sure you peruse our massive Absolute Best Prime Day deals post and our Amazon Prime Day live blog for all the best stuff we’ve tested and curated.
The Bravia Theater System 6 comes in a rather large box, with its hefty subwoofer taking up most of the real estate. The large cabinet serves as both the sonic foundation and the primary hub of the 5.1-channel system, offering all inputs and connecting to the slim soundbar via a small flat cable. Inputs include HDMI eARC for seamless TV connection, as well as digital optical and 3.5-mm analog input for legacy sources.
A small amplifier box connects to the subwoofer wirelessly, while two more flat cables connect the tall surround speakers. It’s a lot of wires for a single-box surround system in 2025, but the payoff is performance that gets refreshingly close to more complex multi-speaker setups. You’ll get punch and verve in the bass, smooth musicality and poised dialog from the bar, and clear and fluid surround channels from the back speakers. While there aren’t any upfiring speakers for 3D sound formats like Dolby Atmos, the System 6 does a commendable job virtualizing Atmos.
One thing you won’t get in the package is Wi-Fi support, which means you’ll be confined to Bluetooth streaming, and any updates need to be done manually with a USB drive, yet another callback to older Home Theater in a Box (HTiB) systems.
A bit of awkwardness in setup is worth it for the sheer cinematic performance the Bravia Theater System 6 serves up. It’s worth the splurge for many at full price, but this discount makes it a much easier choice for anyone looking to take their basic TV setup to the next level. If you want to be fully immersed in your films and TV shows, this setup delivers.
Using generative AI to diversify virtual training grounds for robots

Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or need an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often don’t reflect real-world physics) or by tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
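For intuition about the in-painting step, here is a toy, hand-written stand-in (real scene diffusion uses a learned denoiser over object types and poses, not this one-line update): the basic trick is to denoise from random noise while re-imposing the user-fixed parts of the scene after every step.

```python
import numpy as np

# Toy "in-painting" with a diffusion-style loop: denoise a 1D layout vector
# toward a known target while clamping the elements the user has fixed.
rng = np.random.default_rng(0)
target = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # layout the "model" has learned
fixed_mask = np.array([True, False, False, False, True])  # user-pinned slots
fixed_vals = np.array([0.0, 0.0, 0.0, 0.0, 9.0])       # pin the last slot to 9.0

x = rng.normal(size=5)  # start from pure noise
steps = 100
for t in range(steps, 0, -1):
    noise_scale = t / steps
    # Stand-in denoiser: step toward the learned layout, plus shrinking noise.
    x = x + 0.1 * (target - x) + 0.05 * noise_scale * rng.normal(size=5)
    # In-painting: re-impose the known (user-fixed) slots after every step.
    x[fixed_mask] = fixed_vals[fixed_mask]

# Free slots land near the learned layout; pinned slots stay where the user put them.
print(np.round(x, 2))
```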
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), in which the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). The same search strategy was used by the AI program AlphaGo to beat human opponents at Go (an abstract strategy board game), as the system considers potential sequences of moves before choosing the most advantageous one.
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
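As a rough illustration of that sequential framing (a toy sketch, not the team’s code: the object list, the random proposal step, and the edible-items objective are all stand-ins for the diffusion model and objectives described above), an MCTS over partial scenes might look like this:

```python
import math
import random

# Toy stand-ins: a "scene" is a list of object names, the diffusion model is
# replaced by a random proposal step, and the objective counts edible items.
OBJECTS = ["plate", "fork", "apple", "bread", "bowl", "dumpling"]
EDIBLE = {"apple", "bread", "dumpling"}
MAX_OBJECTS = 6

def propose_extensions(scene, k=3):
    """Stand-in for sampling k alternative ways to extend a partial scene."""
    return [scene + [random.choice(OBJECTS)] for _ in range(k)]

def objective(scene):
    """Reward scenes containing many edible items (one possible objective)."""
    return sum(1 for obj in scene if obj in EDIBLE)

class Node:
    def __init__(self, scene, parent=None):
        self.scene, self.parent = scene, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        """Upper confidence bound: balance high scores against unexplored branches."""
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(root_scene, iterations=500):
    root = Node(root_scene)
    for _ in range(iterations):
        node = root
        # Selection: descend to a promising leaf using UCB.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: add alternative extensions of this partial scene.
        if len(node.scene) < MAX_OBJECTS:
            node.children = [Node(s, node) for s in propose_extensions(node.scene)]
            node = random.choice(node.children)
        # Rollout: finish the scene randomly, then score the result.
        scene = list(node.scene)
        while len(scene) < MAX_OBJECTS:
            scene.append(random.choice(OBJECTS))
        reward = objective(scene)
        # Backpropagation: credit every choice on the path that led here.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).scene

print(mcts(["table"]))
```

Each iteration extends a promising partial scene, finishes it randomly, scores the result, and credits the choices that led there; consistently preferring well-scoring branches is what lets the search compose scenes more complex than any single sample from the underlying model.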
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
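In spirit, that second stage resembles a reward-weighted update loop. The schematic below is an assumption-laden sketch, not the paper’s method: the “generator” is a one-parameter stand-in for the diffusion model, and the reward simply favors scenes of a target size.

```python
import random

def generator(params):
    """Placeholder model: sample a 'scene' whose size drifts with one parameter."""
    n_objects = max(1, round(random.gauss(params["mean_objects"], 2)))
    return ["object"] * n_objects

def reward(scene, target=10):
    """Score scenes: the closer to `target` objects, the better."""
    return -abs(len(scene) - target)

params = {"mean_objects": 4.0}
lr = 0.05
for step in range(200):
    # Sample a small batch of scenes and score them against the objective.
    scenes = [generator(params) for _ in range(16)]
    scores = [reward(s) for s in scenes]
    baseline = sum(scores) / len(scores)
    # Nudge the sampling distribution toward above-average scenes
    # (a crude stand-in for a policy-gradient update on the model).
    for scene, score in zip(scenes, scores):
        advantage = score - baseline
        params["mean_objects"] += lr * advantage * (len(scene) - params["mean_objects"]) / 16

print(round(params["mean_objects"], 1))  # drifts toward the rewarded size (~10)
```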
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where the researchers could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, a preview of the adaptable, real-world robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by drawing on a library of objects and scenes pulled from images on the internet, building on their previous work, “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that will create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, who is also a senior vice president of large behavior models at the Toyota Research Institute and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
Open-source tool predicts wind farm power fluctuations with greater short-term accuracy

Researchers from TU Delft, a partner of the SUDOCO project, in collaboration with the Université catholique de Louvain (Belgium) and the National Renewable Energy Laboratory in Golden, Colorado (U.S.), have developed a new open-source wake modeling framework called “OFF,” which enhances existing models such as OnWARDS, FLORIDyn, and FLORIS.
OFF enables the approximation of wind farm flow control (WFFC) strategies under dynamically changing conditions.
Today, most models rely on simplified steady-state assumptions that overlook short-term variability and the transient behavior of turbine wakes, limiting their ability to capture the true dynamics of wind farm interactions.
OFF addresses this gap by incorporating time-dependent dynamics, and when tested with real-world data from the Hollandse Kust Noord wind farm in the Netherlands, it demonstrated improved accuracy in predicting power output and turbine interactions, particularly over short time scales of less than 20 minutes.
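A toy calculation shows the kind of short-term behavior at stake (illustrative numbers and a hand-rolled deficit model, not OFF’s equations): when an upstream turbine steers its wake away, a steady-state model updates the downstream turbine instantly, while a dynamic model waits for the changed flow to travel the distance between the machines.

```python
# Minimal sketch of why time dependence matters for wake models.
DISTANCE = 800.0    # m between upstream and downstream turbine
WIND_SPEED = 8.0    # m/s free-stream speed
WAKE_DEFICIT = 0.3  # fractional speed loss inside the upstream wake
DT = 10.0           # s time step

def power(u):
    """Toy turbine power curve: P ~ u^3 (arbitrary units)."""
    return u ** 3

# Scenario: at t = 0 the upstream turbine yaws, steering its wake off the
# downstream machine. The cleared flow still needs ~100 s to advect 800 m.
delay_steps = int(DISTANCE / WIND_SPEED / DT)

print("t[s]  steady-state  dynamic")
for step in range(12):
    t = step * DT
    u_steady = WIND_SPEED  # steady-state model: reacts instantly
    # Dynamic model: the old wake keeps hitting the rotor until the
    # changed flow has traveled the full distance downstream.
    u_dyn = WIND_SPEED if step >= delay_steps else WIND_SPEED * (1 - WAKE_DEFICIT)
    print(f"{t:4.0f}  {power(u_steady):12.0f}  {power(u_dyn):7.0f}")
```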
These findings highlight OFF’s potential to balance energy gains with reduced turbine wear, making it a valuable tool for both scientists and industry.
The work is presented in the study “A dynamic open-source model to investigate wake dynamics in response to wind farm flow control strategies” published in Wind Energy Science.
The case study used a 24-hour wind direction time series based on field data, and subsets of the series were verified using Large-Eddy Simulation (LES). Results show that yaw movements strongly depend on the controller settings and indicate how to balance power gains with actuator usage.
Compared to LES, the dynamic wake model predicts short-term turbine power fluctuations more accurately than steady-state models, capturing high-frequency dynamics with better correlation and lower error.
By providing a transparent, accessible, and efficient platform, OFF empowers the wind energy community to accelerate the development of advanced control strategies and drive the transition toward more reliable and sustainable offshore wind power.
More information:
Marcus Becker et al., A dynamic open-source model to investigate wake dynamics in response to wind farm flow control strategies, Wind Energy Science (2025). DOI: 10.5194/wes-10-1055-2025