Open-source tool predicts wind farm power fluctuations with greater short-term accuracy


Researchers from TU Delft, a partner of the SUDOCO project, in collaboration with the Université catholique de Louvain (Belgium) and the National Renewable Energy Laboratory in Golden (U.S.), have developed a new open-source wake modeling framework called “OFF,” which builds on and extends existing models such as OnWARDS, FLORIDyn, and FLORIS.

OFF enables the approximation of wind farm flow control (WFFC) strategies under dynamically changing conditions.

Today, most models rely on simplified steady-state assumptions that overlook short-term variability and the transient behavior of turbine wakes, limiting their ability to capture the true dynamics of wind farm interactions.

OFF addresses this gap by incorporating time-dependent dynamics. When tested with real-world data from the Hollandse Kust Noord wind farm in the Netherlands, it demonstrated improved accuracy in predicting power fluctuations and turbine interactions, particularly over short time scales of less than 20 minutes.
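As a rough, hypothetical illustration of why that matters (a toy sketch, not OFF's actual code), consider two turbines in a row: a steady-state model updates both from the current inflow instantly, while a dynamic model delays the downstream response by the wake's travel time:

```python
import numpy as np

# Toy two-turbine illustration (an assumption for illustration, not OFF's
# actual implementation): a steady-state model lets every turbine react to
# the current inflow instantly, while a dynamic model delays the downstream
# response by the time the wake needs to travel between the turbines.

def steady_state_power(u_now, deficit=0.3):
    # Toy power curve P ~ u^3; the downstream turbine sees the waked inflow
    # of the current time step, with no transport delay.
    return 0.5 * u_now**3 + 0.5 * ((1 - deficit) * u_now) ** 3

def dynamic_power(u_series, dt=1.0, spacing=630.0, deficit=0.3):
    # The downstream turbine responds to the inflow that passed the upstream
    # turbine one advection time (spacing / wind speed) earlier.
    p = np.zeros(len(u_series))
    for k, u in enumerate(u_series):
        delay = int(spacing / max(u, 1e-3) / dt)
        u_waked = (1 - deficit) * u_series[max(k - delay, 0)]
        p[k] = 0.5 * u**3 + 0.5 * u_waked**3
    return p
```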

These findings highlight OFF’s potential to balance energy gains with reduced turbine wear, making it a valuable tool for both scientists and industry.

The work is presented in the study “A dynamic open-source model to investigate wake dynamics in response to wind farm flow control strategies,” published in Wind Energy Science.

The study used a 24-hour wind direction time series based on field data, and subsets of the series were verified using Large-Eddy Simulation (LES). Results show that yaw movements strongly depend on the controller settings and indicate how to balance power gains with actuator usage.

Compared against LES reference data, the dynamic wake model predicts short-term turbine power fluctuations more accurately than steady-state models, capturing high-frequency dynamics with better correlation and lower error.
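That kind of comparison boils down to two simple metrics; the sketch below is a hypothetical illustration of how such a score could be computed, not the paper's analysis code:

```python
import numpy as np

def compare_to_les(p_model, p_les):
    """Correlation and RMSE between a model's turbine-power time series and
    the LES reference; higher correlation and lower RMSE mean the model
    tracks the short-term fluctuations more closely."""
    p_model, p_les = np.asarray(p_model, float), np.asarray(p_les, float)
    corr = np.corrcoef(p_model, p_les)[0, 1]
    rmse = np.sqrt(np.mean((p_model - p_les) ** 2))
    return corr, rmse
```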

By providing a transparent, accessible, and efficient platform, OFF empowers the wind energy community to accelerate the development of advanced control strategies and drive the transition toward more reliable and sustainable offshore power.

More information:
Marcus Becker et al, A dynamic open-source model to investigate wake dynamics in response to wind farm flow control strategies, Wind Energy Science (2025). DOI: 10.5194/wes-10-1055-2025

Provided by iCube Programme








Using generative AI to diversify virtual training grounds for robots




Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or need an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.

Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which don’t often reflect real-world physics), or tediously handcrafting each digital environment from scratch.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.

Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
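As a rough sketch of how such masked “in-painting” can work (the denoising model here is a hypothetical stand-in, not the team's actual network), the parts of the scene that are already decided are re-imposed at every denoising step while the model fills in everything else:

```python
import numpy as np

# Hypothetical sketch of masked "in-painting" with a diffusion-style sampler
# (denoise_step stands in for a trained scene-diffusion model and is not the
# team's actual network): object-placement variables that are already fixed
# (mask == 1) are re-imposed at every step, so the model only fills the rest.

def inpaint_scene(x_known, mask, denoise_step, steps=50, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=x_known.shape)        # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)                # model proposes a cleaner scene
        noise_level = t / steps
        x_fixed = x_known + noise_level * rng.normal(size=x_known.shape)
        x = mask * x_fixed + (1 - mask) * x   # keep the known objects
    return x
```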

How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.

“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
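A minimal sketch of that framing might look like the following, with each tree node holding a partial scene, expansions proposed by the generative model, and rollouts scored against the chosen objective; the propose_additions and score_scene callables are hypothetical placeholders, not the paper's code:

```python
import math, random

# Hedged sketch of Monte Carlo tree search over partial scenes: each node is
# a partial scene, each expansion adds objects via the generative model, and
# leaves are scored by an objective such as physical feasibility or object
# count. propose_additions and score_scene are hypothetical placeholders.

class Node:
    def __init__(self, scene, parent=None):
        self.scene, self.parent = scene, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(root_scene, propose_additions, score_scene, iters=200):
    root = Node(root_scene)
    for _ in range(iters):
        node = root
        # Selection: descend to the most promising partial scene.
        while node.children:
            node = max(node.children, key=lambda n: n.ucb())
        # Expansion: let the generative model extend the partial scene.
        for scene in propose_additions(node.scene):
            node.children.append(Node(scene, parent=node))
        leaf = random.choice(node.children) if node.children else node
        # Evaluation and backpropagation of the objective score.
        reward = score_scene(leaf.scene)
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).scene
```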

In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.

Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
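Conceptually, that second stage can be sketched as a reward-weighted update loop; the generator methods below are hypothetical stand-ins for the actual training code:

```python
import numpy as np

# Hypothetical sketch of the reward-driven second training stage: sample
# scenes from the pretrained generator, score them with a user-defined
# reward, and nudge the model toward samples that beat the batch average
# (a REINFORCE-style update). The generator's sample/update methods are
# assumed stand-ins, not the paper's actual training code.

def reward_finetune(generator, reward_fn, rounds=100, batch=16, lr=1e-3):
    for _ in range(rounds):
        scenes = [generator.sample() for _ in range(batch)]
        rewards = np.array([reward_fn(s) for s in scenes])
        advantages = rewards - rewards.mean()   # baseline reduces variance
        for scene, adv in zip(scenes, advantages):
            # Raise the likelihood of above-average scenes, lower the rest.
            generator.update(scene, weight=lr * adv)
    return generator

# Example reward (hypothetical): prefer physically feasible scenes that
# also contain many objects.
# reward_fn = lambda s: float(is_feasible(s)) + 0.1 * len(s.objects)
```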

Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”

The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.

According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”

Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the real-world, adaptable robots steerable scene generation could help train, one day.

While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.

To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by using a library of objects and scenes pulled from images on the internet and using their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.

“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”

“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”

Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT; a senior vice president of large behavior models at the Toyota Research Institute; and CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.




NeoStampa update brings new ink & calibration tools




The latest version of neoStampa, 25.7, is now live, delivering new features aimed at improving user productivity and printing precision across a wide range of applications.

Among the headline additions is support for fluorescent orange and green inks, expanding the system’s colour gamut and enabling more vibrant, high-impact prints. A smarter ink display interface now automatically hides fill ink ratios when dilution inks are not present, offering a cleaner and more focused user experience.


Another key enhancement is advanced droplet control—the new DRD (Dynamic Resolution Droplet) calculation feature allows users to disable intermediate droplet sizes, offering more flexibility, neoStampa said in a release.

For users working with X-Rite devices, the update introduces new 51-patch linearisation targets for i1 Pro 3 Plus, allowing for improved calibration accuracy with additional linearisation target support. Preview functionality has also been enhanced with FullWidthRip-compatible previews, ensuring job simulations more accurately reflect final output settings.

Workflow speed gets a boost with faster PDF rendering tailored to specific use cases, while users can now generate and export Cobra registration marks for simplified alignment and registration during print setup.

Lastly, a new .xjb optimisation feature has been introduced, allowing source images to be streamlined within the Print Server—increasing overall efficiency.

Fibre2Fashion News Desk (HU)




TAG Heuer’s New Smartwatch Ditches Google’s Wear OS to Be Apple Friendly



Right as Google’s Wear OS is hitting its stride—now feature-rich with robust smartwatches that can go toe-to-toe with the Apple Watch—luxury watchmaker TAG Heuer has decided to ditch the operating system altogether for its latest Connected Calibre E5 smartwatch. Instead, it runs a proprietary “TAG Heuer OS” (still based on Android). But unlike many of the latest Wear OS smartwatches designed only for Android phones, this one is compatible with iPhones.

That’s likely one of the biggest reasons for the switch-up, as Google seems to have abandoned making its smartwatch platform compatible with Apple’s hardware (Apple never made it easy, though this could change). It also allows the Swiss watchmaker to be less dependent on the whims of Google, but ultimately, it means TAG’s smartwatch will not have access to the wealth of apps found on Google and Apple’s respective platforms.

I spent a few days with the 45-mm Calibre E5 (there’s also a new 40-mm variant), and this fifth-generation smartwatch feels polished, despite the software change. It’s also striking in its design, unlike any other smartwatch, with premium materials like a ceramic bezel, domed sapphire crystal, and snazzy band options. Unsurprisingly, the version I tried will cost you a punchy $2,000 when it goes on sale this month (and goes up to $2,800 for other variations), though the smaller 40-mm Calibre E5 starts at $1,800.

A Luxe Smartwatch

Comfortable and premium: the new TAG smartwatch sports a ceramic bezel, flat or domed sapphire crystal, titanium and DLC, depending on what model you choose.


The Calibre E5 has a nice heft to it that helps make it feel luxe enough to match that price point. A stainless steel polished case, black polished ceramic bezel with silver markings, and flat sapphire crystal also further the premium pedigree. TAG has, of course, several other variations. You can get a black diamond-like carbon (DLC) grade-2 titanium sandblasted case, white and green indices, or a domed sapphire crystal over the display, exclusive to the new 40-mm case.

The sloped lugs offer a comfortable fit, and the metal bracelet integrates well with the case. It’s interchangeable (there’s a button you press on the underside to release it), though these straps are expressly designed for the E5. Still, I was able to pop on a 22-mm pin buckle strap from one of my other watches without issues.

Despite being weighty, I didn’t mind wearing this smartwatch to sleep, though you may want a more comfortable strap. However, when I woke up the next morning, I spent a few minutes hunting for my sleep results, only to learn they don’t exist. Yet. TAG says it plans to add sleep tracking, likely in December via a software update, which is important considering this is a staple feature on most smartwatches these days.

One of the boons of smartwatches is that you can switch between several watch faces, and the E5 is no exception; many here nicely mimic the designs of TAG’s mechanical watches, such as the Carrera or Aquaracer. It’s fairly simple to customize these on the watch itself, too, choosing different accent colors, backgrounds, and complications.


