Tech
TAG Heuer’s New Smartwatch Ditches Google’s Wear OS to Be Apple Friendly

Right as Google’s Wear OS is hitting its stride—now feature-rich with robust smartwatches that can go toe-to-toe with the Apple Watch—luxury watchmaker TAG Heuer has decided to ditch the operating system altogether for its latest Connected Calibre E5 smartwatch. Instead, it runs a proprietary “TAG Heuer OS” (still based on Android). But unlike many of the latest Wear OS smartwatches designed only for Android phones, this one is compatible with iPhones.
That’s likely one of the biggest reasons for the switch-up, as Google seems to have abandoned making its smartwatch platform compatible with Apple’s hardware (Apple never made it easy, though this could change). It also allows the Swiss watchmaker to be less dependent on the whims of Google, but ultimately, it means TAG’s smartwatch will not have access to the wealth of apps found on Google and Apple’s respective platforms.
I spent a few days with the 45-mm Calibre E5 (there’s also a new 40-mm variant), and this fifth-generation smartwatch feels polished, despite the software change. It’s also striking in its design, unlike any other smartwatch, with premium materials like a ceramic bezel, domed sapphire crystal, and snazzy band options. Unsurprisingly, the version I tried will cost you a punchy $2,000 when it goes on sale this month (and goes up to $2,800 for other variations), though the smaller 40-mm Calibre E5 starts at $1,800.
A Luxe Smartwatch
The Calibre E5 has a nice heft to it that helps make it feel luxe enough to match that price point. A polished stainless steel case, black polished ceramic bezel with silver markings, and flat sapphire crystal further the premium pedigree. TAG has, of course, several other variations. You can get a black diamond-like carbon (DLC) grade-2 titanium sandblasted case, white and green indices, or a domed sapphire crystal over the display, exclusive to the new 40-mm case.
The sloped lugs offer a comfortable fit, and the metal bracelet integrates well with the case. It’s interchangeable (there’s a button you press on the underside to release it), though these straps are expressly designed for the E5. Still, I was able to pop on a 22-mm pin buckle strap from one of my other watches without issues.
Despite being weighty, I didn’t mind wearing this smartwatch to sleep, though you may want a more comfortable strap. However, when I woke up the next morning, I spent a few minutes hunting for my sleep results, only to learn they don’t exist. Yet. TAG says it plans to add sleep tracking, likely in December via a software update, a notable gap given that sleep tracking is a staple feature on most smartwatches these days.
One of the boons of smartwatches is that you can switch between several watch faces, and the E5 is no exception. Nicely, many of the faces here mimic the designs of TAG’s mechanical watches, such as the Carrera or Aquaracer. It’s fairly simple to customize these on the watch itself, too, choosing different accent colors, backgrounds, and complications.
Tech
Using generative AI to diversify virtual training grounds for robots

Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or hunting for the answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often don’t reflect real-world physics) or by tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
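To make the clipping idea concrete, here is a minimal, hypothetical Python sketch (not the paper's code) that flags interpenetrating geometry by checking whether two objects' axis-aligned bounding boxes overlap; real physics engines use far more detailed collision shapes, so this only illustrates the logic of the check.

```python
# Minimal sketch (not the paper's code): detecting "clipping" between two
# objects by checking whether their axis-aligned bounding boxes overlap.
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned bounding box: min and max corners in 3D (x, y, z).
    lo: tuple
    hi: tuple

def boxes_clip(a: Box, b: Box) -> bool:
    """Return True if the two boxes intersect (i.e. the objects 'clip')."""
    return all(a.lo[i] < b.hi[i] and b.lo[i] < a.hi[i] for i in range(3))

# A fork resting beside a bowl (no overlap) versus passing through it (overlap).
bowl = Box(lo=(0.0, 0.0, 0.0), hi=(0.2, 0.2, 0.1))
fork_beside = Box(lo=(0.25, 0.0, 0.0), hi=(0.45, 0.03, 0.02))
fork_through = Box(lo=(0.1, 0.1, 0.02), hi=(0.3, 0.13, 0.04))

print(boxes_clip(bowl, fork_beside))   # False: physically plausible
print(boxes_clip(bowl, fork_through))  # True: the fork passes through the bowl
```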
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). MCTS is the same search method the AI program AlphaGo used to beat human opponents in Go (a game similar to chess): the system considers potential sequences of moves before choosing the most advantageous one.
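As a rough illustration of treating scene building as a sequential decision process, the toy sketch below uses flat Monte Carlo rollouts (a much-simplified relative of the full MCTS described above) to decide which object to add next. The object menu, capacity budget, and object-count objective are invented stand-ins for the paper's diffusion model and objectives.

```python
# Toy sketch (assumptions, not the paper's implementation): build a scene one
# object at a time, scoring each candidate addition by random rollouts.
import random

OBJECTS = {"plate": 2, "bowl": 3, "dim_sum_stack": 5, "teacup": 1}
CAPACITY = 20  # abstract "space" available on the table

def legal_actions(scene):
    used = sum(OBJECTS[o] for o in scene)
    return [o for o, size in OBJECTS.items() if used + size <= CAPACITY]

def rollout(scene):
    """Finish a partial scene with random legal additions; return object count."""
    scene = list(scene)
    while (actions := legal_actions(scene)):
        scene.append(random.choice(actions))
    return len(scene)

def monte_carlo_step(scene, n_rollouts=200):
    """Score each candidate addition by its average rollout value; return the best."""
    best, best_value = None, -1.0
    for action in legal_actions(scene):
        value = sum(rollout(scene + [action]) for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best, best_value = action, value
    return best

scene = []
while (choice := monte_carlo_step(scene)) is not None:
    scene.append(choice)
print(scene, len(scene), "objects placed")
```

A full MCTS would also maintain a search tree and reuse statistics across steps; the flat version here only scores each immediate candidate by random rollouts.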
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
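The sketch below only illustrates the shape of that second stage, under stated assumptions: a hypothetical generate_scene() stands in for the pre-trained diffusion model, a reward function scores each sample against a desired outcome, and the rewards become weights that a fine-tuning update would favor. It is not the researchers' training code.

```python
# Minimal sketch (assumptions only): the reward signal for a second,
# reinforcement-learning-style training stage.
import math
import random

def generate_scene():
    """Hypothetical stand-in for the pre-trained model: a 'scene' is just counts."""
    return {"num_objects": random.randint(3, 34), "clipping_pairs": random.randint(0, 4)}

def reward(scene, target_objects=30):
    """Desired outcome: crowded scenes with no interpenetrating geometry."""
    crowding = -abs(scene["num_objects"] - target_objects)
    penalty = -5.0 * scene["clipping_pairs"]
    return crowding + penalty

samples = [generate_scene() for _ in range(8)]
rewards = [reward(s) for s in samples]

# Softmax the rewards into weights: a fine-tuning stage would push the model
# toward the high-weight samples (a reward-weighted update, greatly simplified).
exps = [math.exp(r / 5.0) for r in rewards]
weights = [e / sum(exps) for e in exps]
for s, r, w in zip(samples, rewards, weights):
    print(f"{s['num_objects']:2d} objects, {s['clipping_pairs']} clips  reward={r:6.1f}  weight={w:.2f}")
```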
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
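A trivial way to picture that “filling in the blank” behavior, with invented names and no connection to the actual system: objects already placed are frozen, and only the empty slots receive new items.

```python
# Toy sketch (not the paper's API): complete a partial shelf while preserving
# everything that is already there. None marks an empty slot.
import random

shelf = ["board_game", None, "book", None, None]
candidates = ["book", "board_game", "puzzle_box"]

completed = [slot if slot is not None else random.choice(candidates) for slot in shelf]
print(completed)  # existing items untouched, empty slots filled in
```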
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the real-world, adaptable robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects, drawing on a library of objects and scenes pulled from images on the internet and on their previous work, “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that will create lots of data, which could then serve as a massive dataset for teaching dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, who is also a senior vice president of large behavior models at the Toyota Research Institute and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and senior research scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
Tech
NeoStampa update brings new ink & calibration tools

NeoStampa 25.7 is now live, featuring support for fluorescent orange and green inks, enhanced droplet control with DRD, and a smarter ink display interface.
The update boosts calibration accuracy for the X-Rite i1 Pro 3 Plus, adds FullWidthRip-compatible previews, speeds up PDF rendering, supports Cobra registration marks, and optimises source images via .xjb files.
Among the headline additions is support for fluorescent orange and green inks, expanding the system’s colour gamut and enabling more vibrant, high-impact prints. The smarter ink display interface now automatically hides fill ink ratios when dilution inks are not present, offering a cleaner and more focused user experience.
Another key enhancement is advanced droplet control—the new DRD (Dynamic Resolution Droplet) calculation feature allows users to disable intermediate droplet sizes, offering more flexibility, neoStampa said in a release.
For users working with X-Rite devices, the update introduces new 51-patch linearisation targets for i1 Pro 3 Plus, allowing for improved calibration accuracy with additional linearisation target support. Preview functionality has also been enhanced with FullWidthRip-compatible previews, ensuring job simulations more accurately reflect final output settings.
Workflow speed gets a boost with faster PDF rendering tailored to specific use cases, while users can now generate and export Cobra registration marks for simplified alignment and registration during print setup.
Lastly, a new .xjb optimisation feature has been introduced, allowing source images to be streamlined within the Print Server—increasing overall efficiency.
Fibre2Fashion News Desk (HU)
Tech
Would you watch a film with an AI actor? What Tilly Norwood tells us about art—and labor rights

Tilly Norwood officially launched her acting career this month at the Zurich Film Festival.
She first appeared in the short film AI Commissioner, released in July. Her producer, Eline Van der Velden, claims Norwood has already attracted the attention of multiple agents.
But Norwood was generated with artificial intelligence (AI). The AI “actor” was created by Xicoia, the AI branch of the production company Particle6, founded by the Dutch actor-turned-producer Van der Velden. And AI Commissioner is an AI-generated short film, written by ChatGPT.
A post about the film’s launch on Norwood’s Facebook page read,
“I may be AI-generated, but I’m feeling very real emotions right now. I am so excited for what’s coming next!”
The reception from the industry has been far from warm. Actors—and audiences—have come out in force against Norwood.
So is this the future of film, or is it a gimmick?
‘Tilly Norwood is not an actor’
Norwood’s existence introduces a new type of technology to Hollywood. Unlike CGI (computer-generated imagery), where a performer’s movements are captured and transformed into a digital character, or animation, which is voiced by a human actor, Norwood has no human behind her performance. Every expression and line delivery is generated by AI.
Norwood has been trained on the performances of hundreds of actors, without any payment or consent, and draws on the information from all those performances in every expression and line delivery.
Her arrival comes less than two years after the actors’ and writers’ strikes that brought Hollywood to a standstill, with AI a central issue in the disputes. The actors’ strike ended with a historic agreement placing limitations around digital replicas of actors’ faces and voices, but it did not completely ban “synthetic fakes.”
SAG-AFTRA, the union representing actors in the United States, has said:
“To be clear, ‘Tilly Norwood’ is not an actor; it’s a character generated by a computer program that was trained on the work of countless professional performers—without permission or compensation.”
Additionally, real actors can set boundaries and are protected by agents, unions and intimacy coordinators who negotiate what is shown on screen.
Norwood can be made to perform anything in any context—becoming a vessel for whatever creators or producers choose to depict.
This absence of consent or control opens a dangerous pathway to how the (digitally reproduced) female body may be represented on screen, both in mainstream cinema, and in pornography.
Is it art?
We consider creativity to be a human quality. Art is generally understood as an expression of human experience. Norwood’s performances do not come from such creativity or human experience, but from a database of pre-existing performances.
All artists borrow from and are influenced by predecessors and contemporaries. But that human influence is limited by time, informed by our own experiences and shaped by our unique perspective.
AI has no such limits: just look at Google DeepMind’s AlphaZero, which learned chess by playing millions of games against itself, more than any human could play in a lifetime.
Norwood’s training can absorb hundreds of performances in a way no single actor could. How can that be compared to an actor’s performance—a craft they have developed throughout their training and career?
Van der Velden argues Norwood is “a new tool” for creators. Earlier tools, such as the paintbrush or the typewriter, helped facilitate or extend the creativity of painting or writing.
Here, Norwood as the tool performs the creative act itself. The AI is the tool and the artist.
Will audiences accept AI actors?
Norwood’s survival depends not on industry hype but on audience reception.
So far, humans show a negative bias against AI-generated art. Studies across art forms have shown people prefer works when told they were created by humans, even if the output is identical.
We don’t know yet if that bias could fade. A younger generation raised on streaming may be less concerned with whether an actor is “real” and more with immediate access, affordability or how quickly they can consume the content.
If audiences do accept AI actors, the consequences go beyond taste. There would be profound effects on labor. Entry- and mid-level acting jobs could vanish. AI actors could shrink the demand for whole creative teams—from make-up and costume to lighting and set design—since their presence reduces the need for on-set artistry.
Economics could prove decisive. For studios, AI actors are cheaper, more controllable and free from human needs or unions. Even if audiences are ambivalent, financial pressures could steer production companies toward AI.
The bigger picture
Tilly Norwood is not a question of the future of Hollywood. She is a cultural stress-test—a case study in how much we value human creativity.
What do we want art to be? Is it about efficiency, or human expression? If we accept synthetic actors, what stops us from replacing other creative labor—writers, musicians, designers—with AI trained on their work, but with no consent or remuneration?
We are at a crossroads. Do we regulate the use of AI in the arts, resist it, or embrace it?
Resistance may not be realistic. AI is here, and some audiences will accept it. The risk is that in choosing imitation over human artistry, we reshape culture in ways that cannot be easily reversed.
This article is republished from The Conversation under a Creative Commons license. Read the original article.