Tech
Tide’s Evo Tiles Are a Fresh, Overengineered Take on the Tide Pod
Laundry is a $100 billion business. It can also be a real time suck, what with all the washing, drying, and folding. Detergent company Tide has found great success with its Pods, which let you pop detergent straight into a washing machine without having to measure and pour liquid or powder. Now comes the next evolution: Tide Evo Tiles, an exhaustively engineered single-use detergent in the form of a dry, fibrous tile that dissolves in cold water. It looks a lot less tasty than the bright, colorful Tide Pods, so hopefully fewer people will try to eat this one.
Tide Evo Tiles have been in product development for over a decade. After spending a year in test markets, Tide and its parent company, Procter & Gamble, announced last week that Evo Tiles are now rolling out more broadly across the US. Prices range from $5 to $20 per box, depending on the retailer, which works out to roughly 50 cents per tile.
“This is really a feat of engineering,” says Marcello Puddu, senior director of research and development at Tide. “There is a lot of very complicated engineering and formulation work that has gone to create that one single sleek tile that looks relatively simple.”
The primary hope for Tide Evo is simplicity. Single-use detergent pods are lauded for being more accessible to people who may struggle with the motor skills required to pour liquid soap or powders. Evo Tiles have a small ridge around the edges that makes them easier to pull out of the box. Deploying them is easy—just plop them (one tile for regular loads, two for heavy) into the washer as close to where the water comes out as possible, then toss the fabrics on top.
After the tile breaks apart, the ingredients work together to create a very high pH level in the water that cleans the fabrics. (Because of the high pH, Tide Evo does not use lipase, a fat-digesting enzyme that helps break down greasy stains and is a popular ingredient in other detergents.)
Evo Tiles look like white, diamond-shaped Uncrustables. Instead of a Tide Pod’s colorful liquid pouches, these tiles are made of dry layers of interwoven detergent fibers—about 10,000 of them, which Tide says is enough to stretch for 15 miles, if you were inclined to do such a thing. The result is a looping, webbed lattice of tiny fibers, woven together into six layers that stay in place while on the shelf but break down quickly when they get wet, allowing separate releases of stain and odor fighters, brighteners, and fresheners.
“The structure of an assembled product allows us to do that, because we can separate things that don’t like to be together,” Puddu says. “We can put an enzyme between two layers so the two don’t attack each other. You can’t really do that as easily in other matrices.”
The goal is to combine the benefits of Tide Pods and laundry sheets and make something that packs in enough detergent to sufficiently clean a load of wash while also being lightweight and able to dissolve quickly. And, as Tide is eager to point out, it also makes things more eco-friendly.
Tide Evo tiles are specifically designed to dissolve in cold water, the idea being that washing fabrics without having to heat up water helps save energy. Packaging is also part of Tide’s ecological efforts. Unlike the plastic boxes Pods tend to come in, Tide Evo tiles are packaged in a recyclable cardboard box that is certified by the Forest Stewardship Council.
Still, Tide Evo does use polyvinyl alcohol (PVA) plastics to help the fibrous structure hold together. These are the same kind of plastics used to form the casing around Tide Pods. PVA plastics have been the subject of much debate about whether the polymers used in detergent casing can create microplastics when dissolved. They likely do not, but the products are still created within the broader plastics ecosystem and can lead to clogging of waterways if not treated properly.
Tech
Human-machine teaming dives underwater
The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?
Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.
“Divers and AUVs generally don’t team at all underwater,” says principal investigator Madeline Miller. “Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can’t do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren’t agile enough.”
Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can’t perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.
As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.
“Ultimately, we want to devise solutions for navigation and perception in expeditionary environments,” Miller says. “For the missions we’re thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example.”
On the navigation side, Miller’s team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard’s group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller’s team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.
“We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents,” Miller explains. “With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly.”
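The optimization Miller describes can be pictured as a toy smoothing problem: jointly estimate both trajectories from each platform's dead-reckoned motion plus periodic acoustic ranges between them. The sketch below is purely illustrative (invented trajectories, noise levels, and variable names, not the team's actual formulation), but it shows why unmodeled current drift in the odometry makes the problem harder.

```python
# Toy range-aided smoothing: estimate diver and AUV 2D tracks from noisy
# dead reckoning plus inter-platform acoustic range measurements.
# All numbers here are illustrative assumptions, not the real system's.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
T = 20  # time steps

# Ground-truth tracks: diver swims east; AUV runs a parallel offset track.
diver_true = np.cumsum(np.tile([1.0, 0.0], (T, 1)), axis=0)
auv_true = np.cumsum(np.tile([1.0, 0.3], (T, 1)), axis=0) + np.array([0.0, 5.0])

# Dead-reckoned step estimates, corrupted by noise (e.g., unmodeled current).
diver_odo = np.diff(diver_true, axis=0) + rng.normal(0, 0.3, (T - 1, 2))
auv_odo = np.diff(auv_true, axis=0) + rng.normal(0, 0.05, (T - 1, 2))

# Acoustic range between the two platforms at each step.
ranges = np.linalg.norm(diver_true - auv_true, axis=1) + rng.normal(0, 0.2, T)

def residuals(x):
    d = x[:2 * T].reshape(T, 2)   # diver positions
    a = x[2 * T:].reshape(T, 2)   # AUV positions
    return np.concatenate([
        d[0] - diver_true[0],                       # known deployment points
        a[0] - auv_true[0],
        (np.diff(d, axis=0) - diver_odo).ravel(),   # odometry constraints
        (np.diff(a, axis=0) - auv_odo).ravel(),
        np.linalg.norm(d - a, axis=1) - ranges,     # range constraints
    ])

# Initialize from pure dead reckoning, then solve the joint least squares.
d0 = np.vstack([diver_true[:1], diver_true[0] + np.cumsum(diver_odo, axis=0)])
a0 = np.vstack([auv_true[:1], auv_true[0] + np.cumsum(auv_odo, axis=0)])
sol = least_squares(residuals, np.concatenate([d0.ravel(), a0.ravel()]))
est_diver = sol.x[:2 * T].reshape(T, 2)
```

Each range measurement only constrains relative distance, not bearing, which is why the odometry terms carry so much weight; once currents inflate the odometry noise, the solver needs richer sensing on the diver, exactly as Miller found.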
On the perception side, Miller’s team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.
“The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, ‘I think this is a tire, but I’m not sure. What do you think?’ Then, the diver can respond, ‘Yes, you’ve got it right,’ or ‘No, look over here in the image to improve your classification,’” Miller says.
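The gating logic behind that exchange is simple to sketch: only detections below a confidence threshold get forwarded to the diver. The code below is an illustrative mock-up (the `Detection` type, threshold, and labels are invented for the example), not the team's classifier.

```python
# Uncertainty-gated human-in-the-loop triage: confident detections are
# accepted automatically; uncertain ones become queries for the diver.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    bbox: tuple  # (x, y, w, h) in image coordinates

def triage(detections, threshold=0.85):
    """Split detections into auto-accepted labels and queries for the diver."""
    accepted, queries = [], []
    for d in detections:
        (accepted if d.confidence >= threshold else queries).append(d)
    return accepted, queries

dets = [Detection("tire", 0.95, (10, 10, 40, 40)),
        Detection("tire", 0.55, (80, 30, 50, 45))]
auto, ask = triage(dets)
# Only the low-confidence detection would be sent over the acoustic link.
```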
This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information into a minimum amount to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they’re using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.
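A quick back-of-envelope calculation makes the "tens of minutes" figure concrete. Assuming a modest 1-megapixel, 8-bit grayscale image and an acoustic link of a few kilobits per second (both numbers are illustrative assumptions, not from the article):

```python
# Rough acoustic link budget: time to send an uncompressed image.
image_bits = 1_000_000 * 8       # 1 MP x 8 bits/pixel, uncompressed (assumed)
link_rate_bps = 5_000            # assumed acoustic modem data rate, bits/s
minutes = image_bits / link_rate_bps / 60
print(f"{minutes:.1f} minutes")  # roughly 27 minutes for one image
```

Even an order-of-magnitude faster link leaves minutes per image, which is why compressing the exchange down to a bounding box and a label matters so much.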
Miller’s team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire’s (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.
“The UNH boats are well-equipped and can access realistic ocean conditions. But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together,” Miller says.
Last summer, the team started testing equipment with human divers at Michigan Technological University’s Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team’s tube-shaped prototype tablet, dubbed a “tube-let.” The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.
“A challenge during testing was coordinating the motion of the diver and vehicle, because they don’t yet collaborate,” Miller says. “Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don’t collide.”
The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory’s Advanced Undersea Systems and Technology Group and Leonard’s research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn’t exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.
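One generic recipe for this kind of optical-to-sonar transfer is pseudo-labeling: run a trained optical classifier on co-registered optical/sonar pairs, keep its unambiguous predictions as labels, and train the sonar model on them. The sketch below illustrates that general idea on synthetic 2D features with toy nearest-centroid classifiers; it is not Keenan's method, and every number in it is invented.

```python
# Cross-modal pseudo-labeling sketch: an "optical" classifier labels
# co-registered data so a "sonar" classifier can be trained without
# human-labeled sonar examples. Synthetic data, toy models.
import numpy as np

rng = np.random.default_rng(1)
n = 400
labels = rng.integers(0, 2, n)                            # true object classes
optical = labels[:, None] + rng.normal(0, 0.3, (n, 2))    # clean optical features
sonar = labels[:, None] + rng.normal(0, 0.6, (n, 2))      # noisier sonar features

def predict(x, centroids):
    """Nearest-centroid classification; also return distances."""
    d = np.linalg.norm(x[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1), d

# "Trained optical classifier": centroids fit on a labeled optical split.
cents_opt = np.array([optical[:200][labels[:200] == c].mean(axis=0) for c in (0, 1)])

# Pseudo-label the unlabeled co-registered pairs, keeping only confident
# predictions (clear margin between the two centroid distances)...
pseudo, d = predict(optical[200:], cents_opt)
keep = np.abs(d[:, 0] - d[:, 1]) > 0.5

# ...then fit the sonar classifier on those pseudo-labeled sonar features.
cents_sonar = np.array([sonar[200:][keep][pseudo[keep] == c].mean(axis=0)
                        for c in (0, 1)])
acc = (predict(sonar[200:], cents_sonar)[0] == labels[200:]).mean()
```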
With the internally funded research program coming to an end, Miller’s team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.
“The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities,” Miller says.
Tech
Bremont Is Sending a Watch to the Moon’s Surface
A multifaceted decahedral black ceramic bezel and sandwich-style three-piece case—a reworking of Bremont’s signature Trip-Tick construction—house a chronometer-rated automatic chronograph movement made by Sellita, with a 62-hour power reserve.
The watch will be a passenger aboard the FLIP rover, due to launch as part of Astrobotic’s Griffin Mission One (Griffin-1), expected to land at the lunar south pole at some point in the second half of this year.
It’s a one-way mission: The rover will remain permanently on the lunar surface, with the watch ticking away as it roams the landscape. FLIP’s objectives include reaching elevated positions on the lunar terrain, gathering data on lunar dust accumulation, testing dust-mitigation coatings, and surviving a two-week lunar night in hibernation (which would be a first for a US rover).
In terms of serious timekeeping data for Bremont, the mission is frankly symbolic. The watch will be positioned vertically in a specially designed housing within the FLIP’s chassis, between its front wheels. Only the watch head, weighing 107 grams, is included, glued in place using a specialist composite, its face visible to FLIP’s HD cameras. But the hibernation periods will mean the watch (whose mechanical movement is driven in normal circumstances by the motion of the wearer’s arm) will stop running once its 62-hour power reserve runs down.
When the FLIP is on the move again, its motion should, in theory, jolt the mechanism into action once more. Even under gravity just one-sixth of Earth's, the accelerations, pitches, and tilts of the rover should swing the winding rotor, if with less torque and efficiency than on Earth.
“My guess is that the watch will function from time to time, but for short periods,” Cerrato says. “We will learn along the way. But that’s what is exciting—it projects us into a thinking process that is absolutely out of the box. Just the fact of having it there is inspiring.” However, there is little doubt that Bremont will, just like other brands with any ties to the cosmos, mine its new space connection for all it is worth.
FLIP itself, which weighs just 1,058 pounds and carries a mix of commercial and government payloads, four HD cameras, and a deployable solar array, is fundamentally a technology demonstrator for Flexible Logistics and Exploration (FLEX), Astrolab’s much larger SUV-sized rover destined to support NASA’s Artemis program. Astrolab developed the FLIP from scratch after VIPER, the NASA rover the Griffin-1 mission was originally contracted to carry, was put on pause in 2024, leaving Astrobotic seeking a stand-in in short order. Astrolab, which signed the contract within a month of hearing about the opportunity in the fall of 2024, took the FLIP from blank sheet to finished rover in roughly a year.
Its standout feature is its hyper-deformable wheels, minutely structured from silicone, composite, and stainless steel, which create a soft, enlarged contact surface with the terrain. “It’s like if you’re off-roading in a Jeep or Land Rover where you let some air out of the tires to go softer and spread the load over a larger area,” explains Astrolab’s founder, Jaret Matthews. The moon’s nighttime temperatures of around -200 degrees Celsius (around -328 Fahrenheit) would cause conventional rubber tires to become glass-like and shatter; Astrolab’s wheels avoid that failure mode while also keeping the rover from sinking into the unconsolidated lunar dust, or regolith, that covers the environment.
Tech
Novo Nordisk partners with OpenAI to AI-power drug development | Computer Weekly
Danish pharmaceutical company Novo Nordisk has partnered with OpenAI to support drug research and development. Through the partnership, Novo Nordisk said it plans to deploy advanced artificial intelligence (AI) capabilities to analyse complex datasets, identify promising drug candidates and reduce the time required to move from research to patient.
The company said its use of AI has been structured with strict data protection, governance and human oversight to ensure ethical and compliant use. This latest partnership is being positioned as a key part of the company’s strategy to use AI to transform healthcare and enable it to bring new and better treatment options to patients faster.
In 2024, a break-out session run during its Capital Markets Day presented Novo Nordisk’s strategy, discussing how it uses data science and AI and its future plans. The presentation showed that the company set up an AI centre of excellence in 2021, and had begun ramping up investment in high-performance computing and graphics processing units (GPUs) by 2023. The company said it has deployed a data pool called FounData, where all data from completed clinical trials are pooled and prepared for insights generation.
It has also deployed NovoScribe, an AI-powered platform built using MongoDB Atlas Vector Search, Amazon Bedrock and LangChain to automate and accelerate the creation of clinical study reports. Novo Nordisk said NovoScribe reduces the time to regulatory submissions.
At the time, the company said external partnerships and collaborations would continue to play an important role in reaching its AI ambitions.
Earlier this year, Christos Nicolaou, a senior scientific director at Novo Nordisk, posted on LinkedIn that the company has now joined Ligand-AI, a new project funded by the EU public-private partnership, Innovative Health Initiative (IHI).
In the post, he said the project’s goal is to generate high quality, large, open datasets of protein-ligand interactions for thousands of proteins. “In the spirit of open science collaboration, these datasets will be shared and used to implement models and methods to improve AI-driven drug discovery,” he said.
This latest partnership with OpenAI builds on technology partnerships it has with AWS, Microsoft, Google and Hugging Face, as well as its existing collaboration with OpenAI.
“This partnership is one important step in positioning Novo Nordisk to lead in the next era of healthcare,” said Mike Doustdar, president and CEO of Novo Nordisk. “There are millions of people living with obesity and diabetes who need treatment options, and we know there are therapies still waiting to be discovered that could change their lives.
“Integrating AI in our everyday work gives us the ability to analyse datasets at a scale that was previously impossible, identify patterns we could not see, and test hypotheses faster than ever. This means discovering new therapies and bringing them to market faster than ever before.”
OpenAI said it would be assisting Novo Nordisk in upskilling the company’s global workforce and enhancing AI literacy. Through the partnership, OpenAI’s capabilities will also be used to improve efficiency in manufacturing, supply chain and distribution, and corporate operations. The company is starting pilot programmes across research and development, manufacturing, and commercial operations, with full integration by the end of 2026.