Report on social media age assurance trial says there is not a one-size-fits-all solution
Australia’s government trial has found age assurance for its under-16 social media ban can be done effectively while protecting privacy, but there is no one-size-fits-all model.
The report, from an independent company and released in full, also warns continued vigilance is needed on privacy and other issues.
It found some providers, in the absence of guidance, were collecting too much data, over-anticipating what regulators would require.
The ban on under-16s having their own social media accounts has been passed by parliament and comes into effect in December. It covers a wide range of platforms, including Facebook, Instagram, TikTok, X, and YouTube (which was recently added).
The measure is world-leading and has been highly controversial. One issue has been how reliable age verification is likely to be.
The trial looked at various age assurance methods, including AI, facial analysis, parental consent and identity documents. The methods were judged on accuracy, usability and privacy grounds.
More than 60 technologies were examined from 48 age assurance vendors.
The report concluded age assurance systems “can be private, robust and effective.” Moreover, there was “a plethora” of choices available for providers, and no substantial technological limitations.
“But we did not find a single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments.” Instead, there was “a rich and rapidly evolving range of services which can be tailored and effective depending on each specified context of use.”
The age assurance service sector was “vibrant, creative and innovative,” according to the report, with “a pipeline of new technologies.”
It had a robust understanding of the handling of personal information and a strong commitment to privacy.
But the trial found opportunities for technological improvements, including ease of use.
On parental control systems, the trial found these could be effective.
“But they serve different purposes. Parental control systems are pre-configured and ongoing but they may fail to adapt to the evolving capacities of children including potential risks to their digital privacy as they grow and mature, particularly through adolescence.
“Parental consent mechanisms prompt active engagement between children and their parents at key decision points, potentially supporting informed access.”
The trial found that while the assurance systems were generally secure, the rapidly evolving threat environment meant they could not be considered infallible.
They needed continual monitoring, improvement and attention to compliance with privacy requirements.
Also, “We found some concerning evidence that in the absence of specific guidance, service providers were apparently over-anticipating the eventual needs of regulators about providing personal information for future investigations.
“Some providers were found to be building tools to enable regulators, law enforcement or coroners to retrace the actions taken by individuals to verify their age which could lead to increased risk of privacy breaches, due to unnecessary and disproportionate collection and retention of data.”
Communications Minister Anika Wells said, “While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and importantly that user privacy can be safeguarded.”
This article is republished from The Conversation under a Creative Commons license. Read the original article.
BMW Is Betting Big on the New iX3. The Good News Is It’s Superb
BMW’s first car on its new EV platform has finally arrived. But will a big range, thumping charging tech, and a new driving brain that aims to deliver the ultimate ride be enough to beat China?
MIT engineers design an aerial microrobot that can fly as fast as a bumblebee
In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can’t reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.
So far, aerial microrobots have only been able to fly slowly along smooth trajectories, far from the swift, agile flight of real insects — until now.
MIT researchers have demonstrated aerial microrobots that can fly with speed and agility comparable to those of their biological counterparts. A collaborative team designed a new AI-based controller for the robotic bug that enabled it to follow gymnastic flight paths, such as executing continuous body flips.
With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 percent and 250 percent, respectively, compared to the researchers’ best previous demonstrations.
The speedy robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.
Credit: Courtesy of the Soft and Micro Robotics Laboratory
“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate. Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the robot.
Chen is joined on the paper by co-lead authors Yi-Hsuan Hsiao, an MIT graduate student in EECS; Andrea Tagliabue PhD ’24; and Owen Matteson, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro); as well as EECS graduate student Suhan Kim; Tong Zhao MEng ’23; and co-senior author Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.
An AI controller
Chen’s group has been building robotic insects for more than five years.
They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version utilizes larger flapping wings that enable more agile movements, powered by a set of squishy artificial muscles that beat the wings at an extremely fast rate.
But the controller — the “brain” of the robot that determines its position and tells it where to fly — was hand-tuned by a human, limiting the robot’s performance.
For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimizations quickly.
Such a controller would be too computationally intensive to be deployed in real time, especially with the complicated aerodynamics of the lightweight robot.
To overcome this challenge, Chen’s group joined forces with How’s team and, together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid maneuvers, and the computational efficiency needed for real-time deployment.
“The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilize them,” How says.
For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behavior of the robot and plan the optimal series of actions to safely follow a trajectory.
While computationally intensive, it can plan challenging maneuvers like aerial somersaults, rapid turns, and aggressive body tilting. This high-performance planner is also designed to consider constraints on the force and torque the robot could apply, which is essential for avoiding collisions.
For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again.
“If small errors creep in, and you try to repeat that flip 10 times with those small errors, the robot will just crash. We need to have robust flight control,” How says.
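To make the idea concrete, here is a minimal sketch of receding-horizon model-predictive control on a toy one-dimensional double integrator (state = position and velocity), standing in for the far more complex flapping-wing dynamics in the paper. The dynamics, cost weights, and actuator bound are illustrative assumptions, not the authors’ model.

```python
# A minimal sketch of receding-horizon MPC on a toy 1-D double
# integrator. All constants here are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

DT = 0.02        # control period in seconds (assumed)
HORIZON = 20     # lookahead steps the planner optimizes over
U_MAX = 2.0      # control bound, mimicking force/torque limits

def rollout(x0, u_seq):
    """Propagate the model forward: pos += vel*dt, vel += u*dt."""
    x, xs = np.asarray(x0, dtype=float), []
    for u in u_seq:
        x = x + DT * np.array([x[1], u])
        xs.append(x.copy())
    return np.array(xs)

def mpc_step(x0, target):
    """Plan HORIZON actions that track `target`; return only the first."""
    def cost(u_seq):
        xs = rollout(x0, u_seq)
        # Tracking error plus a small control-effort penalty.
        return np.sum((xs[:, 0] - target) ** 2) + 1e-2 * np.sum(u_seq ** 2)
    res = minimize(cost, np.zeros(HORIZON),
                   bounds=[(-U_MAX, U_MAX)] * HORIZON)
    return res.x[0]  # receding horizon: apply one action, then replan

# Closed-loop use: replan from the measured state at every tick.
x = np.array([0.0, 0.0])
for _ in range(100):
    x = x + DT * np.array([x[1], mpc_step(x, target=1.0)])
print(f"position after 2 s: {x[0]:.3f}")
```

Replanning from the latest state at every tick is what gives MPC its robustness to disturbances, but it also means solving an optimization inside every control step, which is exactly the cost the imitation-learning step below removes.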
Through a process called imitation learning, they use this expert planner to train a “policy” based on a deep-learning model that controls the robot in real time. A policy is the robot’s decision-making engine: it tells the robot where and how to fly.
Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very fast.
The key was having a smart way to create just enough training data, which would teach the policy everything it needs to know for aggressive maneuvers.
“The robust training method is the secret sauce of this technique,” How explains.
The AI-driven policy takes robot positions as inputs and outputs control commands in real time, such as thrust force and torques.
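Continuing the toy example above (this sketch reuses mpc_step and U_MAX from the previous block), the distillation step might look like the following: sample states, label each with the slow MPC expert’s action, and fit a fast surrogate policy. A linear least-squares fit stands in here for the paper’s deep network; all of it is an assumption for illustration.

```python
# Imitation learning as policy distillation: the slow MPC expert
# provides labels, and a cheap surrogate learns to reproduce them.
import numpy as np  # mpc_step and U_MAX come from the MPC sketch above

rng = np.random.default_rng(0)

# 1. Build the training set: states paired with expert MPC actions.
states = rng.uniform(low=[-1.0, -2.0], high=[2.0, 2.0], size=(500, 2))
labels = np.array([mpc_step(s, target=1.0) for s in states])

# 2. Fit u = w @ [pos, vel, 1] by least squares.
feats = np.hstack([states, np.ones((len(states), 1))])
w, *_ = np.linalg.lstsq(feats, labels, rcond=None)

def fast_policy(x):
    """Real-time surrogate: one dot product instead of an optimization."""
    u = np.array([x[0], x[1], 1.0]) @ w
    return float(np.clip(u, -U_MAX, U_MAX))
```

The gain is purely computational: a dot product per tick is cheap enough for high-rate control loops, whereas running the optimizer online is not, which is why the distilled policy rather than the planner is what flies the robot.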
Insect-like performance
In their experiments, this two-step approach enabled the insect-scale robot to fly 447 percent faster while exhibiting a 255 percent increase in acceleration. The robot was able to complete 10 somersaults in 11 seconds, and the tiny robot never strayed more than 4 or 5 centimeters off its planned trajectory.
“This work demonstrates that soft and microrobots, traditionally limited in speed, can now leverage advanced control algorithms to achieve agility approaching that of natural insects and larger robots, opening up new opportunities for multimodal locomotion,” says Hsiao.
The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration help insects localize themselves and see clearly.
“This bio-mimicking flight behavior could help us in the future when we start putting cameras and sensors on board the robot,” Chen says.
Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work.
The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate navigation.
“For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,” says Chen.
“This work is especially impressive because these robots still perform precise flips and fast turns despite the large uncertainties that come from relatively large fabrication tolerances in small-scale manufacturing, wind gusts of more than 1 meter per second, and even its power tether wrapping around the robot as it performs repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved with this work.
“Although the controller currently runs on an external computer rather than onboard the robot, the authors demonstrate that similar, but less precise, control policies may be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points toward future insect-scale robots with agility approaching that of their biological counterparts,” she adds.
This research is funded, in part, by the National Science Foundation (NSF), the Office of Naval Research, Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.
Thursday’s Cold Moon Is the Last Supermoon of the Year. Here’s How and When to View It
A cold supermoon is on its way. On December 4, Earth’s satellite will delight us with one of the last astronomical spectacles of 2025. Not only will it be the last full moon of the year, but it’s also a cold moon—which refers to the frigid temperatures typical of this time of year—and, finally, a supermoon. Here’s how and when best to enjoy this spectacle of the year-end sky.
What Is a Supermoon?
The term supermoon refers to a full moon that occurs when our satellite is at perigee, the point at which its orbit brings it closest to our planet. (The moon’s orbit is elliptical, and its distance from Earth varies between about 407,000 km at apogee, the point of maximum distance, and about 357,000 km at perigee.)
In addition to being the third consecutive supermoon of the year, as reported by EarthSky, it will be about 357,000 km away from us, making it the second-closest full moon of the year. Consequently, it will also be the second-largest and second-brightest.
Although most of us won’t notice any difference in size compared to a normal full moon (it appears up to 8 percent larger to us), its brightness could exceed that of an ordinary full moon by 16 percent. This time, moreover, it will be 100 percent illuminated just 12 hours after its perigee.
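Those two figures follow from simple geometry: apparent diameter scales with the inverse of distance, and brightness with the inverse square. A quick back-of-the-envelope check, using the approximate mean Earth-moon distance and this perigee distance:

```python
# Sanity-check the "8 percent larger, 16 percent brighter" figures.
# Apparent diameter ~ 1/distance; brightness ~ 1/distance**2.
AVG_KM = 384_400      # mean Earth-moon distance
PERIGEE_KM = 357_000  # approximate distance at this full moon

size_gain = AVG_KM / PERIGEE_KM - 1            # ~0.077 -> ~8% larger
bright_gain = (AVG_KM / PERIGEE_KM) ** 2 - 1   # ~0.16  -> ~16% brighter
print(f"~{size_gain:.0%} larger, ~{bright_gain:.0%} brighter")
```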
The Cold Supermoon
In addition to its name, which refers to the cold temperatures of this period, December’s full moon will be the last of 12 full moons in 2025 and the highest of the year. With the winter solstice approaching on December 21, the sun is at its lowest point in the sky, so the full moon is at its highest. In other words, the cold supermoon will sit particularly high in the sky. As EarthSky points out, however, it is not the closest full moon to the December 21 solstice: while it occurs 17 days before, the first full moon of 2026 will occur on January 3, just 12 days after the solstice. That one will be the fourth and last consecutive supermoon.
How to Enjoy the Show
Although the moon may appear full both the night before and the night after, the exact moment of the full moon comes at 6:14 pm ET on Thursday, December 4. In general, moonrise is the best time to experience the so-called moon illusion, during which the moon appears larger than usual. NASA still doesn’t have a scientific explanation for why this happens, but as you might expect, the effect is greatest during a supermoon. Weather permitting, therefore, find an elevated place or a meadow with an unobstructed view of the eastern horizon and enjoy the last moon show of the year.
This story originally appeared on WIRED Italia and has been translated from Italian.