Protecting your data in the EU means protecting an independent authority | Computer Weekly

The effectiveness of the European data protection framework depends on two essential pillars: robust individual rights and the institutional independence of the authority enforcing them. This principle is laid down in Article 8 of the Charter of Fundamental Rights of the EU, which requires that compliance with data protection rules must be subject to control by an independent authority. Without such independence, the rights laid down by EU law for all citizens cannot be guaranteed.

The requirement of complete independence of the European Data Protection Supervisor (EDPS) is enshrined in Article 55 of Regulation (EU) 2018/1725, the so-called GDPR for EU institutions. The Court of Justice of the EU has clarified this principle in key rulings including C-518/07 Commission v Germany; C-614/10 Commission v Austria; and C-288/12 Commission v Hungary. These judgments establish that independence entails both freedom from influence external to the oversight authority, whether direct or indirect, and the exclusion of conflicts of interest, such as supervising matters previously dealt with, in a different institutional capacity, as a controlled entity.

Consequently, the appointment procedure must meet the highest standards of transparency, procedural robustness and integrity.

Independence of the EDPS at risk

Current developments in the EDPS selection process raise serious concerns. Under Article 53 of the GDPR for EU institutions, the EU Commission, acting as data controller, leads the pre-selection procedure and proposes a shortlist of candidates. The European Parliament and the Council then appoint the EDPS by common accord. However, concerns have been raised regarding the transparency and impartiality of this procedure.

An open letter signed by renowned academics argues that the pre-selection procedure may have been steered to favour a particular candidate who previously held management positions at the Commission, including Head of Unit for its Data Transfers Unit and senior roles in the Cabinet of the Commissioner for Justice. A formal complaint has been submitted to the European Ombudsman and an investigation is ongoing.

Further irregularities emerged during the European Parliament’s vote. The Parliament’s LIBE Committee initially voted on four shortlisted candidates, but subsequently held a second vote restricted to two. While the first vote was conducted on an individual MEP basis, the second was carried out based on political group positions. This deviation injects a level of partisanship that is incompatible with the principle of impartiality.

While these issues may not necessarily render the appointment procedure unlawful, they point to serious procedural shortcomings with potentially significant constitutional implications. What is at stake is public trust, which demands not only formal compliance with the law, but also a higher standard of integrity, impartiality and transparency.

Eligibility criteria must be clear and rigorous, screening for conflicts of interest must be systematic, and the composition of the selection panel must itself be free of political entanglements. Decisions must be published in a timely and accessible manner, enabling public scrutiny. These are constitutional imperatives grounded in the principle of good administration.

Why the EDPS’s independence matters

The EDPS oversees the processing of personal data by EU institutions and agencies, including Europol, Frontex and the EU Agency for Asylum, supervising such processing for the risks it poses to individuals’ rights and freedoms. It is not only the factual independence of the EDPS that matters, but also the perception of its autonomy by the public and civil society. This is especially relevant given the gaps in the oversight mechanisms for EU agencies.

For example, the recently adopted AI Act reinforces the EDPS’s supervisory role under Article 70(9): the EDPS is now responsible for supervising the use of AI systems by institutions such as Europol, assessing not only compliance with data protection, but also broader fundamental rights implications. Public confidence in these agencies depends on the EDPS being perceived as independent and effective in its supervisory role. A lack of perceived independence could weaken the EDPS’s ability to issue impartial opinions on Commission proposals or to scrutinise data processing practices in EU agencies.

A call to restore independence

To prevent a drift towards a unitary theory of executive power and the erosion of constitutional checks and balances, and ultimately of the foundation of rights and freedoms in the European Union, the European Parliament must ensure that the selected EDPS is completely independent. This requires excluding candidates who have held management roles in entities subject to EDPS supervision.

Reinstating the procedure of voting by individual MEPs, rather than by political groups, is strongly advisable. If necessary, the entire appointment procedure should be restarted.

The EDPS is tasked with providing formal opinions to the Commission on the impact of its legislative proposals on fundamental rights related to privacy and the protection of personal data. Its independence is vital to ensure impartial advice on, and scrutiny of, future legislative initiatives. Such proposals must be grounded in evidence, informed by in-depth and accurate impact assessments covering civil rights and societal and environmental sustainability, and include civil society consultation.

By ensuring the transparency, independence and accountability of the EDPS appointment process, the EU not only protects fundamental rights, but also reinforces the authority of the EDPS and the legitimacy of the European project. Strong data protection and privacy, democratic oversight and the rule of law are foundational commitments of the Union.

Aída Ponce Del Castillo is a senior researcher at the Foresight Unit at the European Trade Union Institute



Watch Our Livestream Replay: Back to School in the Age of AI


Everyone has a stake in how tech is shaping education today, from the tech moguls and venture capitalists who are starting “microschools” and building ed-tech tools, to policymakers who are writing bills to safeguard kids online, to teachers who are getting creative about using AI for school.

WIRED explored all this and more in our recent back-to-school digital edition, which was the topic of our subscriber-only livestream on Thursday, August 28, 2025, hosted by WIRED’s features director, Reyhan Harmanci, with writers Charley Locke and Julia Black. Watch the livestream replay below.

Check out past livestreams on the launch of GPT-5, essential features in ChatGPT, advice for getting started with Claude, and more.



Robot regret: New research helps robots make safer decisions around humans


From left, engineering professor Morteza Lahijanian and graduate student Karan Muvvala watch as a robotic arm completes a task using wooden blocks. Credit: Casey Cass/University of Colorado Boulder

Imagine for a moment that you’re in an auto factory. A robot and a human are working next to each other on the production line. The robot is busy rapidly assembling car doors while the human runs quality control, inspecting the doors for damage and making sure they come together as they should.

Robots and humans can make formidable teams in manufacturing, health care and numerous other industries. While the robot might be quicker and more effective at monotonous, repetitive tasks like assembling large auto parts, the person can excel at certain tasks that are more complex or require more dexterity.

But there can be a dark side to these robot-human partnerships. People are prone to making mistakes and acting unpredictably, which can create unexpected situations that robots aren’t prepared to handle. The results can be tragic.

New and emerging research could change the way robots handle the uncertainty that comes hand-in-hand with human interactions. Morteza Lahijanian, an associate professor in CU Boulder’s Ann and H.J. Smead Department of Aerospace Engineering Sciences, develops processes that let robots make safer decisions around humans while still trying to complete their tasks efficiently.

In a new study presented at the International Joint Conference on Artificial Intelligence in August 2025, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho devised new algorithms that help robots create the best possible outcomes from their actions in situations that carry some uncertainty and risk.

“How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?” Lahijanian asked.

“If you’re a robot, you have to be able to interact with others. You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?”

Similar to humans, robots have mental models that they use to make decisions. When working with a human, a robot will try to predict the person’s actions and respond accordingly. The robot is optimized for completing a task—assembling an auto part, for example—but ideally, it will also take other factors into consideration.

In the new study, the research team drew upon game theory, a mathematical concept that originated in economics, to develop the new decision-making algorithms for robots. Game theory analyzes how companies, governments and individuals make decisions in a system where other “players” are also making choices that affect the ultimate outcome.

In robotics, game theory conceptualizes a robot as one of numerous players in a game that it’s trying to win. For a robot, “winning” is completing a task successfully. But winning is never guaranteed when there’s a human in the mix, and keeping the human safe is also a top priority.

So instead of trying to guarantee a robot will always win, the researchers proposed the concept of a robot finding an “admissible strategy.” Using such a strategy, a robot will accomplish as much of its task as possible while also minimizing any harm, including to a human.

“In choosing a strategy, you don’t want the robot to seem very adversarial,” said Lahijanian. “In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won’t regret.”
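To make the regret idea concrete, here is a minimal, hypothetical sketch of minimax-regret action selection in a two-player matrix game. It is not the algorithm from the paper; all action names and payoff numbers are invented for illustration. It only shows the general principle of preferring the action whose worst-case regret, over all possible human responses, is smallest.

```python
# Hypothetical illustration of minimax-regret action selection.
# utility[robot_action][human_action] = task payoff for that outcome.
# All names and numbers are invented; this is not the paper's method.
utility = {
    "continue_assembly": {"cooperates": 10, "makes_mistake": -5},
    "fix_mistake":       {"cooperates":  6, "makes_mistake":  4},
    "move_to_safe_area": {"cooperates":  3, "makes_mistake":  3},
}
human_actions = ["cooperates", "makes_mistake"]

def max_regret(robot_action: str) -> int:
    """Worst-case regret of an action over all human responses.

    For each human response h, regret is the gap between the best
    utility any robot action achieves under h and the utility this
    action achieves under h.
    """
    regrets = []
    for h in human_actions:
        best_under_h = max(utility[a][h] for a in utility)
        regrets.append(best_under_h - utility[robot_action][h])
    return max(regrets)

# Choose the action with the smallest worst-case regret.
chosen = min(utility, key=max_regret)
print(chosen)  # -> "fix_mistake": hedges against human error without
               #    giving up too much progress if the human cooperates
```

In this toy payoff table, pressing on with assembly pays off most when the human cooperates but is heavily penalized by a mistake, while fixing mistakes sacrifices a little progress in exchange for never being far from the best response. That is the kind of “soft”, non-adversarial behavior the researchers describe.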

Let’s go back to the auto factory where the robot and human are working side by side. If the person makes mistakes or is not cooperative, a robot using the researchers’ algorithms can take matters into its own hands: it will first try to fix the mistakes without endangering the person, and if that doesn’t work, it could, for example, pick up what it’s working on and take it to a safer area to finish its task.

Much like a chess champion who thinks several turns ahead about an opponent’s possible moves, a robot will try to anticipate what a person will do and stay several steps ahead of them, Lahijanian said.

But the goal is not to attempt the impossible and perfectly predict a person’s actions. Instead, the goal is to create robots that put people’s safety first.

“If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don’t want humans to adjust themselves to the robot,” he said. “You can have a human who is a novice and doesn’t know what they’re doing, or you can have a human who is an expert. But as a robot, you don’t know which kind of human you’re going to get. So you need to have a strategy for all possible cases.”

And when robots can work safely alongside humans, they can enhance people’s lives and provide real and tangible benefits to society.

As more industries embrace robots and AI, there are many lingering questions about what AI will ultimately be capable of doing, whether it will be able to take over the jobs that people have historically done, and what that could mean for humanity. But there are upsides to robots being able to take on certain types of jobs. They could work in fields with labor shortages, such as caregiving for older populations, and in physically challenging jobs that may take a toll on workers’ health.

Lahijanian also believes that, when they’re used correctly, robots and AI can enhance human talents and expand what we’re capable of doing.

“Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability,” he said.

“Together, they can achieve more than either could alone, safely and efficiently.”

Citation:
Robot regret: New research helps robots make safer decisions around humans (2025, August 28)
retrieved 28 August 2025
from https://techxplore.com/news/2025-08-robot-robots-safer-decisions-humans.html

Warehouse automation hasn’t made workers safer—it’s just reshuffled the risk, say researchers


Credit: Unsplash/CC0 Public Domain

Rapid advancements in robotics are changing the face of the world’s warehouses, as dangerous and physically taxing tasks are reassigned en masse from humans to machines. Automation and digitization are nothing new in the logistics sector, or in any sector heavily reliant on manual labor. Bosses prize automation because it can bring two- to four-fold gains in productivity. But workers can also benefit from the putative improvements in safety that come from shifting dangerous tasks onto non-human shoulders.

At least, that’s the story employers such as Amazon have—largely successfully—promoted to the public.

In a recent study, Brad N. Greenwood, Dean’s Distinguished Professor at the Costello College of Business at George Mason University, investigated this question: Does automation make warehouse jobs safer? His co-authors include Gordon Burtch of Boston University and Kiron Ravindran of IE University. Their findings, which appear in ILR Review, reveal that the answer depends on how safety is defined.

The researchers distinguish between two types of injuries: severe and non-severe. Severe injuries include broken bones, traumatic falls, and other incidents that cause employees to miss work. Non-severe injuries include sprains, strains, and repetitive motion problems, which often lead to reassignment or light-duty work, but not missed work.

The findings showed that robots do seem to reduce severe injuries. In robotic fulfillment centers (FCs), tasks like heavy lifting and long walks are handled by machines, reducing workers’ exposure to physical hazards. The researchers found a meaningful drop in the number of severe injuries in these facilities.

However, the overall picture is not so clear. In the same robotic warehouses, the researchers observed a sharp increase in non-severe injuries, especially during high-demand periods such as Amazon Prime Day and the winter holidays. The robotic fulfillment centers experienced a 40% decrease in severe injuries but a 77% increase in non-severe injuries compared to traditional centers.

To better understand their results, the researchers also analyzed thousands of online posts from Amazon warehouse workers.

“There was an immediate and obvious discrepancy in opinion, based on whether their fulfillment center was roboticized or not,” says Greenwood.

Humans working alongside robots described their daily experience as “not physically exhausting” and “better than working at a legacy FC.” However, they also reported being expected to meet much higher performance metrics than their counterparts in non-automated FCs—amounting to a two-to-three-times higher “pick rate” in some cases. The faster pace of the human/robot dance was accompanied by a far more repetitive work routine that induced burnout in some workers, while causing others to “zone out.”

This dual reality—robots reducing some injuries while exacerbating others—has serious implications. For employers, simply introducing automation is not enough. Without careful job design, task rotation, and realistic performance goals, the shift to robotics can create new health and safety risks.

“Companies have bottom-line reasons to take this issue seriously. Beyond simple issues of liability, there is a cost to the firm of workers being unable to perform their duties,” says Greenwood.

Traditional safety metrics often focus on injuries that result in lost workdays. But as the nature of work changes, this approach may miss more subtle forms of harm. Chronic, repetitive injuries may not lead to time off, but they still decrease worker well-being and performance.

Looking ahead, Greenwood and his colleagues plan to explore how these trends play out over longer timeframes and in other industries. As robots become more common in fields like manufacturing, retail, and health care, similar patterns may emerge. The researchers hope their findings will help inform both corporate and public policy, ensuring that the future of work is not only more efficient but also humane.

“That isn’t to deny that warehouse robotics benefits workers,” Greenwood explains. “But we need to think more carefully about how to use them, and what that means for the humans they work with.”

More information:
Gordon Burtch et al, Lucy and the Chocolate Factory: Warehouse Robotics and Worker Safety, ILR Review (2025). DOI: 10.1177/00197939251333754

Citation:
Warehouse automation hasn’t made workers safer—it’s just reshuffled the risk, say researchers (2025, August 28)
retrieved 28 August 2025
from https://techxplore.com/news/2025-08-warehouse-automation-hasnt-workers-safer.html
