Who is really accountable for the online safety gap? | Computer Weekly

One in five people in the UK say they will only act on a message if it comes from a trusted source. That statistic, from a recent IET survey, reflects a troubling erosion of public confidence in both online safety and data protection. This decline in trust is at once deeply concerning and indicative of a broader accountability gap in how the digital world is governed.

Governments around the globe have committed to tackling online safety, and many have done so by charging regulators with drafting and enforcing new rules. These efforts are undoubtedly well-intentioned, aiming as they do to protect users – particularly children – without compromising freedom of speech, innovation, or investment.

Yet the reality is far more complex.

It is a long time since anyone truly believed the internet was just a platform for expression. Today the prevailing view is that online safety means operating in a virtual environment where privacy and security are a given. Yet at both the individual level and the level of national security, those rights are constantly at risk, and balancing these conflicting priorities is no easy task.

No global consensus over regulation

Interventionist regimes in the UK, EU, and Australia contrast sharply with more laissez-faire models in the US and parts of Asia. Even within regions, laws vary widely. The EU’s Digital Services Act, for example, overlays a patchwork of national codes, while in the US, federal child protection laws coexist with a complex array of state-level regulations. For businesses, this means navigating a maze of obligations that are often overlapping, inconsistent, and subject to change.

The result has been a flurry of regulatory activity, with no global consensus on the best approach. It means that companies must decide whether to adopt the most stringent standards globally, tailor solutions for each jurisdiction, or take a hybrid approach. In my experience, the most pragmatic path is a hub-and-spoke model: anchor compliance in rigorous jurisdictions and adapt flexibly to local requirements. However, even this approach is fraught with difficulty. Minor breaches can lead to multi-million-pound fines, reputational damage, and class action claims.

The Venn diagram of regulation

And, sadly, we don’t live in a world where different issues keep to their own boundaries. Ours is a world better pictured as a Venn diagram: the circles of regulation overlap, meaning that online safety rules often collide with other legal frameworks.

Put bluntly, complying with safety rules can create tension with a range of issues such as data privacy, intellectual property, and even emerging AI laws. Rigid controls including age verification and geo-blocking may offer a false sense of security, as users frequently find, or attempt to find, ways around them. And vague or abstract regulations can lead to legal uncertainty, making it harder for businesses to innovate or compete effectively.

Enforcement adds another layer of complexity. Regulators in some jurisdictions aim for proportionality, but this can increase compliance risks for larger firms while leaving smaller companies unsure how to calibrate their efforts. The pace of technological change only exacerbates the problem. New features and behaviours often outstrip the ability of traditional regulatory approaches to keep up, and tools designed to enhance safety—such as content scanning and automated moderation—raise fresh questions about effectiveness and accountability.

Why online safety must be shared

So, who is accountable for the online safety gap? In my view, responsibility is shared. Governments must provide clear, coherent frameworks and engage meaningfully with industry. The onus is also on businesses to invest in specialist talent, integrate compliance into strategic planning, and foster a culture of continuous improvement. That is easy to say and less easy to do – which is why individuals, too, must remain vigilant, demanding both protection and respect for their rights.

Looking ahead, I see little prospect of true global harmonisation. Laws will continue to reflect local cultures and priorities, and while international cooperation – especially on child protection – is increasing, convergence in some areas will be offset by divergence in others. The best hope lies in sharing best practices, aligning on fundamental principles, and building trust across borders.

Ultimately, online safety is not a destination but a journey. It requires adaptability, collaboration, and a willingness to confront difficult trade-offs. Only by embracing this complexity can we hope to close the gap between aspiration and reality—and begin to restore the public’s faith in the digital world.

 Hayley Brady is UK head of media and digital at Herbert Smith Freehills Kramer




WIRED Found the Most Manly Pants. And the Manliest Knife


When you need something that’s as mannishly masculinized as you can get for the Man™ in your life, we have you covered.




Kids and Teen Influencers in Australia Say ‘Bye-Bye’ to Social Media

When 15-year-old Carlee Jade Clements wakes up, her first thought is to record a Get Ready With Me video to share with her friends on TikTok. “I love recording everything and posting it the moment I have it,” says Clements, who lives in Melbourne, Australia.

Like many teenagers, Clements communicates with the world primarily through social media: Snapchat for messaging her friends, Pinterest for inspiration, TikTok for … well, everything. Unlike many teenagers, she also uses social media professionally; Clements has over 37,000 followers on Instagram, where she often posts product reviews (skin care, slime) and photos from her modeling and acting gigs.

But as of December 10, 2025, that will change. That’s when Australia’s Social Media Minimum Age regulation goes into effect, preventing Australians under 16 years old from having social media accounts. “It’s gonna be very weird and quiet and isolated,” says Clements. “I’m going to feel like I’m cut off from the world.”

Globally, people are starting to realize how social media can negatively impact adolescents. Even teenagers themselves are seeing this: Almost half of adolescents in the US claim these platforms harm people their age. Australia is the first country to take serious action. In December 2024, legislators passed the Social Media Minimum Age Bill, which will penalize tech platforms (including TikTok, Snapchat, Instagram, Facebook, X, YouTube, and Reddit) that allow under-16s to access their platforms.

In response, platforms are locking accounts and adopting age verification requirements. Some companies, including Meta, have started enforcing the rules early.

Teen content creators are taking steps, too. Zoey Bender, age 14, likes to post GRWM videos and tips: for making friends in high school, for starting seventh grade, for dealing with braces. “I love being creative about it,” says Bender, who has 58,000 followers on TikTok. “It’s my outlet.”

Her handle used to be @heyitszoey. In November, she and her dad, Mark, changed it to @_heyitszoeyandmark, with the hopes that her account won’t be deleted on December 10 because it’s now managed by an adult. She says that many other teenagers with large followings are doing the same; Clements’ mom already manages her Instagram account.

That means that once the age restrictions are in place, their professional accounts will likely still exist—although as teen and kid accounts are suspended, their engagement will likely go down, and they may lose followers, too. That would mean a decline in free products and in revenue, though it’s generally not a huge amount: Ava Jones, 12, who has 11,500 followers on Instagram, estimates that she makes $1,000-$2,000 Australian ($600-$1,300 US) per year, which she generally spends on makeup and clothes. “If that went away, I’d have to do more chores at home,” she says.




Silicon Valley Is All About the Hard Sell These Days



OpenAI CEO Sam Altman was at the center of Silicon Valley’s most visible publicity push in recent memory Monday night when he appeared on The Tonight Show. In a predictably softball interview with host Jimmy Fallon, Altman explained how ChatGPT has helped him alleviate the anxiety that comes with being a new parent.

It was a distinctly clever, if somewhat surprising, choice from Altman, who has mostly kept his personal life out of the media spotlight. But Altman is a salesman, and a good salesman understands the optics of good television. So he talked about being a dad and being worried that his son – who wasn’t crawling at six months – was developing more slowly than other children (spoiler: he’s not). “I cannot imagine having gone through, figuring out how to raise a newborn without ChatGPT,” Altman told Fallon. “People did it for a long time, no problem. So clearly it was possible, but I have relied on it so much.”

As fears around the future of AI continue to mount, the subtext was patently obvious: technology can help people better understand their kids, and we should welcome it. The timing of that particular message was no accident.

Of late, the tech establishment has gone on a charm offensive as age-verification laws sweep the US and the world, and the public backlash to AI intensifies.

Altman acknowledged as much but didn’t get into specifics during the interview. “One of the things that I’m worried about is just the rate of change that’s happening in the world right now. This is a three-year-old technology. No other technology has ever been adopted by the world this fast,” Altman said. “Making sure that we introduce this to the world in a responsible way, where people have time to adapt, to give input, to figure out how to do this—you could imagine us getting that wrong.”

Those concerns have only accelerated a concentrated campaign out of the Valley to better control the narrative – everything from TV ads to pop-ups designed to build brand awareness and explain why the virtues of AI and social media, and all they can do for people, outweigh the harms. If Silicon Valley is in its “hard tech era,” it is making an even harder sell.

The ads are everywhere you are: streaming, cable, social media. TikTok is great for dad advice. ChatGPT can teach you how to exercise properly, cook memorable dishes, or curate an unforgettable road trip. Google wants you to “ask more of your phone” with its AI features. Anthropic – which, in a September ad spot, claimed “there’s never been a better time” for AI – is even hosting pop-ups and selling merch. Meta promises to be your personal AI for, well, everything.


