
Teen hackers aren’t the problem. They’re the wake-up call | Computer Weekly


The face of cyber crime has changed. It’s no longer the cliché of a shadowy figure operating anonymously from a basement. Today, some of the most disruptive attacks are being carried out by smart, curious teenagers – often still in school and forming their identity, but already capable of breaching billion-dollar systems. The global cost of cyber crime in 2025 is expected to hit $10.5tn (£7.7tn), roughly the estimated global cost of the Covid-19 pandemic.

Recent cases have exposed this uncomfortable reality. Multi-million pound attacks on the well-known UK retailers Co-op, M&S and Harrods have been traced to individuals aged 17, 19 and 20. Scattered Spider is the most recent and most famous case, but it isn’t an isolated event – the average age of someone arrested for cyber crime is 19, compared with 34 for other crimes. Law enforcement is understandably cautious in how it talks about these cases, given the vulnerability of those involved, but the pattern is clear.

The uncomfortable truth is that these young people are not criminal masterminds. Europol found that 69% of European teens have committed a cyber crime or online misdemeanour. These are middle-of-the-curve kids, not hardened hackers emboldened by cash and years of professional experience. They are symptoms of a much broader failure to engage with the emerging reality of a generation that learns, explores and socialises online – and increasingly pushes boundaries there, too.

We need to stop treating teenage cyber crime as an isolated behavioural issue

What we’re seeing is a societal and educational blind spot, where the true failure is a lack of guidance, development and opportunity. These teenagers aren’t joining gangs on street corners – which, in their own way, at least come with a code of conduct. They’re testing code and pushing systems because they’re curious, driven, and have nowhere else to aim that energy. This natural inclination, when nurtured, is the foundation of intelligence and underpins all innovation. As Steve Jobs said: “Much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on.” We need to celebrate this curiosity rather than stifle it.

In many cases, they don’t fully understand the legal consequences. They’re experimenting – often for recognition in the Discord servers that egg them on, sometimes for the challenge itself. The system fails to spot their potential early on and only responds once they cross a line. That’s not a security strategy; it’s a failure of imagination from all involved.

On the question of ethics, every talking point comes back to young people being left to figure out the rules of an online world without the guidance or opportunities that might direct them towards something constructive. Organised crime groups have already recognised this and are actively recruiting. Networks such as The Com or 764 on Discord and Telegram have groomed children in private chat groups, coercing them into extortion, doxing, self-harm material and laundering stolen data. Young recruits often remain unaware of the full magnitude of their crimes until it’s far too late.

Every organisation that relies on digital systems – which is to say nearly all of them – now faces a growing threat landscape and a critical shortage of talent to defend against it. Globally, there are nearly five million unfilled cyber security roles. At the same time, governments, businesses and schools continue to treat cyber security as a niche subject rather than a foundational skill set. It has taken decades for schools to teach an appropriate level of technology literacy in preparation for a world where careers increasingly move online. Yet there is an entire generation of natural-born hackers.

If we took half the effort we spend reacting to youth cyber crime and redirected it towards early education, real-world challenges and career pathways in cyber, we could start converting those vulnerabilities into national assets. The UK government has taken a step in this direction with the TechFirst programme, investing £187m over four years to reach a million British children in cyber, AI and engineering.

What do we need to do to create change?

We need to meet kids where they are – on gaming platforms, watching content, and on social media – to spark a passion for cyber. Here at The Hacking Games, our AI platform, HAPTAI, does exactly that. It looks at gaming behaviours and performance, modding and psychographics, and tests aptitude for cyber skills, offering a solution for inspiring, evaluating and placing talent.

A great example of this step in the right direction is The Hacking Games’ recent community partnership with Co-op. The partnership will combine Co-op’s reach into every postcode area of the UK – its community expertise, 38 Co-op Academy schools, 20,000 students and 6.5 million members – with The Hacking Games’ extensive knowledge and expertise in cyber crime.

Having been attacked and understanding the implications, Co-op wants to help prevent cyber crime before it starts by supporting young people to put their skills to good use. By opening doors and widening access, it aims to reduce risk and offer real alternatives to those who might otherwise be led down the wrong path.

The partnership is a long-term initiative with ambitions to develop into a large-scale national movement, activated through a wide-ranging, multi-channel approach. It begins with an independent research study led by Professor Jonathan Lusthaus of the University of Oxford, a leading expert on the social dimensions of cyber crime and hacking, with the findings informing future prevention strategies.

What these stories of teen hackers really reveal is a failure to connect the dots.

If a 16-year-old manages to breach a corporation’s defences, that’s not just a lapse in cyber security; it’s an indictment of every system that failed to notice their capabilities sooner.

But we don’t have to wait for more young people to be caught on the wrong side of the law to start changing that. The talent is already here. It’s in the schools. It’s online. It’s writing scripts, testing limits, and trying to figure out where it fits in.

If we build the right pathways, these young people could be our greatest line of defence. Ignore them, and they may just become the next threat. We need to create a generation of ethical hackers to make the world safer.

Fergus Hay is co-founder and CEO at The Hacking Games.



OpenAI Really Wants Codex to Shut Up About Goblins



OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

However, in response to a post on X that highlighted the lines, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.

OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can select any of various personae for their helper, which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”



Elon Musk Testifies That He Started OpenAI to Prevent a ‘Terminator Outcome’



Elon Musk and Sam Altman appeared in a federal courtroom together for the first time on Tuesday as they fight over OpenAI’s decade-long evolution and what it means for the company’s future.

The trial in Musk’s lawsuit against Altman could result in financial damages and, more significantly, governance changes at OpenAI that may complicate its plans for an initial public offering as soon as this year.

As the first witness on the stand, Musk immediately sought to frame his case as more than just about OpenAI. Siding with Altman “will give license to looting every charity in America” and shake the “entire foundation of charitable giving,” Musk told a panel of nine jurors advising US District Judge Yvonne Gonzalez Rogers on how to rule.

Musk has been concerned about computers becoming smarter than people “since he was a young man in college,” his attorney Steven Molo told jurors. Molo explained that Musk lobbied governments to pass regulations addressing the prospect of so-called artificial general intelligence, including meeting with then-President Barack Obama in 2015. “But the government was not stepping up,” Molo said. “Elon felt he had to do something.”

Around the same time, Musk met with Altman, a then-30-year-old investor “whom he didn’t know very well,” Molo said. They soon launched OpenAI together as a nonprofit. Google’s unchecked progress on AI development had sparked concerns for both OpenAI cofounders, and they wanted to create a competing lab with a greater focus on safety. “My perspective is [OpenAI] exists because Larry Page called me a speciesist for being pro-humanity,” Musk said, referring to the Google cofounder. “What would be the opposite of Google? An open-source nonprofit.”

While Musk believes AI could cure diseases and generate prosperity for humanity, he also told the court that he thinks the technology could veer off into catastrophic scenarios straight out of science fiction. “It could also kill all of us … the Terminator outcome. I think we want to be in a movie … like Star Trek, not a James Cameron movie,” Musk said. (While Musk has long raised alarms about AI safety, his current firm, xAI, has been criticized by researchers at other AI labs for its “reckless” safety culture.)

As OpenAI began notching some of its own successes, Musk and Altman agreed that a for-profit arm with fixed returns for investors was necessary to raise extraordinary sums of money needed to fund hiring and computing, according to Molo. He compared it to a nonprofit museum that receives some proceeds from a for-profit store. “I was not opposed to there being a small for-profit as long as the tail didn’t wag the dog,” Musk said on the stand.

Musk felt that the approach had gone too far when Microsoft, another defendant in the trial, agreed to invest $10 billion in 2023, and OpenAI increasingly moved intellectual property and staff to the for-profit company. “The museum store sold the Picassos so they were locked up where no one could see them,” Molo said.

OpenAI’s Rebuttal

William Savitt, an attorney for OpenAI, told jurors that OpenAI never promised Musk that it would remain a nonprofit and publish all its code. “The evidence here will show what Musk says happened did not happen,” Savitt said.

He added that Musk knew about plans to raise corporate investment exceeding $10 billion as far back as 2018. Musk even raised concerns about Microsoft’s involvement in a 2020 tweet. But he didn’t file a lawsuit until he founded a competitor, xAI, in 2023.



Sniffies’ Users Worry About a ‘Straightification’ of the Gay Hookup App



Of all the gay hookup apps Brennan Zubrick uses, Sniffies, a cruising app for men interested in discreet sex-positive casual encounters with other men, is by far his favorite. Some of the most popular kinks among members on the platform include edging, cum play, and BDSM. “I overwhelmingly prefer the experience I get and the community I can access,” he tells WIRED. But Zubrick, who is 40 and based in Washington, DC, has a bad feeling that could soon change.

Tinder and Hinge parent company Match Group announced on Monday a $100 million investment in Sniffies. The deal gives Match Group a large minority share and the option to become the sole owner later on. The announcement has set off a firestorm of reactions from users who are second-guessing the direction of the company and the long-term sustainability of the app.

“Sniffies has long held its market position as the little guy, catering to a specific section of the gay community, and is somewhere people who might not be comfortable with Grindr—where no face-pic, no-chat culture runs rampant—go to connect with other like-minded people in a more direct and discreet way,” Zubrick tells WIRED.

“This partnership is about supporting that, not redefining it,” Sniffies founder and CEO Blake Gallagher said in a statement, noting that the investment will help the platform focus on three key areas users want: “stronger trust and safety, expansive network growth, and continued product improvements.” According to the agreement, Match Group will offer guidance on the right roles, procedures, and tech to help Sniffies build on its trust and safety efforts.

But users aren’t buying what Gallagher is selling. The Instagram post announcing the news was inundated with negative reactions, as users expressed worry over the strategic partnership. “Please don’t let this be the straightification of sniffies,” expressed one. “You sold out. Plain and simple. Where we moving to next boys?” added Marc Sundstrom, a user in Philadelphia. “Partnering with Match feels very gentrified and straight. Highly concerned about the app being allowed to be what it is in order to court investors,” wrote another. By Tuesday afternoon, comments on the post had been shut off.

Though it remains to be seen how Gallagher will position Sniffies in the months ahead, already users are saying this marks the beginning of the end for the app. “Straight people shouldn’t even know what Sniffies is for fuck sake,” one wrote in the r/askgaybros subreddit. And despite promises, some say a major corporation like Match is not ethically aligned with the indie spirit of Sniffies. On LinkedIn, the top comment under Gallagher’s post questioned the real intent behind Match Group’s investment. “Interested to see how ties to Palantir affect Sniffies’ growth. Hopefully this doesn’t become a surveillance application.”

Spencer Rascoff, who became CEO of Match Group in 2025, previously served on the board of Palantir, the defense tech and data mining company that has become a “technological backbone” of the Trump administration.

Sniffies maintains that it will continue to own and control how its user data is stored, handled, and protected. According to the company, there are no changes planned to its data practices as part of the investment.

But the outrage underscores the significance of platforms like Sniffies and what it would mean to a community of people who already feel like they have so few quality options for seeking desire online.

“It’s a mess and obviously to be expected. It’s definitely an indicator of its fast rise, so no shade, but we saw what happened with Grindr,” says Brad Allen, a 34-year-old event producer and the creator behind Club Quarantine, who joined Sniffies in 2023. “I really am pulling for them to somehow navigate this differently since it’s essential to the cruising community now. Hopefully the pop-up Candy Crush ads don’t light up too much in the bushes.”
