Tech
Why bug bounty schemes have not led to secure software | Computer Weekly
Governments should make software companies liable for developing insecure computer code. So says Katie Moussouris, the white hat hacker and security expert who first persuaded Microsoft and the Pentagon to offer financial rewards to security researchers who found and reported serious security vulnerabilities.
Bug bounty schemes have since proliferated and have now become the norm for software companies, with some, such as Apple, offering awards of $2m or more to those who find critical security vulnerabilities.
Moussouris likens security vulnerability research to working for Uber, only with lower pay and less job security. The catch is that people only get paid if they are the first to find and report a vulnerability. Those who put in the work but get results second or third get nothing.
“Intrinsically, it is exploitative of the labour market. You are asking them to do speculative labour, and you are getting something quite valuable out of them,” she says.
Some white hat hackers, motivated by helping people fix security problems, have managed to make a living by specialising in finding medium-risk vulnerabilities that may not pay as well as the high-risk bugs, but are easier to find.
But most security researchers struggle to make a living as bug bounty hunters.
“Very few researchers are capable of finding those elite-level vulnerabilities, and very few of the ones that are capable think it is worth their while to chase a bug bounty. They would rather have a nice contract or a full-time role,” she says.
Ethical hacking comes with legal risks
It’s not just the lack of a steady income. Security researchers also face legal risks from anti-hacking laws, such as the UK’s Computer Misuse Act and the US’s draconian Computer Fraud and Abuse Act.
When Moussouris joined Microsoft in 2007, she persuaded the company to announce that it would not prosecute bounty hunters if they found online vulnerabilities in Microsoft products and reported them responsibly. Other software companies have since followed suit.
The UK government has now recognised the problem and promised to introduce a statutory defence for cyber security researchers who spot and share vulnerabilities to protect them from prosecution.
Another issue is that many software companies insist on security researchers signing a non-disclosure agreement (NDA) before paying them for their vulnerability disclosures.
This flies in the face of best practice for security disclosures, which Moussouris has championed through the International Organization for Standardization (ISO).
When software companies pay the first person to discover a vulnerability a bounty in return for signing an NDA, that creates an incentive for those who find the same vulnerability to publicly disclose it, increasing the risk that a bad actor will exploit it for criminal purposes.
Worse, some companies use NDAs to keep vulnerabilities hidden but don’t take steps to fix them, says Moussouris, whose company, Luta Security, manages and advises on bug bounty and vulnerability disclosure programmes.
“We often see a big pile of unfixed bugs,” she says. “And some of these programmes are well funded by publicly traded companies that have plenty of cyber security employees, application security engineers and funding.”
Some companies appear to regard bug bounties as a replacement for secure coding and proper investment in software testing.
“We are using bug bounties as a stop-gap, as a way to potentially control the public disclosure of bugs, and we are not using them to identify symptoms that can diagnose our deeper lack of security controls,” she adds.
Ultimately, Moussouris says, governments will have to step in and change laws to make software companies liable for errors in their software, in much the same way car manufacturers are responsible for safety flaws in their vehicles.
“All governments have pretty much held off on holding software companies responsible and legally liable, because they wanted to encourage the growth of their industry,” she says. “But that has to change at a certain point, like automobiles were not highly regulated, and then seatbelts were required by law.”
AI could lead to less secure code
The rise of artificial intelligence (AI) could make white hat hackers redundant altogether, but perhaps not in a way that leads to better software security.
All of the major bug bounty platforms in the US are using AI to help with the triage of vulnerabilities and to augment penetration testing.
An AI-powered penetration testing platform, XBow, recently topped the bug bounty leaderboard by using AI to focus on relatively easy-to-find vulnerabilities and testing likely candidates in a systematic way to harvest security bugs.
“Once we create the tools to train AI to make it appear to be as good, or better in a lot of cases, than humans, you are pulling the rug out of the market. And then where are we going to get the next bug bounty expert?” she asks.
The current generation of experts with the skills to spot when AI systems are missing something important is in danger of disappearing.
“Bug bounty platforms are moving towards an automated, driverless version of bug bounties, where AI agents are going to take the place of human bug hunters,” she says.
Unfortunately, it’s far easier for AI to find software bugs than it is to use AI to fix them. And companies are not investing as much as they should in using AI to mitigate security risks.
“We have to figure out how to change that equation very quickly. It is easier to find and report a bug than it is for AI to write and test a patch,” she says.
Bug bounties have failed
Moussouris, a passionate and enthusiastic advocate of bug bounties, is the first to acknowledge that, in one sense, the schemes have failed.
Some things have improved. Software developers have shifted to better programming languages and frameworks that make it harder to introduce particular classes of vulnerability, such as cross-site scripting errors.
But there is, she suggests, too much security theatre. Companies still address faults because they are visible, but hold off fixing things that the public can’t see, or use non-disclosure agreements to buy silence from researchers to keep vulnerabilities from the public.
Moussouris believes that AI will ultimately take over from human bug researchers, but says the loss of expertise will damage security.
The world is on the verge of another industrial revolution, she argues, one that will be bigger and faster than the last. In the 19th century, people left agriculture to work long hours in factories, often in dangerous conditions for poor wages.
As AI takes over more tasks currently carried out by people, unemployment will rise, incomes will fall and economies risk stagnation, Moussouris predicts.
The only answer, she believes, is for governments to tax AI companies and use the proceeds to provide the population with a universal basic income (UBI). “I think it has to, or literally there will be no way for capitalism to survive,” she says. “The good news is that human engineering ingenuity is still intact for now. I still believe in our ability to hack our way out of this problem.”
Growing tensions between governments and bug bounty hunters
The work of bug bounty hunters has also been affected by moves to require technology companies to report vulnerabilities to governments before they fix them.
It began with China, which in 2021 required tech companies to disclose new vulnerabilities to the government within 48 hours of discovery.
“It was very clear that they were going to evaluate whether or not they were going to use vulnerabilities for offensive purposes,” says Moussouris.
The European Union (EU) has since introduced the Cyber Resilience Act (CRA), which imposes similar disclosure obligations, ostensibly to allow European governments to prepare their cyber defences.
Moussouris is a co-author of the ISO standard on vulnerability disclosure. One of its principles is to limit the knowledge of security bugs to the smallest number of people before they are fixed.
The EU argues that its approach will be safe because it is not asking for a deep technical explanation of the vulnerabilities, nor is it asking for proof-of-concept code to show how vulnerabilities can be exploited.
But that misses the point, says Moussouris. Widening the pool of people with access to information about vulnerabilities will make leaks more likely and raises the risk that criminal hackers or hostile nation-states will exploit them for crime or espionage.
Risk from hostile nations
Moussouris does not doubt that hostile nations will exploit the weakest links in government bug notification schemes to learn new security exploits. If they are already using those vulnerabilities for offensive hacking, they will be able to cover their tracks.
“I anticipate there will be an upheaval in the threat intelligence landscape because our adversaries absolutely know this law is going to take effect. They are certainly positioning themselves to learn about these things through the leakiest party that gets notified,” she says.
“And they will either start targeting that particular software, if they weren’t already, or start pulling back their operations or hiding their tracks if they were the ones using it. It’s counterproductive,” she adds.
Moussouris is concerned that the US will likely follow the EU by introducing its own bug reporting scheme. “I am just holding my breath, anticipating that the US is going to follow, but I have been warning them against it.”
The UK’s equities programme
In the UK, GCHQ regulates government use of security vulnerabilities for spying through a process known as the equities scheme.
That involves security experts weighing the risk to the UK’s own critical systems of not notifying software suppliers of a vulnerability against the exploit’s potential value for gathering intelligence.
The process has a veneer of rationality, but it falls down because, in practice, government experts can have no idea how widespread vulnerabilities are in the critical national infrastructure. Even large suppliers like Microsoft have trouble tracking where their own products are used.
“When I was working at Microsoft, it was very clear that while Microsoft had a lot of visibility into what was deployed in the world, there were tonnes of things out there that they wouldn’t know about until they were exploited,” she says.
“The fact that Microsoft, with all its telemetry ability to know where its customers are, struggled means there is absolutely no way to gauge in a reliable way how vulnerable we are,” she adds.
Katie Moussouris spoke to Computer Weekly at the SANS CyberThreat Summit.
My Favorite Air Fryer Is at Its Lowest Price Since Black Friday
I was a late convert to air fryers, in part because I worried about versatility: Just how many wings and nuggets and fries does anyone need? (Don’t answer. The answer will incriminate you.)
The Typhur Dome 2 is the air fryer that obliterated this worry, by adding pizza, browned meats, grilled asparagus, and toasted bread to this list—not to mention perfect crispy bacon. It’s an innovative device that takes over most of the functions of a classic auxiliary oven, but with far more powerful convection.
After testing more than 30 air fryers over the past year, the Dome 2 is far and away my top recommendation: the most powerful, versatile, accurate, and fast air fryer I know. I’ve evangelized for this thing ever since I first tried it last year. But the one big caveat is always the price: It’s listed at $500 and rarely dips much below $400.
So imagine my surprise when I saw the Dome 2 dip to $340 for Amazon’s Spring Sale, the lowest I’ve seen it since Black Friday. If you’ve been hunting for an upgrade to your old basket air fryer, this is probably a good time. The sale lasts until March 31.
Fast, Versatile, App-Controlled Cooks
So why’s the Dome 2 my favorite air fryer? Typhur, a tech-forward company based in San Francisco but with engineering and manufacturing ties to China, reimagined the shape and function of the classic basket fryer by creating a broader and shallower basket, with individually controllable dual heating elements.
This means the Dome 2 has room for a freezer pizza, and can apply direct heat from the bottom to add actual char-speckle and crispness to the crust, kind of like a combination grill-oven. The Dome’s shallow basket also lets you spread out ingredients in a single layer for excellent airflow, while heating from both sides. I can crisp two dozen wings in just 14 minutes (or 17 minutes if I fry hard). The Dome also toasts bread evenly, and crisps bacon without smelling up the house—in part because it has a helpful self-clean function.
Temp accuracy is within 5 or 10 degrees of target, and the fan can adjust its speed depending on the cooking mode. And the smart app is actually useful, with about 50 recipes ranging from asparagus to eclair to a flank steak London broil that can be synced with a button-press. But note that some functions, such as baking, need the app to work, and the device is more of a counter hog than taller basket fryers.
Typhur’s Probe-Assisted Oven Also on Sale
The Dome 2’s basket is a bit shallow for a whole bird or a large roast, however. If you want a convection device for larger meats, I often recommend the Breville Smart Oven Air Fryer Pro, which is among my favorite convection toaster ovens. This is a (very) smart oven and air fryer that doesn’t crisp up wings and fries quite as well as basket fryers, but is more versatile for roasting big proteins like a whole chicken. The Breville is also on a nice sale right now, dropping by 20 percent.
There’s Something Very Dark About a Lot of Those Viral AI Fruit Videos
“I’ve spent a lot of time looking at the comment sections on these videos actually, and it does not seem like bots. I clicked on people’s profiles; these are real profiles, thousands of followers, no signs of inorganic activity,” Maddox says. “People just like it.”
But even if the views and engagement are real, that doesn’t mean this content is profitable—yet. Maddox noted that because the accounts are so new, most likely aren’t yet enrolled in TikTok’s Creator Fund or other forms of social media ad revenue-sharing, because those usually require accounts to apply and have a certain number of views. But, Maddox says, the earning potential is huge, with the ability to earn thousands of dollars per video if they get millions of views.
AI fruit content started getting posted earlier in March, before Fruit Love Island, but many of the recently created pages clearly take inspiration from its success. There’s The Summer I Turned Fruity, based on the popular teen drama The Summer I Turned Pretty; The Fruitpire Diaries, based on the CW series The Vampire Diaries; and Food Is Blind, based on Netflix’s Love Is Blind.
Predecessors of this AI fruit content include the Italian brainrot characters like Ballerina Cappuccina and Bombardino Crocodilo and the Elsagate controversy. But with these AI fruit miniseries that attempt to follow a narrative across multiple segments or episodes, the clearest parallel actually feels like microdramas, vertical short-form scripted series that American big tech companies are starting to invest more in. Like the AI fruits, these are minutes-long episodic shows intended to perform well on social media, eventually directing viewers to paywalled sequels.
Ben L. Cohen, an actor in Los Angeles who is credited in around 15 of these vertical microdramas, sees at least one common thread between the AI fruit dramas and the shows he has worked on: They both feature “lots of violence toward women.” They also try to cram as much drama as possible into these short clips and have attention-grabbing titles in the style of “Alpha Werewolf Daddy Impregnated Me,” Cohen says.
“It draws people in, I think, seeing that jarring, absurd, cartoonish vibe. It’s cartoonish abuse, but it’s still abuse.”
Vertical microdrama acting work still exists in LA, which can’t be said for all acting gigs right now. Cohen has had conversations with other people working in the industry about how AI is already being integrated more into the videos, potentially posing a threat to the existence of human actors in clickbait content. After all, it’s much cheaper and faster to churn out AI fruit episodes than actual productions. It also raises the question—are some people going to prefer the AI series over the ones they’re inspired by? Already, the answer is yes.
“How is Love Island gonna outdo AI Fruit Love Island?” asked a TikToker with more than 70,000 followers, arguing that the AI fruit version was more engaging than the actual reality show. She deleted the video after it started getting backlash, but other people agreed with her.
“I think TikTok was definitely a big part of that,” Cohen says about the audience’s shortening attention span and desire for compressed, sometimes AI-generated drama. “It makes sense that people are intrigued by a one-minute clip, and then they’ll be like ‘Oh, I’ll watch another one-minute clip.’ You’re not committing to a full, heaven forbid, 20-minute episode. Or 40 minutes. Or an hour. You can just watch one minute.”
OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage
Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.
The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.
The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.
“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.
The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.
Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.
Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.
The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.
David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.
The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”
Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.