Why bug bounty schemes have not led to secure software | Computer Weekly
Governments should make software companies liable for developing insecure computer code. So says Katie Moussouris, the white hat hacker and security expert who first persuaded Microsoft and the Pentagon to offer financial rewards to security researchers who found and reported serious security vulnerabilities.
Bug bounty schemes have since proliferated and have now become the norm for software companies, with some, such as Apple, offering awards of $2m or more to those who find critical security vulnerabilities.
Moussouris likens security vulnerability research to working for Uber, only with lower pay and less job security. The catch is that people only get paid if they are the first to find and report a vulnerability. Those who put in the work but get results second or third get nothing.
“Intrinsically, it is exploitative of the labour market. You are asking them to do speculative labour, and you are getting something quite valuable out of them,” she says.
Some white hat hackers, motivated by helping people fix security problems, have managed to make a living by specialising in finding medium-risk vulnerabilities that may not pay as well as the high-risk bugs, but are easier to find.
But most security researchers struggle to make a living as bug bounty hunters.
“Very few researchers are capable of finding those elite-level vulnerabilities, and very few of the ones that are capable think it is worth their while to chase a bug bounty. They would rather have a nice contract or a full-time role,” she says.
Ethical hacking comes with legal risks
It’s not just the lack of a steady income. Security researchers also face legal risks from anti-hacking laws, such as the UK’s Computer Misuse Act and the US’s draconian Computer Fraud and Abuse Act.
When Moussouris joined Microsoft in 2007, she persuaded the company to announce that it would not prosecute bounty hunters if they found online vulnerabilities in Microsoft products and reported them responsibly. Other software companies have since followed suit.
The UK government has now recognised the problem and promised to introduce a statutory defence protecting cyber security researchers who spot and share vulnerabilities from prosecution.
Another issue is that many software companies insist on security researchers signing a non-disclosure agreement (NDA) before paying them for their vulnerability disclosures.
This flies in the face of best practice for security disclosures, which Moussouris has championed through the International Organization for Standardization (ISO).
When software companies pay a bounty only to the first person to report a vulnerability, and require an NDA in return, later finders of the same flaw have an incentive to disclose it publicly, increasing the risk that a bad actor will exploit it for criminal purposes.
Worse, some companies use NDAs to keep vulnerabilities hidden but don’t take steps to fix them, says Moussouris, whose company, Luta Security, manages and advises on bug bounty and vulnerability disclosure programmes.
“We often see a big pile of unfixed bugs,” she says. “And some of these programmes are well funded by publicly traded companies that have plenty of cyber security employees, application security engineers and funding.”
Some companies appear to regard bug bounties as a replacement for secure coding and proper investment in software testing.
“We are using bug bounties as a stop-gap, as a way to potentially control the public disclosure of bugs, and we are not using them to identify symptoms that can diagnose our deeper lack of security controls,” she adds.
Ultimately, Moussouris says, governments will have to step in and change laws to make software companies liable for errors in their software, in much the same way car manufacturers are responsible for safety flaws in their vehicles.
“All governments have pretty much held off on holding software companies responsible and legally liable, because they wanted to encourage the growth of their industry,” she says. “But that has to change at a certain point, like automobiles were not highly regulated, and then seatbelts were required by law.”
AI could lead to less secure code
The rise of artificial intelligence (AI) could make white hat hackers redundant altogether, but perhaps not in a way that leads to better software security.
All of the major bug bounty platforms in the US are using AI to help with the triage of vulnerabilities and to augment penetration testing.
An AI-powered penetration testing platform, XBow, recently topped a major bug bounty platform’s leaderboard by using AI to focus on relatively easy-to-find vulnerabilities, testing likely candidates systematically to harvest security bugs.
“Once we create the tools to train AI to make it appear to be as good, or better in a lot of cases, than humans, you are pulling the rug out of the market. And then where are we going to get the next bug bounty expert?” she asks.
The current generation of experts with the skills to spot when AI systems are missing something important is in danger of disappearing.
“Bug bounty platforms are moving towards an automated, driverless version of bug bounties, where AI agents are going to take the place of human bug hunters,” she says.
Unfortunately, it’s far easier for AI to find software bugs than it is to use AI to fix them. And companies are not investing as much as they should in using AI to mitigate security risks.
“We have to figure out how to change that equation very quickly. It is easier to find and report a bug than it is for AI to write and test a patch,” she says.
Bug bounties have failed
Moussouris, a passionate and enthusiastic advocate of bug bounty schemes, is the first to acknowledge that, in one sense, they have failed.
Some things have improved. Software developers have shifted to better programming languages and frameworks that make it harder to introduce particular classes of vulnerability, such as cross-site scripting errors.
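As an illustration of that shift (a minimal sketch, not drawn from the article; the render_* helper names are hypothetical), modern templating frameworks escape untrusted input by default, so the classic cross-site scripting pattern of interpolating raw user input into HTML no longer works by accident:

```python
# Minimal sketch of why default escaping blunts cross-site scripting (XSS).
# Illustrative only; the render_* helpers are hypothetical, and html.escape
# is Python's standard-library HTML encoder.
import html

def render_comment_legacy(comment: str) -> str:
    # Vulnerable pattern: raw interpolation of untrusted input into markup.
    return f"<p>{comment}</p>"

def render_comment_escaped(comment: str) -> str:
    # Framework-style pattern: untrusted input is encoded by default,
    # so injected markup renders as inert text.
    return f"<p>{html.escape(comment)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment_legacy(payload))   # script tag survives: executable
print(render_comment_escaped(payload))  # &lt;script&gt;...: harmless text
```

Frameworks that escape by default make the safe path the easy one, which is why whole bug classes have receded even as others persist.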
But there is, she suggests, too much security theatre. Companies still address faults because they are visible, but hold off fixing things that the public can’t see, or use non-disclosure agreements to buy silence from researchers to keep vulnerabilities from the public.
Moussouris believes that AI will ultimately take over from human bug researchers, but says the loss of expertise will damage security.
The world, she believes, is on the verge of another industrial revolution, but one bigger and faster than the last. In the 19th century, people left agriculture to work long hours in factories, often in dangerous conditions for poor wages.
As AI takes over more tasks currently carried out by people, unemployment will rise, incomes will fall and economies risk stagnation, Moussouris predicts.
The only answer, she believes, is for governments to tax AI companies and use the proceeds to provide the population with a universal basic income (UBI). “I think it has to, or literally there will be no way for capitalism to survive,” she says. “The good news is that human engineering ingenuity is still intact for now. I still believe in our ability to hack our way out of this problem.”
Growing tensions between governments and bug bounty hunters
The work of bug bounty hunters has also been impacted by moves to require software technology companies to report vulnerabilities to governments before they fix them.
It began with China, which in 2021 required tech companies to disclose newly discovered vulnerabilities to the authorities within 48 hours.
“It was very clear that they were going to evaluate whether or not they were going to use vulnerabilities for offensive purposes,” says Moussouris.
In 2024, the European Union (EU) adopted the Cyber Resilience Act (CRA), which introduced similar disclosure obligations, ostensibly to allow European governments to prepare their cyber defences.
Moussouris is a co-author of the ISO standard on vulnerability disclosure. One of its principles is to limit the knowledge of security bugs to the smallest number of people before they are fixed.
The EU argues that its approach will be safe because it is not asking for a deep technical explanation of the vulnerabilities, nor is it asking for proof-of-concept code to show how vulnerabilities can be exploited.
But that misses the point, says Moussouris. Widening the pool of people with access to information about vulnerabilities will make leaks more likely and raises the risk that criminal hackers or hostile nation-states will exploit them for crime or espionage.
Risk from hostile nations
Moussouris does not doubt that hostile nations will exploit the weakest links in government bug notification schemes to learn new security exploits. If they are already using those vulnerabilities for offensive hacking, they will be able to cover their tracks.
“I anticipate there will be an upheaval in the threat intelligence landscape because our adversaries absolutely know this law is going to take effect. They are certainly positioning themselves to learn about these things through the leakiest party that gets notified,” she says.
“And they will either start targeting that particular software, if they weren’t already, or start pulling back their operations or hiding their tracks if they were the ones using it. It’s counterproductive,” she adds.
Moussouris is concerned that the US will likely follow the EU by introducing its own bug reporting scheme. “I am just holding my breath, anticipating that the US is going to follow, but I have been warning them against it.”
The UK’s equities process
In the UK, GCHQ regulates government use of security vulnerabilities for spying through what is known as the equities process.
That involves security experts weighing the risk to the UK’s own critical systems of not notifying software suppliers of an exploitable flaw against the exploit’s potential value for gathering intelligence.
The process has a veneer of rationality, but it falls down because, in practice, government experts can have no idea how widespread vulnerabilities are in the critical national infrastructure. Even large suppliers like Microsoft have trouble tracking where their own products are used.
“When I was working at Microsoft, it was very clear that while Microsoft had a lot of visibility into what was deployed in the world, there were tonnes of things out there that they wouldn’t know about until they were exploited,” she says.
“The fact that Microsoft, with all its telemetry ability to know where its customers are, struggled means there is absolutely no way to gauge in a reliable way how vulnerable we are,” she adds.
Katie Moussouris spoke to Computer Weekly at the SANS CyberThreat Summit.
Alcatel-Lucent, Nokia team to deliver end-to-end enterprise network services | Computer Weekly
Looking to help modernise and future-proof campus networks across a range of use cases and industries, Alcatel-Lucent Enterprise (ALE) and Nokia have strengthened their strategic partnership. The pair will deliver an end-to-end portfolio of network services designed to support the digital transformation of critical industries such as transportation, smart cities, energy and utilities, healthcare and hospitality.
The joint networking services have been deployed by Ikos Resorts in Greece, Pantai Jerudong Hospital in Brunei and Wembley Park in the UK, in projects designed to establish campus-wide, fibre-based LANs capable of delivering multi-gigabit data speeds to customers.
Nokia and ALE say the wins mark a significant milestone in the five-year partnership, and add to a long list of successful deployments at some of the world’s most demanding projects, such as Grand Paris Express, Montreal Railways and Okada Manila Resort.
By integrating their respective networking portfolios, the two companies say they are “uniquely positioned” to meet the evolving demands of complex environments such as hospitality, where resorts like Ikos are using the combined offering to connect hundreds of bedrooms across their luxury all-inclusive sites. With Nokia and ALE, Ikos was able to run its guest services, CCTV, voice, Wi-Fi and building safety sensors through a single, high-availability network architecture.
The fibre infrastructure also helped to save space and reduce the number of network layers. Boasting a legacy in delivering optical fibre services and being a trusted integrator in enterprise communications, Nokia and ALE have deployed their joint offering into more than 100 enterprises globally.
At the heart of the infrastructures is Nokia’s Optical LAN, which is designed to provide enterprises and campuses with a high-capacity fibre-based network capable of supporting the growing bandwidth needs for all in-campus devices and applications.
The optical LAN offers 10-gigabit network speeds; “significantly” reduced power consumption, making operations more sustainable and cost-effective; and a light infrastructure in which the network can be simplified with minimal hardware requirements, reducing complexity and enhancing reliability. It is also credited with a lower total cost of ownership through efficient design and reduced maintenance, maximising return on investment.
Integrated into ALE’s network offering for enterprise in-building and campus connectivity, the technology is said to deliver significant advantages, including lower energy consumption and total cost of ownership. ALE’s LAN and Wi-Fi products also provide an automated service claimed to onboard devices efficiently while securing the network through asset discovery and classification, virtual segmentation and continuous monitoring. Features include Layer 2 services, HPoE and optional redundant uplinks.
“By combining ALE’s agile enterprise networking solutions with Nokia’s carrier-grade infrastructure, we offer a comprehensive portfolio that addresses the unique needs of critical industries,” said Sandrine El Khodry, Alcatel-Lucent Enterprise’s executive vice-president of global sales and marketing. “Our partnership is built on trust, innovation and a shared commitment to customer success.”
Matthieu Bourguignon, senior vice-president and head of Europe at Nokia, added: “Our collaboration with Alcatel-Lucent Enterprise allows us to deliver end-to-end, mission-critical solutions that go beyond traditional boundaries. We are proud of the joint successes we’ve achieved and look forward to enabling even more transformative projects together.”
The Military Almost Got the Right to Repair. Lawmakers Just Took It Away
US lawmakers have removed provisions in the National Defense Authorization Act for 2026 that would have ensured military members’ right to repair their own equipment.
The final language of the NDAA was shared by the House Armed Services Committee on Sunday, after weeks of delays pushed the annual funding bill to the end of the year. Among a host of other language changes made as part of reconciling different versions of the legislation drafted by the Senate and the House of Representatives, two provisions focused on the right to repair—Section 836 of the Senate bill and Section 863 of the House bill—have both been removed. Also gone is Section 1832 of the House version of the bill, which repair advocates worried could have implemented a “data-as-a-service” relationship with defense contractors that would have forced the military to pay for subscription repair services.
As reported by WIRED in late November, defense contractor lobbying appears to have convinced the lawmakers who led the conference process, including Mike Rogers, the Alabama Republican who chairs the House Armed Services Committee, and ranking member Adam Smith of Washington, to pull the repair provisions from the act, even though they enjoyed bipartisan support and were championed by the Trump administration.
The move is a blow to the broader right-to-repair movement, which advocates for policies that make it easier for device users, owners, or third parties to work on and repair devices without needing to get—or pay for—manufacturer approval. But while ensuring repair rights for service members did not make the final cut, neither did the competing effort to make the military dependent on repair-as-a-service subscription plans.
“For decades, the Pentagon has relied on a broken acquisition system that is routinely defended by career bureaucrats and corporate interests,” wrote senators Elizabeth Warren, the Massachusetts Democrat, and Tim Sheehy, a Republican of Montana, in a joint statement shared with WIRED. Both support right-to-repair efforts and were behind the language in the Senate version of the NDAA. “Military right to repair reforms are supported by the Trump White House, the Secretary of War, the Secretary of the Army, the Secretary of the Navy, entrepreneurs, small businesses, and our brave service members. The only ones against this common-sense reform are those taking advantage of a broken status quo at the expense of our warfighters and taxpayers,” they say.
Ethical hackers can be heroes: It’s time for the law to catch up | Computer Weekly
The last year has seen some of the costliest cyber attacks on UK businesses to date. The attack on Marks & Spencer cost the retailer hundreds of millions in lost profits and led to empty shelves. The Jaguar Land Rover attack sent shockwaves through its supply chain and ultimately dragged down UK GDP in the third quarter.
While the perpetrators of cyber crime often operate across international borders and beyond the reach of law enforcement, the M&S attack has resulted in several arrests in the UK under the Computer Misuse Act (CMA) of 1990. With a new Cyber Security and Resilience Act on the way, it might seem that UK authorities will soon have greater powers to force organisations to build better defences.
But while the UK government continues to pursue cyber criminals, it also needs to be much clearer about the crucial role of cyber security researchers and ethical hackers in defending against them.
Last week, UK security minister Dan Jarvis told a conference that the government was looking at changes to the CMA to introduce a “statutory defence” for cyber security experts who spot and share vulnerabilities.
It would mean that, as long as they meet “certain safeguards”, researchers would be protected from prosecution.
To understand why this is so significant it’s worth recalling the background to the CMA. In the mid-1980s, IT journalist Steve Gold and fellow hacker Robert Schifreen were accused of accessing the Duke of Edinburgh’s BT Prestel email account.
They were prosecuted and convicted under the Forgery and Counterfeiting Act, but this was overturned on appeal, because that act didn’t specifically cover computer crimes.
This led to the CMA, which set prison sentences for gaining unauthorised access to computer material.
The date is significant. At that time, most computer systems were tightly controlled and effectively inaccessible to the majority of the population.
Very few people had a (BT-approved) modem. The web had been invented just a year before. The dotcom boom was years in the future, the term cyber war had yet to be coined, and the prospect of industrial-scale cyber crime was barely considered.
The legislators who crafted the CMA can be forgiven for not anticipating the transformation of today’s digital environment, from mobile to cloud to AI. So, it’s perhaps understandable that the act didn’t anticipate the emergence of cyber security researchers, who would look for vulnerabilities and misconfigurations and share that information with the organisations concerned.
Less understandable is why this hasn’t been addressed since. As cyber crime transformed from a small niche into a worldwide epidemic over the last two decades, white hat hackers have been key to exposing and mitigating the methods and technologies cyber criminals have exploited. This has necessarily meant thinking and acting like a hacker.
Yet the CMA, and similar legislation in other countries, have proven to be a blunt instrument when it comes to deterring cyber crime.
It’s fair to point out that the number of prosecutions under the CMA and similar laws has been fairly low. But that is more down to the asymmetric nature of cyber crime: most threats come from individuals beyond the reach of the UK and its allies, who are unlikely to be deterred by the CMA.
This imbalance has only become more stark as vulnerabilities and flaws have been exploited indiscriminately and at internet scale not just by criminals but by nation states willing to compromise critical national infrastructure, foreign businesses and consumers for strategic gains.
It has left researchers, and their potential clients, in a legal grey area. It has, on occasion, led to prosecutions of legitimate good guys.
Meanwhile, that ongoing threat of prosecution has an effect on another group of individuals – the next generation we need to encourage to join the industry. We are already suffering a chronic skills crisis, and the prospect of a criminal record hardly represents a golden hello.
None of this is new. The Criminal Law Reform Network highlighted in 2020 how “the CMA 1990 requires significant reform to make it fit for the 21st century”, and recommended the addition of required harms. The Home Office began a review of the act in 2021, which concluded in 2023 and did consider the question of a defence for researchers.
When the Cyber Security and Resilience Act becomes law in the UK, many more organisations will be obliged to report breaches and will be under more pressure to manage their security posture, including vulnerabilities.
They’re not going to be able to do that without the help of ethical hackers and cyber security researchers, who should be able to operate without fear of prosecution. It’s certainly doable. Portugal has just announced built-in defences for researchers in its implementation of NIS2.
Jarvis’ statement is welcome. But now we need action. We can’t wait another five years for the government to act to give cyber researchers and ethical hackers the cover they need. And we definitely can’t wait another 35.
Ed Parsons is chief operating officer at Intigriti, a provider of bug bounty, vulnerability disclosure and penetration testing services, and a former vice-president at the cyber security professional membership association ISC2. A career risk and cyber expert, Parsons is a Certified Information Systems Security Professional (CISSP) and a UK Chartered Cyber Security Professional.