Could a ‘gray swan’ event bring down the AI revolution? Here are 3 risks we should be preparing for
The term “black swan” refers to a shocking event that is on nobody’s radar until it actually happens. It has been a byword in risk analysis since Nassim Nicholas Taleb’s book “The Black Swan” was published in 2007. A frequently cited example is the 9/11 attacks.
Fewer people have heard of “gray swans.” Derived from Taleb’s work, gray swans are rare but more foreseeable events: things we know could have a massive impact, but that we don’t (or won’t) adequately prepare for.
COVID was a good example: precedents for a global pandemic existed, but the world was caught off guard anyway.
Although he sometimes uses the term, Taleb doesn’t appear to be a big fan of gray swans. He’s previously expressed frustration that his concepts are often misused, which can lead to sloppy thinking about the deeper issues of truly unforeseeable risks.
But it’s hard to deny there is a spectrum of predictability, and it’s easier to see some major shocks coming. Perhaps nowhere is this more obvious than in the world of artificial intelligence (AI).
Putting our eggs in one basket
Increasingly, the future of the global economy and human thriving has become tied to a single technological story: the AI revolution. It has turned philosophical questions about risk into a multitrillion-dollar dilemma about how we align ourselves with possible futures.
US tech company Nvidia, which dominates the market for AI chips, recently surpassed US$5 trillion (about A$7.7 trillion) in market value. The “Magnificent Seven” US tech stocks—Amazon, Alphabet (Google), Apple, Meta, Microsoft, Nvidia and Tesla—now make up about 40% of the S&P 500 stock index.
The impact of a collapse of these companies—and a stock market bust—would be devastating at a global level, not just financially but also in terms of dashed hopes for progress.
AI’s gray swans
There are three broad categories of risk—beyond the economic realm—that could bring the AI euphoria to an abrupt halt. They’re gray swans because we can see them coming but arguably don’t (or won’t) prepare for them.
1. Security and terror shocks
AI’s ability to generate code, malicious plans and convincing fake media makes it a force multiplier for bad actors. Cheap, open models could help design drone swarms, toxins or cyber attacks. Deepfakes could spoof military commands or spread panic through fake broadcasts.
Arguably, the closest of these risks to a “white swan”—a foreseeable risk with relatively predictable consequences—stems from China’s aggression toward Taiwan.
The world’s biggest AI firms depend heavily on Taiwan’s semiconductor industry for the manufacture of advanced chips. Any conflict or blockade would freeze global progress overnight.
2. Legal shocks
Some AI firms have already been sued for allegedly using text and images scraped from the internet to train their models.
One of the best-known examples is the ongoing case of The New York Times versus OpenAI, but there are many similar disputes around the world.
If a major court were to rule that such use counts as commercial exploitation, it could unleash enormous damages claims from publishers, artists and brands.
A few landmark legal rulings could force major AI companies to press pause on developing their models further—effectively halting the AI build-out.
3. One breakthrough too many: innovation shocks
Innovation is usually celebrated, but for companies investing in AI, it could be fatal. New AI technology that autonomously manipulates markets (or even news that one is already doing so) would make current financial security systems obsolete.
And an advanced, open-source, free AI model could easily vaporize the profits of today’s industry leaders. We got a glimpse of this possibility in January’s DeepSeek dip, when details about a cheaper, more efficient AI model developed in China caused US tech stocks to plummet.
Why we struggle to prepare for gray swans
Risk analysts, particularly in finance, often talk in terms of historical data. Statistics can give a reassuring illusion of consistency and control. But the future doesn’t always behave like the past.
The wise among us apply reason to carefully confirmed facts and are skeptical of market narratives.
Deeper causes are psychological: our minds encode things efficiently, often relying on one symbol to represent very complex phenomena.
It takes us a long time to remodel our representations of the world into believing a looming big risk is worth taking action over—as we’ve seen with the world’s slow response to climate change.
How can we deal with gray swans?
Staying aware of risks is important. But what matters most isn’t prediction. We need to design for a deeper sort of resilience that Taleb calls “antifragility.”
Taleb argues systems should be built to withstand—or even benefit from—shocks, rather than rely on perfect foresight.
For policymakers, this means ensuring regulation, supply chains and institutions are built to survive a range of major shocks. For individuals, it means diversifying our bets, keeping options open and resisting the illusion that history can tell us everything.
Above all, the biggest problem with the AI boom is its speed. It is reshaping the global risk landscape faster than we can chart its gray swans. Some may collide and cause spectacular destruction before we can react.
This article is republished from The Conversation under a Creative Commons license.
Why people don’t demand data privacy, even as governments and corporations collect more personal information
When the Trump administration gave Immigration and Customs Enforcement access to a massive database of information about Medicaid recipients in June 2025, privacy and medical justice advocates sounded the alarm. They warned that the move could trigger all kinds of public health and human rights harms.
But most people likely shrugged and moved on with their day. Why is that? It’s not that people don’t care. According to a 2023 Pew Research Center survey, 81% of American adults said they were concerned about how companies use their data, and 71% said they were concerned about how the government uses their data.
At the same time, though, 61% expressed skepticism that anything they do makes much difference. This is because people have come to expect that their data will be captured, shared and misused by state and corporate entities alike. For example, many people are now accustomed to instinctively hitting “accept” on terms of service agreements, privacy policies and cookie banners regardless of what the policies actually say.
Meanwhile, data breaches have become a regular occurrence, and private digital conversations exposing everything from infidelity to military attacks have become the stuff of public scrutiny. The cumulative effect is that people are loath to change their behaviors to better protect their data: not because they don’t care, but because they’ve been conditioned to think that they can’t make a difference.
As scholars of data, technology and culture, we find that when people are made to feel as if data collection and abuse are inevitable, they are more likely to accept it—even if it jeopardizes their safety or basic rights.
Where regulation falls short
Policy reforms could help to change this perception, but they haven’t yet. In contrast to a growing number of countries that have comprehensive data protection or privacy laws, the United States offers only a patchwork of policies covering the issue.
At the federal level, the most comprehensive data privacy laws are decades old. The Privacy Act of 1974, passed in the wake of the Watergate and Counterintelligence Program (COINTELPRO) wiretapping scandals, limited how federal agencies collected and shared data. At the time, government surveillance was unexpected and unpopular.
But it also left open a number of exceptions—including for law enforcement—and did not apply to private companies. These gaps mean that data collected by private companies can end up in the hands of the government, and no regulation effectively closes this loophole.
The Electronic Communications Privacy Act of 1986 extended protections against telephone wiretapping to electronic communications such as email. But the law did not account for the possibility that most digital data would one day be stored on cloud servers.
Since 2018, 19 U.S. states have passed data privacy laws that limit companies’ data collection activities and enshrine new privacy rights for individuals. However, many of these laws also include exceptions for law enforcement access.
These laws predominantly take a consent-based approach—think of the pesky banner beckoning you to “accept all cookies”—that encourages you to give up your personal information even when it’s not necessary. These laws put the onus on individuals to protect their privacy, rather than simply barring companies from collecting certain kinds of information from their customers.
The privacy paradox
For years, studies have shown that people claim to care about privacy but do not take steps to actively protect it. Researchers call this the privacy paradox. It shows up when people use products that track them in invasive ways, or when they consent to data collection, even when they could opt out. The privacy paradox often elicits appeals to transparency: If only people knew that they had a choice, or how the data would be used, or how the technology works, they would opt out.
But this logic downplays the fact that options for limiting data collection are often intentionally designed to be convoluted, confusing and inconvenient, and they can leave users feeling discouraged about making these choices, as communication scholars Nora Draper and Joseph Turow have shown. This suggests that the discrepancy between users’ opinions on data privacy and their actions is hardly a contradiction at all. When people are conditioned to feel helpless, nudging them into different decisions isn’t likely to be as effective as tackling what makes them feel helpless in the first place.
Resisting data disaffection
The experience of feeling helpless in the face of data collection is a condition we call data disaffection. Disaffection is not the same as apathy. It is not a lack of feeling but rather an unfeeling—an intentional numbness. People manifest this numbness to sustain themselves in the face of seemingly inevitable datafication, the process of turning human behavior into data by monitoring and measuring it.
It is similar to how people choose to avoid the news, disengage from politics or ignore the effects of climate change. They turn away because data collection makes them feel overwhelmed and anxious—not because they don’t care.
Taking data disaffection into consideration, digital privacy is a cultural issue—not an individual responsibility—and one that cannot be addressed through personal choice and consent alone. To be clear, comprehensive data privacy law and changing behavior are both important. But storytelling can also play a powerful role in shaping how people think and feel about the world around them.
We believe that a change in popular narratives about privacy could go a long way toward changing people’s behavior around their data. Talk of “the end of privacy” helps create the world the phrase describes. Philosopher of language J.L. Austin called these sorts of expressions performative utterances. Such language frames data collection, surveillance and abuse as inevitable, leaving people feeling they have no choice.
Cultural institutions have a role to play here, too. Narratives reinforcing the idea that data collection is inevitable come not only from tech companies’ PR machines but also from mass media and entertainment, including journalism. The regular cadence of stories about the federal government accessing personal data, with no mention of recourse or justice, contributes to the sense of helplessness.
Alternatively, it’s possible to tell stories that highlight the alarming growth of digital surveillance and frame data governance practices as controversial and political rather than innocuous and technocratic. The way stories are told affects people’s capacity to act on the information that the stories convey. It shapes people’s expectations and demands of the world around them.
The ICE-Medicaid data-sharing agreement is hardly the last threat to data privacy. But the way people talk and feel about it can make it easier—or more difficult—to ignore data abuses the next time around.
This article is republished from The Conversation under a Creative Commons license.
Zohran Mamdani Just Inherited the NYPD Surveillance State
Mamdani’s campaign did not respond to a request for comment.
The NYPD’s turn toward mass surveillance began in earnest under Commissioner Raymond Kelly in the immediate aftermath of September 11, buoyed by hundreds of millions of dollars in federal anti-terrorism grants. However, Ferguson says Kelly’s rival, former commissioner William Bratton, was a key architect of the NYPD’s reliance on “big data,” implementing the CompStat data analysis system to map and electronically collate crime data in the mid-1990s and again during his return to New York City in 2014 under Mayor Bill de Blasio. Bratton was also a mentor to Jessica Tisch and has spoken admiringly of her since leaving the NYPD.
Tisch was a main architect of the NYPD’s Domain Awareness System, an enormous, $3 billion, Microsoft-based surveillance network of tens of thousands of private and public surveillance cameras, license plate readers, gunshot detectors, social media feeds, biometric data, cryptocurrency analysis, location data, body-worn and dashcam livestreams, and other technology that blankets the five boroughs’ 468-square-mile territory. Patterned after London’s 1990s CCTV surveillance network, the “ring of steel” was initially developed under Kelly as an anti-terrorism surveillance system for Lower and Midtown Manhattan before being rebranded as the DAS and marketed to other police departments as a potential for-profit tool. Several dozen of the 17,000 cameras in New York City public housing developments were also linked in through backdoor methods by the Eric Adams administration last summer, with thousands more in the pipeline, according to NY Focus.
Though the DAS has been operational for more than a decade and survived prior challenges over data retention and privacy violations from civil society organizations like the New York Civil Liberties Union, it remains controversial. In late October, a Brooklyn couple filed a civil suit along with Surveillance Technology Oversight Project (STOP), a local privacy watchdog, against the DAS, alleging violations of New York State’s constitutional right to privacy by the NYPD’s persistent mass surveillance and data retention. NYPD officers, the suit claims, can “automatically track an individual across the city using computer vision software, which follows a person from one camera to the next based on descriptors as simple as the color of a piece of clothing.” The technology, they allege, “transforms every patrol officer into a mobile intelligence unit, capable of conducting warrantless surveillance at will.”
Democrats Did Much Better Than Expected
If you’re like me, Steve Kornacki is just as adored by your aunt as he is in your group chats. He’s become a staple of Election Day coverage, putting in long hours at the big board and copious amounts of prep beforehand.
His granular knowledge of key counties and voter turnout trends made him not just indispensable for many Americans on election night, but also a full-blown celebrity. I caught up with him bright and early this morning to talk about Tuesday night’s election results.
We broke down what the returns mean heading into the 2026 midterm elections, where Democrats currently hold an 8 percentage point advantage over Republicans in the latest NBC News poll, and what they say about President Donald Trump’s second-term agenda. We also spoke about what surprised him in the New Jersey governor’s race, whether Trump’s base is weakening, and, of course, New York mayor-elect Zohran Mamdani’s historic win. Heading into the midterms, Kornacki is taking on an expanded role at NBC News following parent company Comcast’s decision to spin off its cable TV properties, including a soon-to-be rebranded MSNBC.
Kornacki is not someone to put too much stock in an off-year election, but the breadth and depth of Democratic victories suggested a political environment that has changed radically in the year since Trump’s election—and if anyone can find the important details to follow going forward, it’s Steve.
This interview has been edited for length and clarity.
WIRED: Steve, thanks for joining us after a long night. Before we get into the meat and potatoes here, let’s start with a quick lightning round: How many hours of sleep were you shooting for, how many did you get, and can you tell us if you have any election night superstitions?
Steve Kornacki: Well, I shoot for zero, so I’m not disappointed and therefore I’m pleasantly surprised with whatever I get, which I think was about two and a half last night.
There we go.
So that’s not too bad. Superstitions? I don’t know about that. My challenge is to just tune out all the anecdotal turnout data on Election Day. I just think it’s a ton of noise that starts messing with your head.
What surprised you from last night?
What surprised me was—it’s probably not the most original observation this morning—but New Jersey. [Representative Mikie Sherrill, the Democratic nominee, won with more than 56 percent of the vote.] The margin there for Sherrill, which is about 13 points, is much more than expected. I mean, I was talking to Democrats right up through Election Day who were telling me some version of: “She’s run a terrible campaign, she’s not been a good candidate. Maybe she’ll still win because of Trump, but this is going to be closer than it should be.” I mean, that was a widely shared view between the two parties, that Sherrill had run a bad campaign and was in danger of even losing, and that was not the case at all.