Tech
OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist
OpenAI employees in San Francisco were told to stay inside the office on Friday afternoon after the company purportedly received a threat from an individual who was previously associated with the Stop AI activist group.
“Our information indicates that [name] from StopAI has expressed interest in causing physical harm to OpenAI employees,” a member of the internal communications team wrote on Slack. “He has previously been on site at our San Francisco facilities.”
Just before 11 am, San Francisco police received a 911 call about a man allegedly making threats and intending to harm others at 550 Terry Francois Boulevard, which is near OpenAI’s offices in the Mission Bay neighborhood, according to data tracked by the crime app Citizen. A police scanner recording archived on the app describes the suspect by name and alleges he may have purchased weapons with the intention of targeting additional OpenAI locations.
Hours before the incident on Friday, the individual whom police flagged as allegedly making the threat said in a social media post that he was no longer part of Stop AI.
WIRED reached out to the man in question but did not immediately receive a response. San Francisco police also did not immediately respond to a request for comment. OpenAI did not provide a statement prior to publication.
On Slack, the internal communications team provided three images of the man suspected of making the threat. Later, a high-ranking member of the global security team said, “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.
Over the past couple of years, protesters affiliated with groups calling themselves Stop AI, No AGI, and Pause AI have held demonstrations outside the San Francisco offices of several AI companies, including OpenAI and Anthropic, over concerns that the unfettered development of advanced AI could harm humanity. In February, protesters were arrested for locking the front doors to OpenAI’s Mission Bay office. Earlier this month, Stop AI claimed its public defender was the man who jumped onstage to serve a subpoena to OpenAI CEO Sam Altman during an interview in San Francisco.
In a Pause AI press release from last year, the individual who police said was alleged to have made the threat against OpenAI staffers is described as an organizer and quoted as saying that he would find “life not worth living” if AI technologies were to replace humans in making scientific discoveries and taking over jobs. “Pause AI may be viewed as radical amongst AI people and techies,” he said. “But it is not radical amongst the general public, and neither is stopping AGI development altogether.”
Dark Matter May Be Made of Black Holes From Another Universe
A recent cosmological model combines two of the most eccentric ideas in contemporary physics to explain the nature of dark matter, the invisible substance that makes up about 85 percent of all matter in the universe. To understand it, it’s necessary to look beyond the Big Bang we all know and consider two concepts that rarely intersect: cyclic universes and primordial black holes.
A Different Kind of Multiverse
There are different versions of the “multiverse.” The most popular model—that of the Marvel Cinematic Universe—proposes that there are as many universes as there are possibilities and that these versions of reality are parallel. Physics proposes something more sober and mathematically consistent: the cosmic bounce.
In this model, the universe is not born from a singularity, but expands, contracts, and expands again in an endless cycle. Each “universe” is not parallel, but sequential—that is, one arises from the ashes of the previous one.
Is it possible for something to survive the end of its universe and endure into the next? According to a paper published in Physical Review D, yes. Author Enrique Gaztanaga, a research professor at the Institute of Space Sciences in Barcelona, shows that any structure larger than about 90 meters could pass through the final collapse of a universe and survive the rebound. These “relics” would not only persist, but could also seed the formation of giant, unexplained structures observed in the early stages of the present-day universe. Moreover, they could be the key to understanding dark matter.
For decades, the dominant explanation for dark matter has been that it is an unknown particle or particles. But after years of experiments without direct detections, physicists have begun to explore alternatives. One of them proposes that dark matter is not an exotic particle, but an abundant population of small black holes that we overlook.
The idea is appealing, but it has a serious problem. For these black holes to explain dark matter, they would have to exist from the earliest moments of the universe, long before the first stars could collapse. There are indications that these objects could exist, but a convincing physical mechanism to explain their origin is lacking.
A Universe Born With Black Holes
This is where Gaztanaga’s newly proposed model shines. If the cosmic bounce allows compact structures to survive the collapse of the previous universe, then the current universe would have been born with pre-existing black holes. They would not have to have been generated by extreme fluctuations or finely tuned inflationary processes, but would simply have been there from the first instant.
The assumption has the potential to solve two riddles at once: the origin of black holes and the nature of dark matter. If this model is correct, dark matter would not be a mystery of the early universe but rather a legacy of a cosmos that predates our own.
“Much work remains to be done,” Gaztanaga, also a researcher at the Institute of Cosmology and Gravitation at the University of Portsmouth, said in an article for The Conversation. “These ideas must be tested against data—from gravitational-wave backgrounds to galaxy surveys and precision measurements of the cosmic microwave background.”
“But the possibility is profound,” he added. “The universe may not have begun once, but may have rebounded. And the dark structures shaping galaxies today could be relics from a time before the Big Bang.”
This story originally appeared in WIRED en Español and has been translated from Spanish.
Europe’s Online Age Verification App Is Here
The European online age verification app is ready.
The app works with passports or ID cards, is built to be “completely anonymous” for the people who use it, works on any device (smartphones, tablets, and PCs), and is open source. “Best of all, online platforms can easily rely on our age verification app, so there are no more excuses,” said European Commission president Ursula von der Leyen at a press conference on Wednesday. “Europe offers a free and easy-to-use solution that can protect our children from harmful and illegal content.”
High Expectations
“It is our duty to protect our children in the online world just as we do in the offline world. And to do that effectively, we need a harmonized European approach,” von der Leyen said at Wednesday’s press conference. “And one of the central issues is the question, how can we ensure a technical solution for age verification that is valid throughout Europe? Today, I can announce that we have the answer.”
This answer takes the form of an open source app that any private company can repurpose, as long as it complies with European privacy standards and offers the same technical solution throughout the European Union. The user downloads the app, agrees to the terms and conditions, sets up a PIN or biometric access, and proves their age through an electronic identification system or by showing a passport or ID card (in which case biometric verification is also performed). The app does not store your name, date of birth, ID number, or any other personal information, according to the European Commission—only the fact that you are over a certain age.
After that, when a person using the app wants to access a social network (minimum age: 13), a pornographic site (minimum age: 18), or any other age-protected content, if they are logged in from a computer, they need only scan the QR code shown on the site they want to visit. If, on the other hand, the person logs in from a smartphone, the app sends the proof of age directly. The platform never accesses the document with which the user proved their age in the first place.
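The Commission has not published the app’s cryptographic details in this article, but the core idea—a platform verifies a signed claim that says only “over N,” never who the user is—can be illustrated with a minimal sketch. Everything here (the function names, the JSON claim shape, and the use of a shared HMAC key in place of the real EU digital-identity scheme) is a hypothetical simplification, not the actual protocol.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical issuer key. In reality the trust anchor would come from
# the EU digital-identity framework, not a shared secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue_proof(min_age: int) -> dict:
    """Issue a signed claim stating only that the holder meets min_age.

    Note what is absent: no name, no birth date, no document number.
    """
    claim = {"over": min_age, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_verify(proof: dict, required_age: int) -> bool:
    """The platform checks the signature and the age threshold -- nothing else."""
    payload = json.dumps(proof["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, proof["sig"])
            and proof["claim"]["over"] >= required_age)
```

In this toy version, a proof issued for an 18+ check also satisfies a 13+ check, and any tampering with the claim invalidates the signature—which mirrors the privacy property the Commission describes: the site learns a yes/no answer about an age threshold and nothing more.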
Adoption Event
The need to introduce a common system for the entire European Union has been discussed for some time, and according to commission technicians, the technical work is now complete. Of course, it will still be possible to circumvent the system—all it takes is for an adult to lend their phone to a younger friend—but the technological architecture exists, and it will be up to EU member states to decide whether to integrate it into national digital wallets or develop independent apps.
“No More Excuses”
For the app to really be effective, platforms must be obligated to verify the age of their users—that’s where things get tricky. The Digital Services Act, which went into effect in 2024, requires “very large online platforms”—those with more than 45 million monthly users in the European Union—to take concrete steps to mitigate systemic risks related to child protection, with heavy penalties for noncompliance.
“And that’s why Europe has the DSA: to call online platforms to their responsibilities. Because Europe will not tolerate platforms making money at the expense of our children,” European Commission executive vice president Henna Virkkunen told a press conference. She added that after an investigation into TikTok, the European institutions plan to take similar action against Facebook, Instagram, and Snapchat, as well as four porn sites. “Since the platforms do not have adequate age verification tools, we developed the solution ourselves,” she concluded. In short, as von der Leyen also remarked, “there are no more excuses.”
Bare Minimum
So far, this is the European framework that sets the general rules. On this basis, member states can consider more restrictive measures. Italy was among the first to discuss how to regulate the use of social media by minors but has so far not landed on anything concrete. Elsewhere in the EU, French president Emmanuel Macron has been a trailblazer on the issue, pushing for a rule that would ban social networks entirely for minors under the age of 15. So far, this measure has received broad political support—but the outcome depends largely on compatibility with the Digital Services Act and the availability of effective age verification systems like the app the European Commission just released.
This article originally appeared on WIRED Italia and has been translated.
Anthropic Plots Major London Expansion
Anthropic is moving into a new London office as it seeks to expand its research and commercial footprint in Europe, setting up a scrap between the leading AI labs for talent emerging from British universities.
The company, which opened its first London office in 2023, is moving to the same neighborhood as Google DeepMind, OpenAI, Meta, Wayve, Isomorphic Labs, Synthesia, and various AI research institutions.
Anthropic’s new 158,000-square-foot office will have enough space for 800 people—four times its current head count—giving it room to potentially outscale OpenAI, which itself recently announced an expansion in London.
“Europe’s largest businesses and fastest-growing startups are choosing Claude, and we’re scaling to match,” says Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent—we want to be where all of that comes together.”
UK government officials had reportedly attempted to coax Anthropic into expanding its presence in London after the company recently fell out with the US administration. Anthropic refused to allow its models to be used in mass surveillance and autonomous weapon systems, leading to an ongoing legal battle between the AI lab and the Pentagon.
As part of the expansion, Anthropic says it will deepen its work with the UK’s AI Security Institute, a government body that this week published a risk evaluation of its latest model, Claude Mythos Preview. According to Politico, the UK government is one of few across Europe to have been granted access to the model, which Anthropic has released to only select parties, citing concerns over the potential for its abuse by cybercriminals.
The increasing concentration of AI companies in the same London district is an important step in creating a pathway for research to translate into AI products, says Geraint Rees, vice-provost at University College London, whose campus is around the corner from Anthropic’s new office.
“This cluster didn’t emerge from a planning document. It grew because serious researchers and companies understand that proximity isn’t a nice-to-have,” he said last month, speaking at an event attended by WIRED. “That’s how the innovation system actually works. It’s not a clean, linear transfer from lab to market. It’s messier, richer, more human than that.”