Tech
This Startup Wants to Build Self-Driving Car Software—Super Fast
For the last year and a half, two hacked white Tesla Model 3 sedans each loaded with five extra cameras and one palm-sized supercomputer have quietly cruised around San Francisco. In a city and era swarming with questions about the capabilities and limits of artificial intelligence, the startup behind the modified Teslas is trying to answer what amounts to a simple question: How quickly can a company build autonomous vehicle software today?
The startup, which is making its activities public for the first time today, is called HyprLabs. Its 17-person team (just eight of them full-time) is divided between Paris and San Francisco, and the company is helmed by an autonomous vehicle company veteran, Zoox cofounder Tim Kentley-Klay, who suddenly exited the now Amazon-owned firm in 2018. Hypr has taken in relatively little funding, $5.5 million since 2022, but its ambitions are wide-ranging. Eventually, it plans to build and operate its own robots. “Think of the love child of R2-D2 and Sonic the Hedgehog,” Kentley-Klay says. “It’s going to define a new category that doesn’t currently exist.”
For now, though, the startup is announcing its software product called Hyprdrive, which it bills as a leap forward in how engineers train vehicles to pilot themselves. These sorts of leaps are all over the robotics space, thanks to advances in machine learning that promise to bring down the cost of training autonomous vehicle software, and the amount of human labor involved. This training evolution has brought new movement to a space that for years suffered through a “trough of disillusionment,” as tech builders failed to meet their own deadlines to operate robots in public spaces. Now, robotaxis pick up paying passengers in more and more cities, and automakers make newly ambitious promises about bringing self-driving to customers’ personal cars.
But using a small, agile, and cheap team to get from “driving pretty well” to “driving much more safely than a human” is its own long hurdle. “I can’t say to you, hand on heart, that this will work,” Kentley-Klay says. “But what we’ve built is a really solid signal. It just needs to be scaled up.”
Old Tech, New Tricks
HyprLabs’ software training technique is a departure from other robotics startups’ approaches to teaching their systems to drive themselves.
First, some background: For years, the big battle in autonomous vehicles seemed to be between those who used just cameras to train their software—Tesla!—and those who depended on other sensors, too—Waymo, Cruise!—including once-expensive lidar and radar. But below the surface, larger philosophical differences churned.
Camera-only adherents like Tesla wanted to save money while scheming to launch a gigantic fleet of robots; for a decade, CEO Elon Musk’s plan has been to suddenly switch all of his customers’ cars to self-driving ones with the push of a software update. The upside was that these companies had lots and lots of data, as their not-yet self-driving cars collected images wherever they drove. This information got fed into what’s called an “end-to-end” machine learning model and refined through reinforcement. The system takes in images—a bike—and spits out driving commands—move the steering wheel to the left and go easy on the acceleration to avoid hitting it. “It’s like training a dog,” says Philip Koopman, an autonomous vehicle software and safety researcher at Carnegie Mellon University. “At the end, you say, ‘Bad dog’ or ‘Good dog.’”
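The “training a dog” idea can be sketched in a few lines of code. This is a deliberately toy illustration, not how Tesla, HyprLabs, or any real system is built: perception features go straight in, driving commands come out, and a single scalar reward (“good dog” / “bad dog”) nudges the mapping. All names, sizes, and the linear policy are invented assumptions; real end-to-end systems use deep networks and far richer training signals.

```python
import numpy as np

# Toy "end-to-end" driving policy: features in, commands out.
# Everything here is illustrative; no real system works this simply.
rng = np.random.default_rng(0)

N_FEATURES = 8  # stand-in for features extracted from camera images
# One weight row per command: row 0 -> steering, row 1 -> throttle.
weights = rng.normal(0.0, 0.1, (2, N_FEATURES))

def policy(features):
    """Map perception features directly to (steering, throttle) in [-1, 1]."""
    steering, throttle = np.tanh(weights @ features)
    return steering, throttle

def reinforce(features, reward, lr=0.1):
    """'Good dog / bad dog' update: a positive scalar reward strengthens
    the current feature-to-command mapping; a negative one weakens it."""
    global weights
    action = np.array(policy(features))
    weights += lr * reward * np.outer(action, features)
```

After a frame that earns a positive reward, the same frame produces a more confident version of the same command; a negative reward pushes the commands back toward neutral. That single scalar signal, applied over millions of frames, is the “end of the leash” Koopman’s analogy describes.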
‘Orbs,’ ‘Saucers,’ and ‘Flashes’ on the Moon: Pentagon Drops New UFO Files
Trump first teased the release in February in a Truth Social post. The Pentagon coordinated the release in partnership with the White House, Director of National Intelligence Tulsi Gabbard, the Energy Department, NASA, and the FBI. Many of the files in this new drop contain documents that are already publicly available. However, some versions of these known documents in the new files contain more pages, or fewer redactions, than previously released versions.
More than 60 percent of Americans believe that the government is concealing information about UAP, according to YouGov, while 40 percent think UAP are likely alien in origin, according to Gallup. Congress has held hearings into whether there’s been a decades-long program to recover “non-human” technologies, yet evidence remains elusive.
“If it’s just more blobby photos or redacted documents that don’t have any details in them, it’s more of the same,” Adam Frank, an astrophysicist at the University of Rochester who studies the search for alien life, says of the new files. “What we need are actual scientific results from the investigations that should have been done if the most extraordinary claims being made are true.”
The document drop follows a week of high-profile discussions of aliens, including Stephen Colbert’s interview with former President Barack Obama, released on Wednesday. Obama cast doubt on government cover-ups about aliens by joking that “some guy guarding the installation would have taken a selfie with the alien and sent it to his girlfriend.”
Members of the Artemis II crew also second-guessed the idea of a vast government-wide conspiracy to hide the discovery of extraterrestrial life in a discussion with The Daily this week.
“Do you realize that if we found alien life out there, and we came back and reported on it, NASA would never have a budget issue for the rest of eternity?” said Reid Wiseman, the commander of Artemis II. “So trust me.”
Victor Glover, the astronaut who piloted the mission, added: “Why would we hide that from you?”
Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’
Philosopher Nick Bostrom recently posted a paper in which he postulates that a small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of “its universal death sentence.” That upbeat gamble is quite a leap from his earlier dark musings on AI, which made him a godfather of the doomer movement. His 2014 book Superintelligence was an early examination of AI’s existential risk. One memorable thought experiment: An AI tasked with making paper clips winds up destroying humanity because all those resource-needy people are an impediment to paper clip production. His more recent book, Deep Utopia, reflects a shift in his focus. Bostrom, who leads Oxford’s Future of Humanity Institute, dwells on the “solved world” that comes if we get AI right.
STEVEN LEVY: Deep Utopia is more optimistic than your previous book. What changed for you?
NICK BOSTROM: I call myself a fretful optimist. I am very excited about the potential for radically improving human life and unlocking possibilities for our civilization. That’s consistent with the real possibility of things going wrong.
You wrote a paper with a striking argument: Since we’re all going to die anyway, the worst that can happen with AI is that we die sooner. But if AI works out, it might extend our lives, maybe indefinitely.
That paper explicitly looks at only one aspect of this. In any given academic paper, you can’t address life, the universe, and the meaning of everything. So let’s just look at this little issue and try to nail that down.
That isn’t a little issue.
I guess I’ve been irked by some of the arguments made by doomers who say that if you build AI, you’re going to kill me and my children and how dare you. Like the recent book If Anyone Builds It, Everyone Dies. Even more probable is that if nobody builds it, everyone dies! That’s been the experience for the past several hundred thousand years.
But in the doomer scenario everybody dies and there’s no more people being born. Big difference.
I have obviously been very concerned with that. But in this paper, I’m looking at a different question, which is, what would be best for the currently existing human population like you and me and our families and the people in Bangladesh? It does seem like our life expectancy would go up if we develop AI, even if it is quite risky.
In Deep Utopia you speculate that AI could create incredible abundance, so much that humanity might have a huge problem with finding purpose. I live in the United States. We’re a very rich country, but our government, ostensibly with support of the people, has policies that deny services to the poor and distribute rewards to the rich. I think that even if AI was able to provide abundance for everyone, we would not supply it to everyone.
You might be right. Deep Utopia takes as its starting point the postulation that everything goes extremely well. If we do a reasonably good job on governance, everybody gets a share. There is quite a deep philosophical question of what a good human life would look like under these ideal circumstances.
The meaning of life is something you hear a lot about in Woody Allen movies and maybe in the philosophy community. I’m worried more about the wherewithal to support oneself and get a stake in this abundance.
The book is not only about meaning. That’s one out of a bunch of different values that it considers. This could be a wonderful emancipation from the drudgery that humans have been subjected to. If you have to give up, say, half of your waking hours as an adult just to make ends meet, doing some work you don’t enjoy and that you don’t believe in, that’s a sad condition. Society is so used to it that we’ve invented all kinds of rationalizations around it. It’s like a partial form of slavery.
There’s a Long-Shot Proposal to Protect California Workers From AI
Billionaire California gubernatorial candidate Tom Steyer is rolling out a new proposal that would guarantee jobs with benefits for workers displaced by artificial intelligence. He’s the first statewide candidate to make such a pledge.
The plan, which builds on a broader AI policy framework Steyer released in March, promises to make California “the first major economy in the world” to ensure “good-paying” jobs for workers impacted by AI. To do so, Steyer tells WIRED he plans to build off a previous proposal to introduce a “token tax,” which would charge big tech companies “a fraction of a cent for every unit of data processed” for AI. The funding generated by that tax would go to what Steyer has called the Golden State Sovereign Wealth Fund, with some of that money earmarked for jobs building housing, providing health care, and modernizing California’s energy infrastructure.
“The aim of the initiative will be to strengthen the foundation of the state’s economy, invest in our communities, and create beautiful, vibrant public spaces,” states a campaign memo viewed by WIRED. “To support these efforts, Tom will also invest heavily in training and apprenticeship programs across the state.”
The new plan also intends to expand unemployment insurance and establish a new agency, the AI Worker Protection Administration, made up of union leaders, academics, and technologists, which would adopt rules to protect workers’ rights, the memo says.
“People all over this state are terrified that AI is going to hollow out this whole economy and they’re going to lose their jobs. Young people are worried they’ll never get a job,” Steyer tells WIRED. “We believe this can be an amazing transformational technology in many ways, but we’re not in the business of leaving people in California behind.”
Steyer’s job guarantee comes as lawmakers across the state and federal levels—and even some AI executives—scramble to address the ramifications of widespread AI adoption across the US workforce. In New Jersey, state senator Troy Singleton recently put out a bill that would require companies that replace workers with AI to contribute to a fund that would pay to retrain those workers. In Congress, there are a handful of proposals for grants and tax credits for companies to provide AI training to existing employees.
Dario Amodei, CEO of Anthropic, has previously suggested the kind of token tax that Steyer is now proposing. “Obviously, that’s not in my economic interest,” Amodei told Axios last year. “But I think that would be a reasonable solution to the problem.” In April, OpenAI proposed a public wealth fund similar to the one Steyer has rolled out.
Steyer’s announcement comes days after Democratic primary opponent Xavier Becerra—former Health and Human Services secretary under president Joe Biden—offered his own AI plan. In that proposal, Becerra calls for “workforce investment and transition support” but doesn’t provide a specific funding mechanism.
“Displacement without support is abandonment,” Becerra said in a Monday memo outlining his plan. “I will work with the Legislature, the California public education system and industry partners to build accessible, stackable workforce programs that prepare Californians for the AI economy and support workers navigating role changes.”
Over the past few months, the White House has threatened to go after states that choose to regulate AI. In December, President Donald Trump signed an executive order that could revoke federal broadband funding from states that approve “onerous” AI laws. This is happening in local races as well: In New York, a super PAC backed by a number of Silicon Valley powerhouses, including OpenAI cofounder Greg Brockman, has targeted Alex Bores, a Manhattan congressional candidate who has made AI regulation the centerpiece of his campaign.
“Not regulating AI doesn’t seem remotely reasonable,” Steyer says. “But if California wants to lead, we’ve got to have a vision for the future that includes something that is not just about letting entrepreneurs get rich at the expense of everybody else.”