Tech
Robots that spare warehouse workers the heavy lifting
There are some jobs human bodies just weren’t meant to do. Unloading trucks and shipping containers is a repetitive, grueling task — and a big reason warehouse injury rates are more than twice the national average.
The Pickle Robot Company wants its machines to do the heavy lifting. The company’s one-armed robots autonomously unload trailers, picking up boxes weighing up to 50 pounds and placing them onto onboard conveyor belts for warehouses of all types.
The company name, an homage to The Apple Computer Company, hints at the ambitions of founders AJ Meyer ’09, Ariana Eisenstein ’15, SM ’16, and Dan Paluska ’97, SM ’00. The founders want to make the company the technology leader for supply chain automation.
The company’s unloading robots combine generative AI and machine-learning algorithms with sensors, cameras, and machine-vision software to navigate new environments on day one and improve performance over time. Much of the company’s hardware is adapted from industrial partners. You may recognize the arm, for instance, from car manufacturing lines — though you may not have seen it in bright pickle-green.
The company is already working with customers like UPS, Ryobi Tools, and Yusen Logistics to take a load off warehouse workers, freeing them to solve other supply chain bottlenecks in the process.
“Humans are really good edge-case problem solvers, and robots are not,” Paluska says. “How can the robot, which is really good at the brute force, repetitive tasks, interact with humans to solve more problems? Human bodies and minds are so adaptable, the way we sense and respond to the environment is so adaptable, and robots aren’t going to replace that anytime soon. But there’s so much drudgery we can get rid of.”
Finding problems for robots
Meyer and Eisenstein majored in computer science and electrical engineering at MIT, but they didn’t work together until after graduation, when Meyer started the technology consultancy Leaf Labs, which specializes in building embedded computer systems for things like robots, cars, and satellites.
“A bunch of friends from MIT ran that shop,” Meyer recalls, noting it’s still running today. “Ari worked there, Dan consulted there, and we worked on some big projects. We were the primary software and digital design team behind Project Ara, a smartphone for Google, and we worked on a bunch of interesting government projects. It was really a lifestyle company for MIT kids. But 10 years go by, and we thought, ‘We didn’t get into this to do consulting. We got into this to do robots.’”
When Meyer graduated in 2009, problems like robot dexterity seemed insurmountable. By 2018, the rise of algorithmic approaches like neural networks had brought huge advances to robotic manipulation and navigation.
To figure out what problem to solve with robots, the founders talked to people in industries as diverse as agriculture, food prep, and hospitality. At some point, they started visiting logistics warehouses, bringing a stopwatch to see how long it took workers to complete different tasks.
“In 2018, we went to a UPS warehouse and watched 15 guys unloading trucks during a winter night shift,” Meyer recalls. “We spoke to everyone, and not a single person had worked there for more than 90 days. We asked, ‘Why not?’ They laughed at us. They said, ‘Have you tried to do this job before?’”
It turns out warehouse turnover is one of the industry’s biggest problems, limiting productivity as managers constantly grapple with hiring, onboarding, and training.
The founders raised a seed funding round and built robots that could sort boxes because it was an easier problem that allowed them to work with technology like grippers and barcode scanners. Their robots eventually worked, but the company wasn’t growing fast enough to be profitable. Worse yet, the founders were having trouble raising money.
“We were desperately low on funds,” Meyer recalls. “So we thought, ‘Why spend our last dollar on a warm-up task?’”
With money dwindling, the founders built a proof-of-concept robot that could unload trucks reliably for about 20 seconds at a time and posted a video of it on YouTube. Hundreds of potential customers reached out. The interest was enough to get investors back on board to keep the company alive.
The company piloted its first unloading system for a year with a customer in the desert of California, sparing human workers from unloading shipping containers that can reach temperatures up to 130 degrees Fahrenheit in the summer. It has since scaled deployments with multiple customers and gained traction among third-party logistics centers across the U.S.
The company’s robotic arm is made by the German industrial robotics giant KUKA. The robots are mounted on a custom mobile base with onboard computing systems so they can navigate to docks and adjust their positions inside trailers autonomously while lifting. The end of each arm features a suction gripper that clings to packages and moves them to the onboard conveyor belt.
The company’s robots can pick up boxes ranging in size from 5-inch cubes to 24-by-30-inch boxes. The robots can unload anywhere from 400 to 1,500 cases per hour depending on size and weight. The company fine-tunes pre-trained generative AI models and uses a number of smaller models to ensure the robot runs smoothly in every setting.
The company is also developing a software platform it can integrate with third-party hardware, from humanoid robots to autonomous forklifts.
“Our immediate product roadmap is load and unload,” Meyer says. “But we’re also hoping to connect these third-party platforms. Other companies are also trying to connect robots. What does it mean for the robot unloading a truck to talk to the robot palletizing, or for the forklift to talk to the inventory drone? Can they do the job faster? I think there’s a big network coming in which we need to orchestrate the robots and the automation across the entire supply chain, from the mines to the factories to your front door.”
“Why not us?”
The Pickle Robot Company employs about 130 people in its office in Charlestown, Massachusetts, where a standard — if green — office gives way to a warehouse where its robots can be seen loading boxes onto conveyor belts alongside human workers and manufacturing lines.
This summer, Pickle will be ramping up production of a new version of its system, with further plans to begin designing a two-armed robot sometime after that.
“My supervisor at Leaf Labs once told me ‘No one knows what they’re doing, so why not us?’” Eisenstein says. “I carry that with me all the time. I’ve been very lucky to be able to work with so many talented, experienced people in my career. They all bring their own skill sets and understanding. That’s a massive opportunity — and it’s the only way something as hard as what we’re doing is going to work.”
Moving forward, the company sees many other robot-shaped problems for its machines.
“We didn’t start out by saying, ‘Let’s load and unload a truck,’” Meyer says. “We said, ‘What does it take to make a great robot business?’ Unloading trucks is the first chapter. Now we’ve built a platform to make the next robot that helps with more jobs, starting in logistics but then ultimately in manufacturing, retail, and hopefully the entire supply chain.”
Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World
Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.
LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said in an interview with WIRED.
The financing, which values the startup at $3.5 billion, was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel.
AMI (pronounced like the French word for friend) aims to build “a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,” the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025.
LeCun’s startup represents a bet against many of the world’s biggest AI labs like OpenAI, Anthropic, and even his former workplace, Meta, which believe that scaling up LLMs will eventually deliver AI systems with human-level intelligence or even superintelligence. LLMs have powered viral products such as ChatGPT and Claude Code, but LeCun has been one of the AI industry’s most prominent researchers speaking out about the limitations of these AI models. LeCun is well known for being outspoken, but as a pioneer of modern AI who won the Turing Award in 2018, his skepticism carries weight.
LeCun says AMI aims to work with companies in manufacturing, biomedicine, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to optimize for efficiency, minimize emissions, or ensure reliability.
AMI was cofounded by LeCun and several leaders he worked with at Meta, including the company’s former director of research science, Michael Rabbat; former vice president of Europe, Laurent Solly; and former senior director of AI research, Pascale Fung. Other cofounders include Alexandre LeBrun, former CEO of the AI health care startup Nabla, who will serve as AMI’s CEO, and Saining Xie, a former Google DeepMind researcher who will be the startup’s chief science officer.
The Case for World Models
LeCun does not dismiss the overall utility of LLMs. Rather, in his view, these AI models are simply the tech industry’s latest promising trend, and their success has created a “kind of delusion” among the people who build them. “It’s true that [LLMs] are becoming really good at generating code, and it’s true that they are probably going to become even more useful in a wide area of applications where code generation can help,” says LeCun. “That’s a lot of applications, but it’s not going to lead to human-level intelligence at all.”
LeCun has been working on world models for years inside of Meta, where he founded the company’s Fundamental AI Research lab, FAIR. But he’s now convinced his research is best done outside the social media giant. He says it’s become clear to him that the strongest business for world models will be selling them to other enterprises, which doesn’t fit neatly into Meta’s core consumer business.
As AI world models like Meta’s Joint-Embedding Predictive Architecture (JEPA) became more sophisticated, “there was a reorientation of Meta’s strategy where it had to basically catch up with the industry on LLMs and kind of do the same thing that other LLM companies are doing, which is not my interest,” says LeCun. “So sometime in November, I went to see Mark Zuckerberg and told him. He’s always been very supportive of [world model research], but I told him I can do this faster, cheaper, and better outside of Meta. I can share the cost of development with other companies … His answer was, OK, we can work together.”
Nvidia Is Planning to Launch an Open-Source AI Agent Platform
Nvidia is planning to launch an open-source platform for AI agents, people familiar with the company’s plans tell WIRED.
The chipmaker has been pitching the product, referred to as NemoClaw, to enterprise software companies. The platform will allow these companies to dispatch AI agents to perform tasks for their own workforces. Companies will be able to access the platform regardless of whether their products run on Nvidia’s chips, sources say.
The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It’s unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it’s likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform.
Nvidia did not respond to a request for comment. Representatives from Cisco, Google, Adobe, and CrowdStrike also did not respond to requests for comment. Salesforce did not provide a statement prior to publication.
Nvidia’s interest in agents comes as people are embracing “claws,” or open-source AI tools that run locally on a user’s machine and perform sequential tasks. Claws are often described as self-learning, in that they’re supposed to automatically improve over time. Earlier this year, an AI agent known as OpenClaw—which was first called Clawdbot, then Moltbot—captivated Silicon Valley due to its ability to run autonomously on personal computers and complete work tasks for users. OpenAI ended up acquiring the project and hiring the creator behind it.
OpenAI and Anthropic have made significant improvements in model reliability in recent years, but their chatbots still require hand-holding. Purpose-built AI agents or claws, on the other hand, are designed to execute multiple steps without as much human supervision.
The usage of claws within enterprise environments is controversial. WIRED previously reported that some tech companies, including Meta, have asked employees to refrain from using OpenClaw on their work computers, due to the unpredictability of the agents and potential security risks. Last month a Meta employee who oversees safety and alignment for the company’s AI lab publicly shared a story about an AI agent going rogue on her machine and mass deleting her emails.
For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It’s also another step in the company’s embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia’s software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia’s GPUs and has created a crucial “moat” for the company.
Last month The Wall Street Journal reported that Nvidia also plans to reveal a new chip system for inference computing at its developer conference. The system will incorporate a chip designed by the startup Groq, which Nvidia entered into a multibillion-dollar licensing agreement with late last year.
Paresh Dave and Maxwell Zeff contributed to this report.
Anthropic Claims Pentagon Feud Could Cost It Billions
Anthropic executives allege that current customers and prospective ones have been demanding new terms and even backing out of negotiations since the US Department of Defense labeled the AI startup a supply-chain risk late last month, according to court papers that also revealed new financial details about the company.
Hundreds of millions of dollars in expected revenue this year from work tied to the Pentagon is already at risk for Anthropic, the company’s chief financial officer, Krishna Rao, wrote in a court filing on Monday. But if the government has its way and pressures a broad range of companies, regardless of any ties to the military, to stop doing business with the AI startup, Anthropic could ultimately lose billions of dollars in sales, he stated. Its all-time sales, since commercializing its technology in 2023, exceed $5 billion, according to Rao.
Anthropic’s revenue exploded as its Claude models began outperforming rivals and showing advanced capabilities in areas such as generating software code. But the company spends heavily on computing infrastructure and remains deeply unprofitable. Rao specified that Anthropic has spent over $10 billion to train and deploy its models.
Anthropic chief commercial officer Paul Smith provided several examples of partners who have privately raised concerns with the AI startup in recent days. He said a financial services customer paused negotiations over a $15 million deal because of the supply-chain label, and two leading financial services companies have refused to close deals valued together at $80 million unless they gain the right to unilaterally cancel their contracts for any reason. A grocery store chain canceled a sales meeting, citing the supply-chain-risk designation, Smith added.
“All have taken steps that reflect deep distrust and a growing fear of associating with Anthropic,” Smith wrote.
The executives’ comments are part of statements from six Anthropic leaders in support of a preliminary order that would allow the San Francisco company to continue doing business with the Department of Defense until lawsuits about the supply-chain-risk issue are resolved.
Anthropic has sued the Trump administration in two courts. A lawsuit filed in San Francisco federal court on Monday alleges the government violated the company’s free speech rights. A separate case filed Monday in the federal appeals court in Washington, DC, accuses the Defense Department of unfairly discriminating and retaliating against Anthropic.
The company is seeking a hearing as soon as Friday in San Francisco for a temporary reprieve. The legal battle and sales fallout follow a weeks-long dispute between Anthropic and the Pentagon over the potential use of AI technologies for mass domestic surveillance and autonomous lethal weapons. Anthropic contends AI is not yet capable of safely undertaking those tasks, while the Pentagon wants the right to make that judgment on its own.
By law, the supply-chain designation prevents a narrow set of companies that do business with the Pentagon from incorporating Anthropic into their systems. But Defense Secretary Pete Hegseth has cast a wider net. He posted on X late last month that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Rao wrote that the Pentagon reinforced the message by reaching out to several startups about their use of Claude, outreach he said he learned about from an investor that Anthropic and the smaller companies all share. They “have grown worried and uncertain about their ability to use Claude,” Rao wrote.
The Pentagon declined to comment on the lawsuits and did not immediately respond to a request for comment about Rao’s allegation about the outreach.
