
Spit On, Sworn At, and Undeterred: What It’s Like to Own a Cybertruck

Then I got my wife the Model S for Christmas. I started driving that around, and I’m like, I kind of like this. I put an order back in for the Cybertruck and I started building the excitement after that.

How do you feel about it now?

Oh, I love it. Now, everything else to me—and I’m not talking down on anybody else’s stuff, I still love a lot of other vehicles out there—but everything else, to me, those soft lines and everything, it all kind of blends together. And the Cybertruck obviously stands out. I mean, we take the trailer out a lot, and I can go to a campground and there’s 50 kids that come out: “Cybertruck, Cybertruck, Cybertruck.” I carry little toys inside the frunk so I can pass them out and give them to kids, and they love it. It’s a lot of fun.

Anything you don’t like about it?

I can’t really see the front out of the windshield because it’s so long.

What’s the Cybertruck community like?

When I had my Bentley and I met other people with Bentleys or Rolls-Royces, it was exclusive. They were a little standoffish to other people with other vehicles. I’ve learned that with Cybertruck owners, it’s like, “Hey, you want to see it? Come on. You want to test drive? Come on.” They’re more inclusive.

What’s the biggest reaction you’ve gotten from someone while driving it?

A couple of months ago, I think it was in Idaho, my son and I stopped at this place where they had a bunch of bears. It was almost infested with bears, it was kind of gross. This person literally drove through the grass, through the bears, and cut off the other cars and was behind me following me out. And I’m like, dude, who the heck is this?



Method teaches generative AI models to locate personalized objects

In-context personalized localization involves localizing object instances present in a scene (or query image) similar to the object presented as an in-context example. In this setting, the input to the model is a category name, in-context image, bounding box coordinates, and a query image. The model is tasked with localizing the same category of interest (presented as an in-context example) in the query image. Here, we visualize a few inputs and outputs from various VLMs highlighting that our fine-tuned model better captures the information in the in-context image. Credit: arXiv (2024). DOI: 10.48550/arxiv.2411.13317

Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog owner to do while onsite.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.

To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.

Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.

When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.
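As a rough illustration of the setup described above, one in-context localization example bundles a name for the object, a few reference images with bounding boxes, and a query image. The sketch below shows how such an example might be structured and turned into a prompt for a VLM; the field names and prompt wording are illustrative assumptions, not the interface used in the paper.

```python
# Hypothetical structure for one in-context personalized-localization example.
# Field names and prompt wording are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InContextExample:
    name: str                          # e.g. "Bowser" or a generic category
    image_path: str                    # reference image containing the object
    bbox: Tuple[int, int, int, int]    # (x_min, y_min, x_max, y_max) in pixels

@dataclass
class LocalizationQuery:
    examples: List[InContextExample]   # a few reference views of the same object
    query_image_path: str              # new scene in which to find that object

def build_prompt(query: LocalizationQuery) -> str:
    """Assemble the text part of the prompt; the images themselves would be
    passed to the VLM alongside this text."""
    lines = []
    for i, ex in enumerate(query.examples):
        lines.append(f"Image {i}: '{ex.name}' is at bounding box {ex.bbox}.")
    lines.append("Query image: return the bounding box of the same object.")
    return "\n".join(lines)

prompt = build_prompt(LocalizationQuery(
    examples=[InContextExample("Bowser", "bowser_home.jpg", (40, 60, 210, 300))],
    query_image_path="dog_park.jpg",
))
```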

Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.

This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.

“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique posted to the arXiv preprint server.

Mirza is joined on the paper by co-lead authors Sivan Doveh, a graduate student at Weizmann Institute of Science; and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision (ICCV 2025), held Oct 19–23 in Honolulu, Hawai’i.

An unexpected shortcoming

Researchers have found that large language models (LLMs) can excel at learning from context. If they feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.

A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.

“The field has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.

The researchers set out to improve a VLM’s ability to perform in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.

Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.

“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.

To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.

They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.

“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
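To make that curation step concrete, a single fine-tuning sample could be assembled from one tracking clip roughly as sketched below; the frame-spacing rule, sample layout and question wording are assumptions for illustration, not the authors’ exact pipeline.

```python
# Illustrative assembly of one fine-tuning sample from a video-tracking clip:
# several frames of the same tracked object, each paired with a location
# question and answer, plus a held-out query frame.
from typing import Dict, List, Tuple

Frame = Tuple[str, Tuple[int, int, int, int]]   # (frame_path, tracked bbox)

def make_sample(track: List[Frame], object_name: str,
                n_context: int = 3, frame_gap: int = 30) -> Dict:
    """Sample frames far enough apart that the background changes, use all
    but the last as in-context examples, and ask about the last one."""
    spaced = track[::frame_gap]                  # enforce visual diversity
    picked = spaced[: n_context + 1]
    context, (query_image, answer_box) = picked[:-1], picked[-1]
    question = f"Where is {object_name}?"
    return {
        "context": [{"image": img, "question": question,
                     "answer": f"{object_name} is at {box}."}
                    for img, box in context],
        "query": {"image": query_image, "question": question,
                  "answer": f"{object_name} is at {answer_box}."},
    }
```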

Forcing the focus

But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.

For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.

To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”

“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
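A minimal sketch of that pseudo-naming step, applied to the sample layout sketched earlier; the name pool and the simple string replacement are illustrative assumptions.

```python
# Swap the real category name for an arbitrary pseudo-name in every question
# and answer, so the model cannot lean on label knowledge from pretraining.
import random

PSEUDO_NAMES = ["Charlie", "Milo", "Nova", "Pixel", "Juno"]   # hypothetical pool

def anonymize_sample(sample: dict, real_name: str, seed: int = 0) -> dict:
    alias = random.Random(seed).choice(PSEUDO_NAMES)
    swap = lambda text: text.replace(real_name, alias)
    return {
        "context": [{"image": t["image"], "question": swap(t["question"]),
                     "answer": swap(t["answer"])} for t in sample["context"]],
        "query": {"image": sample["query"]["image"],
                  "question": swap(sample["query"]["question"]),
                  "answer": swap(sample["query"]["answer"])},
    }
```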

The researchers also faced challenges in finding the best way to prepare the data. If the frames are too close together, the background would not change enough to provide data diversity.

In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12% on average. When they included the dataset with pseudo-names, the performance gains reached 21%.

As model size increases, their technique leads to greater performance gains.

In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.

“This work reframes few-shot personalized object localization—adapting on the fly to the same object across new scenes—as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs.

“Given the immense significance of quick, instance-specific grounding—often without finetuning—for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.

More information:
Sivan Doveh et al, Teaching VLMs to Localize Specific Objects from In-context Examples, arXiv (2025). DOI: 10.48550/arxiv.2411.13317

Journal information:
arXiv


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Method teaches generative AI models to locate personalized objects (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-method-generative-ai-personalized.html

Breakthrough quantum-secure link protects data using the laws of physics

CSIRO’s Dr Seyit Camtepe (left) and Dr Sebastian Kish (right) with the live quantum-secure key distribution system. Credit: CSIRO

Australian technology has delivered a live quantum-secure link, a breakthrough that promises to future-proof critical data against tomorrow’s cyber threats.

The project brings together QuintessenceLabs, Australia’s national science agency CSIRO, and AARNet, the national research and education network. By combining local expertise in quantum cyber security, digital science and advanced fiber infrastructure, the partners have successfully demonstrated a quantum key distribution (QKD) system running over standard optical fiber.

Together, these organizations are building sovereign quantum capability to protect Australia’s most valuable data.

Today’s digital world runs on long-lived data: health records, research findings and personal files stored in the cloud. Criminals can already copy encrypted data and wait, hoping future computers will eventually break today’s codes.

QKD stops that long-game by generating unbreakable encryption keys rooted in the laws of physics. Put simply, it uses tiny signals of light to create secret codes between two points; if anyone tries to listen in, the system takes protective action.

When deployed more widely, QKD could provide a new layer of tamper-evident security across optical fiber, complementing existing cyber-defense tools.
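As a rough illustration of why listening in is detectable, the toy simulation below models an intercept-and-resend attack on the simple discrete-variable BB84 protocol. It is an analogy for the principle only; the system described in this article uses continuous-variable QKD (CV-QKD), not BB84.

```python
# Toy BB84 simulation: an eavesdropper who measures and resends each photon
# raises the error rate on matching-basis rounds from ~0% to ~25%, which is
# what lets the endpoints detect tampering and take protective action.
import random

def bb84_error_rate(n_bits: int = 10_000, eavesdrop: bool = False,
                    seed: int = 1) -> float:
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)                 # Alice's raw key bit
        alice_basis = rng.randint(0, 1)         # 0 = rectilinear, 1 = diagonal
        sent_bit, sent_basis = bit, alice_basis
        if eavesdrop:                           # Eve measures in a random basis
            eve_basis = rng.randint(0, 1)
            eve_bit = bit if eve_basis == alice_basis else rng.randint(0, 1)
            sent_bit, sent_basis = eve_bit, eve_basis
        bob_basis = rng.randint(0, 1)
        bob_bit = sent_bit if bob_basis == sent_basis else rng.randint(0, 1)
        if bob_basis == alice_basis:            # keep only matching-basis rounds
            matched += 1
            errors += bob_bit != bit
    return errors / matched

print(f"error rate without eavesdropper: {bb84_error_rate():.1%}")
print(f"error rate with eavesdropper:    {bb84_error_rate(eavesdrop=True):.1%}")
```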

Using a new AARNet fiber loop at CSIRO’s Marsfield site in Sydney, QuintessenceLabs deployed its qOptica continuous variable QKD system, or CV-QKD.

Although the current system supports experiments and research, the 12.7-kilometer link produced strong secret key rates despite real-world fiber losses, demonstrating its readiness for practical use. The team’s next step is to extend the live link over longer distances, with the aim of eventually covering cities, states and partner countries.

Vikram Sharma, founder and CEO of QuintessenceLabs, said this deployment showcases the strength of Australian collaboration in advancing quantum cybersecurity.

“Integrating CSIRO’s research expertise, AARNet’s network infrastructure, and QuintessenceLabs’ quantum technology, we have demonstrated that quantum-secure communications are practical on today’s networks,” Sharma said.

Two parties, Alice and Bob, exchange security keys over a quantum channel on AARNet’s operational fiber network. In an operational setting, each party would be located at a geographically distinct location. Credit: CSIRO

“It’s a vital step toward protecting Australia’s most critical data and strengthening resilience against emerging threats.”

CSIRO quantum cryptography research scientist Dr. Sebastian Kish said the unique feature of QKD is that it makes fiber connections like the NBN inherently secure.

“If someone tries to tap the line, the quantum signals change and the alarms go off. It’s like giving Australia’s everyday internet an in-built security alarm, powered by the laws of physics,” Dr. Kish said.

Dr. Seyit Camtepe, CSIRO cyber and quantum security research scientist, said this was a proud first step.

“Our ambition was to enable the nation to develop and test future-proof cybersecurity innovations using the laws of physics—and we’ve achieved an important milestone,” Dr. Camtepe said.

Chief Technology Officer for AARNet David Wilde said this marks the first publicly documented deployment of quantum key distribution over telecom-grade dark fiber in Sydney, and among the first in Australia.

“Demonstrations like this show how Australia’s research network can lead the way in trialing quantum-secure communications, building the foundations for protecting critical research and education data across our wider national infrastructure,” Wilde said.

Next, the partners will expand the link across a longer AARNet fiber route and test it under real-world conditions.

They will also explore an inter-city route between Canberra and Sydney and pilot integrations with VPNs and cloud key-management. Together, these efforts mark a major step toward embedding quantum-secure infrastructure across essential services and building a resilient, sovereign cyber capability.

The team is inviting inquiries from researchers and industry to expand this technology further in Australia.

Citation:
Breakthrough quantum-secure link protects data using the laws of physics (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-breakthrough-quantum-link-laws-physics.html

3D-printed microrobots adapt to diverse environments with modular design

Head modules for real-time interaction. Credit: Advanced Materials (2025). DOI: 10.1002/adma.202507503

Microrobots, small robotic systems that are less than 1 centimeter (cm) in size, could tackle some real-world tasks that cannot be completed by bigger robots. For instance, they could be used to monitor confined spaces and remote natural environments, to deliver drugs or to diagnose diseases or other medical conditions.

Researchers at Seoul National University recently introduced new modular and durable microrobots that can adapt to their surroundings, effectively navigating a range of environments. These microrobots, introduced in a paper published in Advanced Materials, can be fabricated using 3D printing.

“Microrobots, with their insect-like size, are expected to make contributions in fields where conventional robots have struggled to operate,” Won Jun Song, first author of the paper, told Tech Xplore. “However, most microrobots developed to date have been highly specialized, tailored for very specific purposes, making them difficult to deploy across diverse environments and applications. Our goal was to present a new approach toward creating general-purpose microrobots.”

Fully 3D-printed modular microrobots capable of performing a broad range of tasks across diverse environments are demonstrated. The authors propose modular design as an approach for the development of general-purpose microrobots. Credit: Won Jun Song

While developing their microrobots, Song and his colleagues drew inspiration from drones—unmanned aerial vehicles (UAVs)—which can be tailored for a wide range of applications (e.g., photography/videography, package delivery, defense, etc.). Their objective was thus to develop adaptive microrobots that could be applied to different real-world problems.

“Our microrobot is composed of a main body and three types of modules,” explained Song. “The main body serves as the hub where all other modules are attached and is responsible for controlling the overall movement of the robot. The foot modules allow the microrobot to walk, not only on flat surfaces, but also on sand and even across water. The head modules enable real-time interaction with nearby robots or humans. Finally, the connecting modules make it possible for multiple microrobots to collaborate and operate together as if they were a single unit.”

To fabricate their microrobots’ individual components, the researchers used a custom-made multi-material 3D printer that they had created as part of their earlier studies. Notably, this 3D printer would enable the efficient mass-production of microrobot modules, allowing manufacturers to print up to eight identical units in a single run.







Credit: Advanced Materials (2025). DOI: 10.1002/adma.202507503

The team’s 3D-printing approach also makes it easy to tailor robots to specific tasks, since additional modules or components can be fabricated on demand to broaden their functionality. In initial tests, the researchers’ microrobots moved reliably in different settings, walking on smooth, rough and granular terrain and even swimming in aquatic environments.

“Many researchers have focused on developing microrobots optimized for very specific purposes, and this approach has greatly contributed to creating highly efficient robots with excellent performance,” said Song. “However, for microrobots to reach commercialization—similar to how drones or Boston Dynamics’ Spot are now widely used in daily life—they must be capable of operating across a broader range of environments and applications.”

In the future, the modular microrobot design introduced by Song and his colleagues, together with their 3D-printing strategy, could contribute to the large-scale fabrication of tiny robotic systems tailored for specific purposes. Meanwhile, other research groups could draw inspiration from the team’s paper to develop other customizable microrobots that can operate in different environments.

“We now aim to use our newly developed multi-material printing technology and high-performance photocurable materials to develop other advanced robots and devices,” added Song.

Written for you by our author Ingrid Fadelli, edited by Stephanie Baum, and fact-checked and reviewed by Robert Egan.

More information:
Won Jun Song et al, All‐3D‐Printed Multi‐Environment Modular Microrobots Powered by Large‐Displacement Dielectric Elastomer Microactuators, Advanced Materials (2025). DOI: 10.1002/adma.202507503

© 2025 Science X Network

Citation:
3D-printed microrobots adapt to diverse environments with modular design (2025, October 16)
retrieved 16 October 2025
from https://techxplore.com/news/2025-10-3d-microrobots-diverse-environments-modular.html
