In the unlikely event that 2 terabytes is not enough, you can increase your storage. The option to upgrade to an even larger plan is available only for current subscribers and in select countries.
5-TB Plan: For $25 per month or $250 per year (£20 or £200 in the UK), you get 5 TB with family sharing and the same perks as the Premium Plan.
10-TB Plan: For $50 per month (no annual plan) (£40 in the UK), you get 10 TB with family sharing and the same perks as the 5-TB plan.
Google One Benefits
The main benefit of a Google One plan is the extra cloud storage you can share with up to five family members. While families can share the same space, personal photos and files are accessible only to each owner unless you specifically choose to share them. Everyone in the family can also share the additional benefits (provided you all live in the same country). Let’s take a closer look at those benefits:
Unlimited Magic Editor Saves in Google Photos
Magic Editor enables you to delete unwanted people or objects from the background of your photos, tweak the look of the sky, change the position of people and objects, and more with the help of generative AI. All features work with eligible shots in your Google Photos app. Without a subscription, you are limited to 10 saves per month; on Google Pixel phones, however, these features are available even without a Google One subscription.
Cash Back on Purchases
The 2-TB plan nets you 10 percent back in Google Store credit on any purchase, which could prove useful if you’re thinking about buying multiple Google devices. The credit can take up to a month to arrive after your purchase, and it comes with an expiry date.
Google Workspace Premium
The Premium plan includes Google Workspace Premium, which gives you enhanced features in Google Meet and Google Calendar. For example, you can have longer meetings with background noise cancellation or create a professional booking page to enable other people to make appointments with you.
Gemini Pro
Gemini Pro provides access to Google’s “most capable AI models,” offering help with logical reasoning, coding, creative collaboration, and more. You can also create eight-second videos from text prompts using Veo 2, access additional features like Deep Research for your projects, and upload up to 1,500 pages of research, textbooks, or industry reports for analysis with a 1-million-token context window.
Flow Pro
This AI filmmaking tool employs Google’s AI video model, Veo, to enable you to generate stories, craft a cohesive narrative, find a consistent voice, and realize your imagination on the screen. You get 1,000 monthly AI credits to generate videos across Flow and Whisk.
Whisk Pro
You can use Whisk to turn still images into eight-second video clips using the Veo 2 model. You get 1,000 monthly AI credits to generate videos across Flow and Whisk.
NotebookLM Pro
This offers more audio overviews, notebooks, and sources per notebook to make information more digestible, allows you to customize the tone and style of your notebooks, and enables you to share and collaborate on notebooks with family and friends.
Gemini in Gmail, Docs, Vids & More
In Gmail and Docs, Gemini can help you write invites, resumes, and more, helping you brainstorm ideas, strike the right tone, and polish your missives. Gemini can also create relevant imagery for presentations in Slides, enhance the quality of video calls in Meet, and produce video clips based on your text prompts.
Project Mariner
This agentic research prototype is in early access and only part of the AI Ultra plan for now. Google says it can assist in managing up to 10 tasks simultaneously, handling things like research, bookings, and purchases from a single dashboard.
Gemini in Chrome
AI Ultra subscribers get early access to Gemini in the Chrome browser, which can understand the context of the current webpage, summarize and explain, or even complete tasks and fill out forms for you.
YouTube Premium
Included as part of the AI Ultra plan, this perk covers an individual YouTube Premium plan: YouTube videos are ad-free, you get access to Google’s music streaming service, and you can save videos for offline viewing, among other perks.
Nest Aware
Only included in the UK so far, a Nest Aware subscription, which includes extended storage of video from home security cameras, is now part of the 2-TB Premium plan and above (from £8 per month or £80 per year). Considering Nest Aware costs £6 per month or £60 per year on its own, this seems like a great deal.
Fitbit Premium
Again, only included in the UK so far, Fitbit Premium is now part of the 2-TB Premium plan and above (from £8 per month or £80 per year). Considering that Fitbit Premium currently costs £8 per month or £80 per year on its own in the UK, this deal is too good to pass up.
Extra Benefits
A couple of things fall into this category:
Google Play Credits: You will occasionally get credits to redeem in the Play Store for books, movies, apps, or games. The amount and frequency vary.
Discounts, Trials, and Other Perks: You may get offers for discounted Google services or hardware, extended free trials of Google services, and other perks (for example, Google offered everyone upgrading to a 2-TB plan a free Nest Mini). These offers pop up and disappear seemingly at random.
How to Subscribe to Google One
If you want to sign up, it’s easy. Create or log in to a Google account, then visit the Google One website or install the Android or iOS app.
In-context personalized localization involves localizing object instances present in a scene (or query image) similar to the object presented as an in-context example. In this setting, the input to the model is a category name, in-context image, bounding box coordinates, and a query image. The model is tasked with localizing the same category of interest (presented as an in-context example) in the query image. Here, we visualize a few inputs and outputs from various VLMs, highlighting that our fine-tuned model better captures the information in the in-context image. Credit: arXiv (2024). DOI: 10.48550/arxiv.2411.13317
Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog owner to do while onsite.
But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.
To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.
Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.
When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.
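To make the task concrete, here is a minimal sketch of how such an in-context localization query might be assembled. The message format and the final model call are illustrative assumptions, not the paper’s actual interface.

```python
# Minimal sketch of assembling an in-context localization query for a VLM.
# The message format and the final model call are illustrative assumptions,
# not the paper's actual interface.

def build_incontext_query(examples, query_image, name="Bowser"):
    """examples: (image_path, (x1, y1, x2, y2)) pairs showing the
    personalized object; the model must localize it in query_image."""
    messages = []
    for path, (x1, y1, x2, y2) in examples:
        messages.append({"image": path,
                         "text": f"{name} is at [{x1}, {y1}, {x2}, {y2}]."})
    messages.append({"image": query_image,
                     "text": f"Find {name} in this image and answer with "
                             "a bounding box [x1, y1, x2, y2]."})
    return messages

query = build_incontext_query(
    examples=[("bowser_park.jpg", (120, 80, 310, 260)),
              ("bowser_sofa.jpg", (45, 150, 220, 400))],
    query_image="dog_park_today.jpg",
)
# box = vlm.generate(query)  # hypothetical model call returning a box
```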
Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.
This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.
“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique posted to the arXiv preprint server.
Mirza is joined on the paper by co-lead authors Sivan Doveh, a graduate student at the Weizmann Institute of Science, and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision (ICCV 2025), held Oct 19–23 in Honolulu, Hawai’i.
An unexpected shortcoming
Researchers have found that large language models (LLMs) can excel at learning from context. If they feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.
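As a minimal illustration, a few-shot prompt like the one below is the entire “training” signal: the model infers the task from the examples in context, with no weight updates (the prompt text is invented for illustration).

```python
# Illustrative few-shot prompt: the model infers the task (addition) purely
# from the examples in context; no weights are updated.
prompt = """Q: 12 + 7 = ?  A: 19
Q: 33 + 9 = ?  A: 42
Q: 25 + 18 = ?  A:"""
# A capable LLM completes this with "43", having inferred the pattern.
```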
A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.
“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some visual information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.
The researchers set out to improve VLMs’ ability to perform in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.
Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.
“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.
To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.
They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.
“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
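A rough sketch of how one training sample might be assembled from a tracking clip follows; the field names and layout are assumptions for illustration, not the paper’s actual schema.

```python
# Sketch of turning one video-tracking clip into a fine-tuning sample:
# earlier frames (with the tracked object's box) act as in-context
# examples, and the final frame becomes the query. Field names are
# illustrative, not the paper's actual schema.

def make_sample(frames, boxes, name="tiger"):
    """frames: paths to frames sampled from one tracking clip;
    boxes: the tracked object's (x1, y1, x2, y2) box in each frame."""
    assert len(frames) == len(boxes) and len(frames) >= 2
    context = [{"image": f,
                "question": f"Where is the {name}?",
                "answer": f"The {name} is at {list(b)}."}
               for f, b in zip(frames[:-1], boxes[:-1])]
    query = {"image": frames[-1],
             "question": f"Where is the {name}?",
             "answer": f"The {name} is at {list(boxes[-1])}."}
    return {"context": context, "query": query}
```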
Forcing the focus
But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.
For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.
To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”
“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
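Continuing the sketch above, the substitution can be as simple as rewriting the label wherever it appears in a sample; the name pool here is, again, illustrative.

```python
# Swap the real category name for an arbitrary pseudo-name so the model
# cannot fall back on pretrained label associations and must rely on the
# in-context frames instead. The name pool is illustrative.
import random

PSEUDO_NAMES = ["Charlie", "Momo", "Kip", "Luna"]

def anonymize(sample, name="tiger"):
    alias = random.choice(PSEUDO_NAMES)
    for turn in sample["context"] + [sample["query"]]:
        for key in ("question", "answer"):
            turn[key] = (turn[key]
                         .replace(f"The {name}", alias)
                         .replace(f"the {name}", alias))
    return sample
```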
The researchers also faced challenges in finding the best way to prepare the data. If the frames are too close together, the background does not change enough to provide data diversity.
In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12% on average. When they included the dataset with pseudo-names, the performance gains reached 21%.
As model size increases, their technique leads to greater performance gains.
In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.
“This work reframes few-shot personalized object localization—adapting on the fly to the same object across new scenes—as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs.
“Given the immense significance of quick, instance-specific grounding—often without finetuning—for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.
More information:
Sivan Doveh et al, Teaching VLMs to Localize Specific Objects from In-context Examples, arXiv (2025). DOI: 10.48550/arxiv.2411.13317
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
CSIRO’s Dr Seyit Camtepe (left) and Dr Sebastian Kish (right) with the live quantum-secure key distribution system. Credit: CSIRO
Australian technology has delivered a live quantum-secure link, a breakthrough that promises to future-proof critical data against tomorrow’s cyber threats.
The project brings together QuintessenceLabs, Australia’s national science agency CSIRO, and AARNet, the national research and education network. By combining local expertise in quantum cyber security, digital science and advanced fiber infrastructure, the partners have successfully demonstrated a quantum key distribution (QKD) system running over standard optical fiber.
Together, these organizations are building sovereign quantum capability to protect Australia’s most valuable data.
Today’s digital world runs on long-lived data: health records, financial transactions, research findings and personal files stored in the cloud. Criminals can already copy encrypted data and wait, hoping future computers will eventually break today’s codes.
QKD stops that long game by generating unbreakable encryption keys rooted in the laws of physics. Put simply, it uses tiny signals of light to create secret codes between two points; if anyone tries to listen in, the system takes protective action.
When deployed more widely, QKD could provide a new layer of tamper-evident security across optical fiber, complementing existing cyber-defense tools.
Using a new AARNet fiber loop at CSIRO’s Marsfield site in Sydney, QuintessenceLabs deployed its qOptica continuous variable QKD system, or CV-QKD.
Although the current system supports experiments and research, the 12.7-kilometer link produced strong secret key rates despite real-world fiber losses, demonstrating its readiness for practical use. The team’s next step is to extend the live link over longer distances, with the aim of covering cities, states and partnering countries.
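For a sense of why tampering is detectable at all, the toy simulation below uses the older, discrete-variable BB84 protocol; the deployed qOptica system is continuous-variable QKD, which encodes keys differently, but the tamper-evidence principle is the same: an eavesdropper who measures the quantum states unavoidably raises the error rate the two parties observe.

```python
# Toy BB84 simulation: when an eavesdropper measures and re-sends the
# quantum states, roughly 25% of the kept bits disagree, so Alice and Bob
# detect the intrusion. (The deployed system is continuous-variable QKD;
# this discrete-variable sketch only illustrates the principle.)
import random

def bb84_error_rate(n=20_000, eavesdrop=False):
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)          # Alice's raw key bit
        a_basis = random.choice("+x")       # Alice's encoding basis
        send_bit, send_basis = bit, a_basis
        if eavesdrop:
            e_basis = random.choice("+x")
            # Measuring in the wrong basis yields a random result...
            e_bit = send_bit if e_basis == send_basis else random.randint(0, 1)
            # ...and the photon is re-sent in Eve's basis.
            send_bit, send_basis = e_bit, e_basis
        b_basis = random.choice("+x")       # Bob's measurement basis
        b_bit = send_bit if b_basis == send_basis else random.randint(0, 1)
        if b_basis == a_basis:              # bases compared publicly; keep matches
            kept += 1
            errors += b_bit != bit
    return errors / kept

print(f"quiet line:   {bb84_error_rate():.1%}")                # ~0%
print(f"eavesdropper: {bb84_error_rate(eavesdrop=True):.1%}")  # ~25%
```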
Vikram Sharma, Founder and CEO of QuintessenceLabs, explains how this deployment showcases the strength of Australian collaboration in advancing quantum cybersecurity.
“Integrating CSIRO’s research expertise, AARNet’s network infrastructure, and QuintessenceLabs’ quantum technology, we have demonstrated that quantum-secure communications are practical on today’s networks,” Sharma said.
Two parties, Alice and Bob, exchange security keys over a quantum channel on AARNet’s operational fiber network. In an operational setting, each party would be located at a geographically distinct location. Credit: CSIRO
“It’s a vital step toward protecting Australia’s most critical data and strengthening resilience against emerging threats.”
CSIRO quantum cryptography research scientist Dr. Sebastian Kish said the unique feature of QKD is that it makes fiber connections like the NBN inherently secure.
“If someone tries to tap the line, the quantum signals change and the alarms go off. It’s like giving Australia’s everyday internet an in-built security alarm, powered by the laws of physics,” Dr. Kish said.
Dr. Seyit Camtepe, CSIRO cyber and quantum security research scientist, explains this was a proud first step.
“Our ambition was to enable the nation to develop and test future-proof cybersecurity innovations using the laws of physics—and we’ve achieved an important milestone,” Dr. Camtepe said.
Chief Technology Officer for AARNet David Wilde said this marks the first publicly documented deployment of quantum key distribution over telecom-grade dark fiber in Sydney, and among the first in Australia.
“Demonstrations like this show how Australia’s research network can lead the way in trialing quantum-secure communications, building the foundations for protecting critical research and education data across our wider national infrastructure,” Wilde said.
Next, the partners will expand the link across a longer AARNet fiber route and test it under real-world conditions.
They will also explore an inter-city route between Canberra and Sydney and pilot integrations with VPNs and cloud key-management. Together, these efforts mark a major step toward embedding quantum-secure infrastructure across essential services and building a resilient, sovereign cyber capability.
The team is inviting inquiries from researchers, government agencies and industry to expand this technology further in Australia.
It’s becoming clearer that we are in a perilous financial situation globally. Fears over an “AI bubble” are being cited by the Bank of England, the International Monetary Fund and the boss of JP Morgan, Jamie Dimon.
If you want a sense of how insane the narrative around AI investments has become, consider this: Thinking Machines Lab, an AI startup, recently raised $2bn in funding at a $10bn valuation.
The company has zero products, zero customers and zero revenues. The only thing it made public to its investors was the resume of its founder, Mira Murati, formerly chief technology officer at OpenAI. If that’s not hubris meeting market exuberance, what is?
But narrative is crucial here because it’s what’s driving all this insane investment in the future of AI, or so-called artificial general intelligence (AGI), and it’s important to examine which narrative you believe if you are to protect yourself from what’s to come.
If I were to pick between the views of a politician such as UK prime minister Keir Starmer and a writer such as Cory Doctorow, I’d put my bet on Doctorow.
Doctorow suggests the AI bubble needs to be punctured as soon as possible to “halt this before it progresses any further and to head off the accumulation of social and economic debt”.
He suggests doing that by taking aim at the basis for the AI bubble – namely, “creating a growth story by claiming that AI can do your job”.
“AI is the asbestos we are shovelling into the walls of our society and our descendants will be digging it out for generations” – Cory Doctorow
Claims about jobs disappearing to AI have been around for years. In 2019, Sam Altman, then leader of venture capital (VC) fund Y Combinator, spoke about radiology jobs vanishing in the future: “Human radiologists are already much worse than computer radiologists. If I had to pick a human or an AI to read my scan, I’d pick the AI.”
Fast forward six years to 2025 and look how that worked out. According to a recent report by Works in Progress, despite the fact that radiology combines digital images, clear benchmarks and repeatable tasks, demand for human radiologists is at an all-time high.
The report authors’ conclusions drive a horse and cart through the current AI/AGI narrative which, if left unchecked, will cause severe global economic pain: “In many jobs, tasks are diverse, stakes are high, and demand is elastic. When this is the case, we should expect software to initially lead to more human work, not less. The lesson from a decade of radiology models is neither optimism about increased output nor dread about replacement. Models can lift productivity, but their implementation depends on behaviour, institutions and incentives. For now, the paradox has held – the better the machines, the busier radiologists have become.”
Across other sectors too, the mythology around job losses is slowly being interrogated – for example, Yale University’s Budget Lab found no discernible disruption to labour markets since ChatGPT’s release 33 months ago.
The research goes on to state: “While this finding may contradict the most alarming headlines, it is not surprising given past precedents. Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn’t become commonplace in offices until nearly a decade after their release to the public, and it took even longer for them to transform office workflows. Even if new AI technologies will go on to impact the labour market as much, or more dramatically, it is reasonable to expect that widespread effects will take longer than 33 months to materialise”.
Normal technology
In other words, AI is just, well, technology as we have always known it – or, as experts Arvind Narayanan and Sayash Kapoor call it, just “normal technology”.
Importantly, in their paper AI as normal technology – An alternative to the vision of AI as a potential superintelligence, they identify key lessons from past technological revolutions: the slow and uncertain nature of technology adoption and diffusion; continuity between the past and future trajectory of AI in terms of social impact; and the role of institutions in shaping this trajectory. They also “strongly disagree with the characterisation of generative AI adoption as rapid, which reinforces our assumption about the similarity of AI diffusion to past technologies”.
A good example of AI as normal technology, without all the hype, hyperbole and billion-dollar burn rate, is the City of Austin, Texas. Here, an on-premise AI system helped the local government process building permits in days instead of months.
According to David Stout, CEO of WebAI, this was done “with no spectacle. No headlines. Just efficiency gains that will outlast the market cycle.” He said: “That’s the point too often missed in the frenzy. Mega-models attract headlines, consume billions in capital, and struggle to demonstrate sustainable economics. Meanwhile, smaller, domain-specific systems are already delivering efficiency gains, cost savings and productivity improvements. The smart play isn’t to abandon AI, but to pivot towards models and deployments that will endure.”
Technology like we have always known it to be – not the insane fantasy of “superintelligence” that is powering this dangerous bubble.
The question to ask is, given the prediction of at least a 33-month lag before any return on investment, however small, will the markets wait another 33 months for their returns to materialise?
Protracted crisis
A recent report on MarketWatch suggests the AI bubble is now “seventeen times the size of the dot com frenzy and four times the sub-prime bubble”. MarketWatch quotes financial analyst Julien Garran, who previously led UBS’s commodities strategy team, who said AI “now accounts for over four times the wealth trapped in the 2008 sub-prime mortgage bubble, which resulted in years of protracted crisis across the globe”.
Warnings from the Bank of England in its semi-annual Financial Policy Committee report are equally stark: “Uncertainty around the global risk environment increases the risk that markets have not fully priced in possible adverse outcomes, and a sudden correction could occur should any of these risks crystallize.”
The bank also warned of “the risk of a sharp market correction for global financial markets amid AI bubble risks and political pressure on the Federal Reserve.”
A sudden correction would mean the collapse of the AI investment bubble, taking trillions in investment with it and impacting us all.
Even more worrying is the issue of debt financing among those competing in the AI race – that is, all the tech bros. It now appears, according to Axios, that these companies are turning to private debt markets and special purpose vehicles for cash, which means this kind of borrowing does not have to show on their balance sheets.
Meta, for example, recently sought $29bn from private capital firms for its AI datacentres. This off-book debt financing should ring more alarm bells that something is terribly wrong with the AI growth narrative.
After all, as pointed out by the Axios analysts, “If hugely profitable tech companies need to mask their borrowings to fund AI spending, it signals they’re not confident that they’ll soon get the returns needed to justify such investments. That suggests the very spending powering today’s earnings boom can’t last forever.”
Unit economics
To go back to Cory Doctorow’s argument, we are not in the early days of the web, or Amazon, or other dot com companies that lost money before becoming profitable: “Those were all propositions with excellent unit economics. They got cheaper with every successive technological generation and the more customers they added, the more profitable they became”.
AI companies do not have excellent unit economics – in fact they have the opposite, according to Doctorow: “Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money”.
“[Only] about 5% of tasks will be able to be profitably performed by AI within 10 years” – Daron Acemoglu
And if that’s not sobering enough for the VC and private equity firms, then the circular investing going on between these tech firms should be a huge concern.
Microsoft is investing $10bn in OpenAI largely in the form of free access to its servers. OpenAI reports this as an “investment,” then redeems those compute credits at Microsoft datacentres, which Microsoft books as $10bn in revenue.
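As a back-of-envelope illustration of why critics call this arrangement circular (the entries below are simplified, not actual accounting):

```python
# Simplified sketch of the circular flow: the same $10bn is booked once as
# an investment and once as revenue, while no outside customer money
# enters the loop. Entries are illustrative, not actual accounting.
investment = 10_000_000_000               # Microsoft's stake, granted as compute credits
openai_compute_spend = investment         # OpenAI redeems the credits at Microsoft datacentres
microsoft_booked_revenue = openai_compute_spend  # Microsoft records the redemption as revenue
external_cash_entering_loop = 0           # no new end-customer dollars required
print(f"revenue booked: ${microsoft_booked_revenue:,}; outside cash: ${external_cash_entering_loop}")
```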
Bain & Co says the only way to make today’s AI investments profitable “is for the sector to bring in $2tn by 2030,” which, according to the Wall Street Journal, is more than the revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta – combined.
A closer look at US economic growth gives still more cause for concern.
According to Harvard economist Jason Furman’s analysis, GDP growth in the first half of 2025 was driven almost entirely by investment in information processing equipment and software. This spending was largely tied to the rapid expansion of AI infrastructure and datacentres.
While these tech sectors only made up 4% of total GDP, they contributed a staggering 92% of growth. Absent this investment, Furman estimates US GDP growth would have hovered around 0.1% on an annualised basis – barely above zero.
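A quick back-of-envelope check of those figures (the headline growth number below is assumed for illustration, not Furman’s exact data):

```python
# Back-of-envelope check: if headline annualized growth was about 1.2%
# (assumed for illustration) and AI-linked categories contributed 92% of
# it, the rest of the economy grew by roughly 0.1%.
total_growth = 0.012          # assumed headline annualized GDP growth
tech_share_of_growth = 0.92   # contribution attributed to AI-linked investment
rest_of_economy = total_growth * (1 - tech_share_of_growth)
print(f"growth excluding tech investment: {rest_of_economy:.2%}")  # ~0.10%
```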
There is a lot riding on a technology that’s supposed to be godlike and all-powerful but which, according to MIT Institute Professor Daron Acemoglu, is far more likely to fall short of the hyperbolic claims being made by the tech bros in an effort to win an unwinnable race.
Acemoglu estimates the 10-year effect of AI in the US will be that only “about 5% of tasks will be able to be profitably performed by AI within that timeframe,” with the GDP boost likely to be closer to 1% over that timespan. If that’s not a recipe for stock market collapse, what is?
Emperor’s new clothes
Going back to the AI booster narrative and how it’s driving things, Doctorow is again incisive: “The most important thing about AI isn’t its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become super intelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer”.
I’m not an economist, so I did what we are all supposed to do now for our enlightenment. I gave the machines built by the tech bros all the same prompt: “What fable best encapsulates the current AI bubble?”
Gemini, Perplexity and ChatGPT were all in agreement with nearly the same explanation of why they all picked the same story: “The emperor’s new clothes remains the best classic fable to explain the AI bubble, as it encapsulates the collective willingness to believe in – and profit from – an imagined reality, until facts and external shocks eventually break the spell.”