4 Ways to Sell or Trade In Your Old iPhone
Whether you’re in the market for the super-slim new iPhone Air or planning to go big with the iPhone 17 Pro Max, don’t just toss that old phone in a drawer when you upgrade. Sell your iPhone! That old hunk of glass is more valuable than you might think. A handful of services offer cash or store credit for used iPhones. We’ve compared some popular options for trading in an old device. Most of these services accept Samsung and Google devices, and all of them take iPhones.
Updated September 2025: We’ve updated phones and pricing.
Tips to Get a High Resale Price
If you want the maximum resale value for your phone, make sure you take care of it. Buy a good case (check out our guide on picking a good phone case) and consider a screen protector. They’ll keep your device looking new, which is the best way to ensure you get the most money possible when you sell.
Always buy an unlocked phone. This not only gives you the freedom to switch carriers, but you’ll also get more for the phone when you go to sell it. For the past decade, all iPhones in the US have worked on any wireless network, so there’s no reason to chain yourself to one carrier. Unless a carrier explicitly tells you a phone is unlocked, assume it isn’t, especially if you bought it on a payment plan.
The last thing to do before you run off to cash in your old phone is to back up all your data using iCloud. Be sure to check the option to back up your Messages so iCloud will store your text messages, which sometimes include photos and videos you haven’t saved to your Camera Roll. Remember to unpair your Apple Watch if you have one, and wipe your phone’s data as well.
Best for Reliable Cash
Gazelle is the old hand in the world of used phones. The company has been buying phones since 2006 and has the simplest process we’ve tested. It also doesn’t require you to create an account just to get a quote on your phone.
You fill out an online form and answer some questions about your device—whether it works, which carrier it’s tied to, and whether there’s any cosmetic damage. You’ll then get an offer based on the answers you give. If you accept the offer, Gazelle will send a box complete with a shipping label, and you’ll ship the phone in for inspection. Once the company has looked over your device and verified that it’s in the condition you said it was, you’ll be paid—usually in seven to 10 days. Payment can be in the form of a check, PayPal, or an Amazon gift card.
A factory-unlocked, 128-GB iPhone 16 in pristine condition will get you $469. A 128-GB unlocked iPhone 15 in lightly used condition will net you about $315. Gazelle sometimes runs promotional offers around new device launches, so keep an eye out to snag the best deal.
Best for Pristine iPhones
Swappa is an online auction house, something like eBay. It eliminates some of the problems associated with eBay, like high seller fees, poor seller-buyer communication tools, and too many poor-quality devices. You won’t be able to sell your iPhone here unless it’s in good shape, fully functional, and undamaged. You’ll also have to create an account and link it to your PayPal account before you can even see an offer.
So long as your phone meets Swappa’s listing criteria and you’re willing to put in a little effort, this is where you’ll get the most money for your old device. As you would on eBay, you’ll need to put together a listing with photos. Be sure to take the case off your phone, and be honest about the condition. Remember to factor in shipping when setting your sale price.
Fake or the real thing? How AI can make it harder to trust the pictures we see
A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.
Using AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel, created highly realistic images of both fictional and famous faces, including celebrities.
They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.
Across four separate experiments, the researchers noted that adding comparison photos or the participants’ prior familiarity with the faces provided only limited help.
The research has been published in the journal Cognitive Research: Principles and Implications, and the team say their findings highlight a new level of “deepfake realism”: AI can now produce convincing fake images of real people, which could erode trust in visual media.
Professor Jeremy Tree, from the School of Psychology, said, “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.
“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency.”
One of the experiments, which involved participants from the US, Canada, the UK, Australia, and New Zealand, showed subjects a series of facial images, both real and artificially generated, and asked them to identify which was which. That participants mistook the AI-generated novel faces for real photos, the team say, shows just how plausible they were.
Another experiment asked participants to distinguish genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the results showed just how hard it is to spot the authentic image.
The researchers say AI’s ability to produce novel/synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence public opinion of both the identity and the brand/organization they are portrayed as supporting.
Professor Tree added, “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes. That is why we urgently need to find new ways to detect them.
“While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”
More information:
Robin S. S. Kramer et al, AI-generated images of familiar faces are indistinguishable from real photographs, Cognitive Research: Principles and Implications (2025). DOI: 10.1186/s41235-025-00683-w
Citation:
Fake or the real thing? How AI can make it harder to trust the pictures we see (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-fake-real-ai-harder-pictures.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Researchers propose a new model for legible, modular software
Coding with large language models (LLMs) holds huge promise, but it also exposes some long-standing flaws in software: code that’s messy, hard to change safely, and often opaque about what’s really happening under the hood. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are charting a more “modular” path ahead.
Their new approach breaks a system into “concepts,” separate pieces each designed to do one job well, and “synchronizations,” explicit rules that describe exactly how those pieces fit together. The result is software that’s more modular, transparent, and easier to understand.
A small domain-specific language (DSL) makes it possible to express synchronizations simply, in a form that LLMs can reliably generate. In a real-world case study, the team showed how this method can bring together features that would otherwise be scattered across multiple services. The paper is published in the Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.
The team, including Daniel Jackson, an MIT professor of electrical engineering and computer science (EECS) and CSAIL associate director, and Eagon Meng, an EECS Ph.D. student, CSAIL affiliate, and designer of the new synchronization DSL, explore this approach in their paper “What You See Is What It Does: A Structural Pattern for Legible Software,” which they presented at the SPLASH conference in Singapore in October.
The challenge, they explain, is that in most modern systems, a single feature is never fully self-contained. Adding a “share” button to a social platform like Instagram, for example, doesn’t live in just one service. Its functionality is split across code that handles posting, notifications, user authentication, and more. All these pieces, despite being scattered across the code, must be carefully aligned, and any change risks unintended side effects elsewhere.
Jackson calls this “feature fragmentation,” a central obstacle to software reliability. “The way we build software today, the functionality is not localized. You want to understand how ‘sharing’ works, but you have to hunt for it in three or four different places, and when you find it, the connections are buried in low-level code,” says Jackson.
Concepts and synchronizations are meant to tackle this problem. A concept bundles up a single, coherent piece of functionality, like sharing, liking, or following, along with its state and the actions it can take. Synchronizations, on the other hand, describe at a higher level how those concepts interact.
Rather than writing messy low-level integration code, developers can use a small domain-specific language to spell out these connections directly. In this DSL, the rules are simple and clear: one concept’s action can trigger another, so that a change in one piece of state can be kept in sync with another.
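The paper’s actual DSL isn’t reproduced in this article, but the shape of the idea can be sketched in ordinary Python. In this hypothetical sketch (all class and rule names are illustrative, not the researchers’ API), each concept owns its own state and actions, and a synchronization is an explicit rule that says “when this concept’s action fires, trigger that concept’s action”:

```python
# Hypothetical sketch of concepts and synchronizations.
# Names here are illustrative, not the paper's actual DSL.

class Concept:
    """A self-contained unit of functionality: state plus actions."""
    def __init__(self):
        self._rules = []  # synchronization rules attached to this concept

    def _emit(self, action, **args):
        # Notify every synchronization rule that an action occurred.
        for rule in self._rules:
            rule(action, args)

class Sharing(Concept):
    def __init__(self):
        super().__init__()
        self.shares = []  # state owned by this concept alone

    def share(self, user, post):
        self.shares.append((user, post))
        self._emit("share", user=user, post=post)

class Notification(Concept):
    def __init__(self):
        super().__init__()
        self.inbox = []

    def notify(self, user, message):
        self.inbox.append((user, message))

# A synchronization: an explicit, declarative-style rule connecting
# two concepts, written in one place instead of buried in either one.
def sync_share_notifies(sharing, notification):
    def rule(action, args):
        if action == "share":
            notification.notify(args["user"], f"you shared {args['post']}")
    sharing._rules.append(rule)

sharing = Sharing()
notification = Notification()
sync_share_notifies(sharing, notification)
sharing.share("alice", "post-42")
# notification.inbox now holds ("alice", "you shared post-42")
```

Neither concept knows the other exists; the only place their interaction is stated is the synchronization itself, which is what makes the connection visible and easy to change.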
“Think of concepts as modules that are completely clean and independent. Synchronizations then act like contracts—they say exactly how concepts are supposed to interact. That’s powerful because it makes the system both easier for humans to understand and easier for tools like LLMs to generate correctly,” says Jackson.
“Why can’t we read code like a book? We believe that software should be legible and written in terms of our understanding: our hope is that concepts map to familiar phenomena, and synchronizations represent our intuition about what happens when they come together,” says Meng.
The benefits extend beyond clarity. Because synchronizations are explicit and declarative, they can be analyzed, verified, and of course generated by an LLM. This opens the door to safer, more automated software development, where AI assistants can propose new features without introducing hidden side effects.
In their case study, the researchers mapped each feature, such as liking, commenting, and sharing, to a single concept, similar to a microservices architecture but more modular. Without this pattern, these features were spread across many services, making them hard to locate and test. Using the concepts-and-synchronizations approach, each feature became centralized and legible, while the synchronizations spelled out exactly how the concepts interacted.
The study also showed how synchronizations can factor out common concerns like error handling, response formatting, or persistent storage. Instead of embedding these details in every service, a single synchronization can handle them once, ensuring consistency across the system.
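To illustrate that kind of factoring (again hypothetically, in plain Python rather than the paper’s DSL), a cross-cutting concern like error handling can be written once as a wrapper and applied uniformly to every action, instead of being repeated inside each service:

```python
# Hypothetical sketch: one cross-cutting error-handling rule,
# applied once rather than duplicated inside every service.

import functools

def with_error_envelope(action):
    """Wrap any concept action so failures become uniform responses."""
    @functools.wraps(action)
    def wrapped(*args, **kwargs):
        try:
            return {"ok": True, "result": action(*args, **kwargs)}
        except Exception as exc:  # one place to format every error
            return {"ok": False, "error": str(exc)}
    return wrapped

@with_error_envelope
def like(user, post):
    if not user:
        raise ValueError("missing user")
    return f"{user} liked {post}"

print(like("alice", "post-1"))  # {'ok': True, 'result': 'alice liked post-1'}
print(like("", "post-1"))       # {'ok': False, 'error': 'missing user'}
```

Because the envelope lives in one place, every feature reports success and failure the same way, which is the consistency the researchers describe.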
More advanced directions are also possible. Synchronizations could coordinate distributed systems, keeping replicas on different servers in step, or allow shared databases to interact cleanly. Weakening synchronization semantics could enable eventual consistency while still preserving clarity at the architectural level.
Jackson sees potential for a broader cultural shift in software development. One idea is the creation of “concept catalogs,” shared libraries of well-tested, domain-specific concepts. Application development could then become less about stitching code together from scratch and more about selecting the right concepts and writing the synchronizations between them.
“Concepts could become a new kind of high-level programming language, with synchronizations as the programs written in that language. It’s a way of making the connections in software visible,” says Jackson. “Today, we hide those connections in code. But if you can see them explicitly, you can reason about the software at a much higher level. You still have to deal with the inherent complexity of features interacting. But now it’s out in the open, not scattered and obscured.”
“Building software for human use on abstractions from underlying computing machines has burdened the world with software that is all too often costly, frustrating, even dangerous, to understand and use,” says University of Virginia Associate Professor Kevin Sullivan, who wasn’t involved in the research.
“The impacts (such as in health care) have been devastating. Meng and Jackson flip the script and insist on building interactive software on abstractions from human understanding, which they call ‘concepts.’ They combine expressive mathematical logic and natural language to specify such purposeful abstractions, providing a basis for verifying their meanings, composing them into systems, and refining them into programs fit for human use. It’s a new and important direction in the theory and practice of software design that bears watching.”
“It’s been clear for many years that we need better ways to describe and specify what we want software to do,” adds Thomas Ball, Lancaster University honorary professor and University of Washington affiliate faculty, who also wasn’t involved in the research. “LLMs’ ability to generate code has only added fuel to the specification fire. Meng and Jackson’s work on concept design provides a promising way to describe what we want from software in a modular manner. Their concepts and specifications are well-suited to be paired with LLMs to achieve the designer’s intent.”
Looking ahead, the researchers hope their work can influence how both industry and academia think about software architecture in the age of AI. “If software is to become more trustworthy, we need ways of writing it that make its intentions transparent,” says Jackson. “Concepts and synchronizations are one step toward that goal.”
More information:
Eagon Meng et al, What You See Is What It Does: A Structural Pattern for Legible Software, Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (2025). DOI: 10.1145/3759429.3762628
Citation:
Researchers propose a new model for legible, modular software (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-legible-modular-software.html
‘Vibe coding’ named word of the year by Collins dictionary
“Vibe coding,” a word that essentially means using artificial intelligence (AI) to tell a machine what you want instead of coding it yourself, was on Thursday named the Collins Word of the Year 2025.
Coined by OpenAI co-founder Andrej Karpathy, the word refers to “an emerging software development practice that turns natural language into computer code using AI,” according to Collins Dictionary.
“It’s programming by vibes, not variables,” said Collins.
“While tech experts debate whether it’s revolutionary or reckless, the term has resonated far beyond Silicon Valley, speaking to a broader cultural shift toward AI-assisted everything in everyday life,” it added.
Lexicographers at Collins Dictionary monitor the 24 billion-word Collins Corpus, which draws from a range of media sources including social media, to create the annual list of new and notable words that reflect our ever-evolving language.
The 2025 shortlist highlights a range of words that have emerged in the past year to pithily reflect the changing world around us.
“Broligarchy” made the list in a year that saw tech billionaire Elon Musk briefly at the heart of US President Donald Trump’s administration and Amazon founder Jeff Bezos cozying up to the president.
The word is defined as a small clique of very wealthy men who exert political influence.
‘Coolcation’
New words linked to work and technology include “clanker,” a derogatory term for a computer, robot or source of artificial intelligence, and “HENRY,” an acronym for high earner, not rich yet.
Another is “taskmasking,” the act of giving a false impression that one is being productive in the workplace, while “micro-retirement” refers to a break taken between periods of employment to pursue personal interests.
In the health and behavioral sphere, “biohacking” also gets a spot, meaning the activity of altering the natural processes of one’s body in an attempt to improve one’s health and longevity.
Also listed is “aura farming,” the deliberate cultivation of a distinctive and charismatic persona and the verb “to glaze,” to praise or flatter someone excessively or undeservedly.
Although the list is dominated by words linked to technology and employment, one from the world of leisure bags a spot—”coolcation,” meaning a holiday in a place with a cool climate.
Last year’s word of the year was “Brat,” the name of UK singer Charli XCX’s hit sixth album, signifying a “confident, independent, and hedonistic attitude” rather than simply a term for a badly-behaved child.
© 2025 AFP
Citation:
‘Vibe coding’ named word of the year by Collins dictionary (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-vibe-coding-word-year-collins.html