The 20 Settings You Need to Change on Your iPhone
Apple’s software design strives to be intuitive, but each iteration of iOS contains so many additions and tweaks that it’s easy to miss some useful iPhone settings. Apple focused on artificial intelligence when it unveiled iOS 18 in 2024 and followed it with Liquid Glass in iOS 26 (the name is now tied to the following year), but many intriguing customizations and lesser-known features lurk beneath the surface. Several helpful settings are turned off by default, and it’s not immediately obvious how to switch off some annoying features. We’re here to help you get the most out of your Apple phone.
Once you have things set up the way you want, it’s a breeze to copy everything, including settings, when you switch to a new iPhone. For more tips and recommendations, read our related Apple guides—like the Best iPhone, Best iPhone 16 Cases, Best MagSafe Accessories—and our explainers on How to Set Up a New iPhone, How to Back Up Your iPhone, and How to Fix Your iPhone.
How to Keep Your iPhone Updated
These settings are based on the latest version of iOS 26 and should apply to most recent iPhones. Some settings may not be available on older devices, or they may have different pathways depending on the model and software version. Apple offers excellent software support for many years, so always make sure your device is up to date by heading to Settings > General > Software Update. You can find the Settings app on your home screen.
Updated September 2025: We’ve added a few new iPhone tips and updated this guide for iOS 26.
Enable Call Screening
Make cold-calling pests a thing of the past with Apple’s new Call Screening feature. Go to Settings, Apps, and select Phone, then scroll down to Screen Unknown Callers and select Ask Reason for Calling. Now, your iPhone will automatically answer calls from unknown callers in the background without alerting you. After the caller gives a reason for their call, your phone will ring, and you’ll be able to see the response onscreen so you can decide whether to answer. You should also make sure Hold Assist Detection is toggled on, so your iPhone detects when you are placed on hold, allowing you to step away, then alerting you when the call has been picked up by a human.
Turn on RCS
The texting experience with Android owners (green bubbles) got a serious upgrade last year when Apple finally decided to support the RCS messaging standard (Rich Communication Services). RCS has been around for several years on Android and allows for a modernized texting experience with features like typing indicators, higher-quality photos and videos, and read receipts. Group chats can still be wonky, but the standard is a significant improvement. However, on a new iPhone, RCS is disabled by default (naturally).
Make sure you turn it on for the best messaging experience. Head to Settings > Apps > Messages > RCS Messaging and toggle it on.
Customize the Control Center
Swipe down from the top right of the screen to open the Control Center, and you’ll see it’s more customizable than ever. You can tap the plus icon at the top left or tap and hold on an empty space to open the customization menu. Here you can move icons and widgets around, remove anything you don’t want, or tap Add a Control at the bottom for a searchable list of shortcut icons and widgets you can organize across multiple Control Center screens. You can also customize your home screen to change the color and size of app icons, rearrange them, and more.
Change Your Lock Screen Buttons
You know those lock screen controls that default to flashlight on the bottom left and camera on the bottom right? You can change them. Press and hold on an empty space on the lock screen and tap Customize. Tap the minus icon to remove an existing shortcut, and tap the plus icon to add a new one. You can also change the weather and date widgets, the font and color for the time, and pick a wallpaper. One of the clocks will even stretch to adapt to your wallpaper.
Extend Screen Time-Out
While it’s good to have your screen time out for battery and security reasons, I find it maddening when the screen goes off while I’m in the middle of something. The default screen timeout is too short, in my opinion, but thankfully you can adjust it. Head into Settings, Display & Brightness, and select Auto-Lock to extend it. You have several options, including Never, which means you’ll have to push the power button to turn the screen off manually.
Turn Off Keyboard Sounds
The iPhone’s keyboard clicking sound when you type is extremely aggravating. Trust me, even if you don’t hate it, everyone within earshot when you type sure does. You can turn it off in Settings, Sounds & Haptics by tapping Keyboard Feedback and toggling Sound off. I also advise toggling off the Lock Sound while you’re in Sounds & Haptics.
Go Dark
Protect yourself from eye-searing glare with dark mode. Go to Settings, pick Display & Brightness, and tap Dark. You may prefer to toggle on Automatic and have it switch at sunset, but I prefer to stay in Dark mode all the time.
Change Your Battery Charge Level
If you’re determined to squeeze as many years out of your iPhone battery as possible, consider changing the charging limit. You can maximize your smartphone’s battery health if you avoid charging it beyond 80 percent. The iPhone’s default is now Optimized Battery Charging, which waits at 80 percent and then aims to hit 100 percent when you are ready to go in the morning. But there’s a slider you can set to a hard 80 percent limit in Settings, under Battery, and Charging. If it bugs you, this is also where you can turn Optimized Battery Charging off.
Turn On Adaptive Power Mode
If you worry about running out of battery, go to Settings, Battery, and scroll down to select Power Mode, where you can toggle on Adaptive Power. This mode detects when you are using more battery than usual and makes small tweaks, like lowering display brightness or limiting performance, to try to get you through to the end of the day.
Set Up the Action Button
Folks with an iPhone 15 Pro model, any iPhone 16 model, or any iPhone 17 have an Action Button instead of the old mute switch. By default, it will silence your iPhone when you press and hold it, but you can change what it does by going to Settings, then Action Button. You can swipe through various basic options from Camera and Flashlight to Visual Intelligence, but select Shortcuts if you want it to do something more interesting. If you’re unfamiliar, check out our guide on How to Use the Apple Shortcuts App.
Customize Camera Control
The iPhone 16 series debuted Camera Control, a physical button that sits below the power button and triggers the camera with a single press. When you’re in the camera app, pressing it will capture a photo, and a long-press will record a video. Pressing and holding Camera Control outside of the camera app triggers Apple’s Visual Intelligence feature (sort of like Google Lens). But what I find most annoying is Camera Control’s second layer of controls: swiping. You can swipe on the button in the camera app to slide between photography styles, zoom levels, or lenses. It’s neat in theory, but way too sensitive.
Zohran Mamdani Just Inherited the NYPD Surveillance State
Mamdani’s campaign did not respond to a request for comment.
The NYPD’s turn toward mass surveillance began in earnest under Commissioner Raymond Kelly in the immediate aftermath of September 11, buoyed by hundreds of millions of dollars in federal anti-terrorism grants. However, Ferguson says Kelly’s rival, former commissioner William Bratton, was a key architect of the NYPD’s reliance on “big data,” implementing the CompStat system to map and electronically collate crime data in the mid-1990s and expanding on it during his return to New York City in 2014 under Mayor Bill de Blasio. Bratton was also a mentor to Jessica Tisch and has spoken admiringly of her since leaving the NYPD.
Tisch was a main architect of the NYPD’s Domain Awareness System, an enormous $3 billion, Microsoft-based surveillance network of tens of thousands of private and public surveillance cameras, license plate readers, gunshot detectors, social media feeds, biometric data, cryptocurrency analysis, location data, body-worn and dashcam livestreams, and other technology that blankets the five boroughs’ 468-square-mile territory. Patterned after London’s 1990s CCTV surveillance network, the “ring of steel” was initially developed under Kelly as an anti-terrorism surveillance system for Lower and Midtown Manhattan before being rebranded as the DAS and marketed to other police departments as a potential for-profit tool. Several dozen of the 17,000 cameras in New York City public housing developments were also linked in through backdoor methods by the Eric Adams administration last summer, with thousands more in the pipeline, according to NY Focus.
Though the DAS has been operational for more than a decade and has survived prior challenges over data retention and privacy violations from civil society organizations like the New York Civil Liberties Union, it remains controversial. In late October, a Brooklyn couple filed a civil suit along with the Surveillance Technology Oversight Project (STOP), a local privacy watchdog, against the DAS, alleging that the NYPD’s persistent mass surveillance and data retention violate New York State’s constitutional right to privacy. NYPD officers, the suit claims, can “automatically track an individual across the city using computer vision software, which follows a person from one camera to the next based on descriptors as simple as the color of a piece of clothing.” The technology, they allege, “transforms every patrol officer into a mobile intelligence unit, capable of conducting warrantless surveillance at will.”
Democrats Did Much Better Than Expected
If you’re like me, Steve Kornacki is just as adored by your aunt as he is in your group chats. He’s become a staple of Election Day coverage, putting in long hours at the big board and copious amounts of prep beforehand.
His granular knowledge of key counties and voter turnout trends made him not just indispensable for many Americans on election night, but also a full-blown celebrity. I caught up with him bright and early this morning to talk about Tuesday night’s election results.
We broke down what the returns mean heading into the 2026 midterm elections, where Democrats currently hold an 8 percentage point advantage over Republicans in the latest NBC News poll, and what they say about President Donald Trump’s second-term agenda. We also spoke about what surprised him in the New Jersey governor’s race, whether Trump’s base is weakening, and, of course, New York mayor-elect Zohran Mamdani’s historic win. Heading into the midterms, Kornacki is taking on an expanded role at NBC News following parent company Comcast’s decision to spin off its cable TV properties, including a soon-to-be rebranded MSNBC.
Kornacki is not one to put too much stock in an off-year election, but the breadth and depth of Democratic victories suggested a political environment that has changed radically in the year since Trump’s election. And if anyone can find the important details to follow going forward, it’s Steve.
This interview has been edited for length and clarity.
WIRED: Steve, thanks for joining us after a long night. Before we get into the meat and potatoes here, let’s start with a quick lightning round: How many hours of sleep were you shooting for, how many did you get, and can you tell us if you have any election night superstitions?
Steve Kornacki: Well, I shoot for zero, so I’m not disappointed and therefore I’m pleasantly surprised with whatever I get, which I think was about two and a half last night.
There we go.
So that’s not too bad. Superstitions? I don’t know about that. My challenge is to just tune out all the anecdotal turnout data on Election Day. I just think it’s a ton of noise that starts messing with your head.
What surprised you from last night?
What surprised me was—it’s probably not the most original observation this morning—but New Jersey. [Representative Mikie Sherrill, the Democratic nominee, won with more than 56 percent of the vote.] The margin there for Sherrill, which is about 13 points, is much more than expected. I mean, I was talking to Democrats right up through Election Day who were telling me some version of: “She’s run a terrible campaign, she’s not been a good candidate. Maybe she’ll still win because of Trump, but this is going to be closer than it should be.” I mean, that was a widely shared view between the two parties, that Sherrill had run a bad campaign and was in danger of even losing, and that was not the case at all.
Teaching robots to map large environments
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.
Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.
To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real time.
Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.
Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.
“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.
Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.
Mapping out a solution
For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
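To make that coupling concrete, here is a toy 2D sketch in Python (illustrative only, not the paper’s method, and every number in it is invented): the robot folds each odometry step into its pose estimate while projecting landmark observations into the map frame, so localization and mapping happen together.

```python
import numpy as np

def compose(pose, motion):
    """Fold a relative motion (dx, dy, dtheta) into a 2D pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = motion
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def to_map_frame(pose, obs):
    """Project a landmark seen in the robot frame into the map frame."""
    x, y, th = pose
    ox, oy = obs
    return (x + ox * np.cos(th) - oy * np.sin(th),
            y + ox * np.sin(th) + oy * np.cos(th))

pose = (0.0, 0.0, 0.0)  # start at the map origin
landmarks = []
# Each step: (odometry reading, landmark observation in the robot frame)
steps = [((1.0, 0.0, 0.1), (2.0, 1.0)),
         ((1.0, 0.0, 0.1), (1.5, -0.5))]
for motion, obs in steps:
    pose = compose(pose, motion)               # localization: update the pose
    landmarks.append(to_map_frame(pose, obs))  # mapping: grow the map
print(pose, landmarks)
```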
Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.
While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.
To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
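As a rough sketch of that design (the 60-frame chunk size echoes the limit mentioned above, and the reconstruct stub is a stand-in of my own, not the authors’ model), the stream is cut into chunks, each chunk becomes a submap with a relative pose, and chaining those poses places every submap in one global frame:

```python
import numpy as np

CHUNK = 60  # roughly the frame budget of current feed-forward models

def reconstruct(frames):
    """Stand-in for the learned model: returns (points, T_rel).

    This stub fabricates a small point cloud and a 1 m forward step so
    the stitching loop below is runnable; a real system would run a
    feed-forward reconstruction network on the chunk of images.
    """
    rng = np.random.default_rng(len(frames))
    points = rng.random((200, 3))
    T_rel = np.eye(4)
    T_rel[0, 3] = 1.0  # pretend the camera advanced 1 m between chunks
    return points, T_rel

def stitch(frames):
    T_world = np.eye(4)  # pose of the current submap in the global frame
    world_points = []
    for i in range(0, len(frames), CHUNK):
        points, T_rel = reconstruct(frames[i:i + CHUNK])
        T_world = T_world @ T_rel  # chain relative submap poses
        pts_h = np.c_[points, np.ones(len(points))]  # homogeneous coords
        world_points.append((T_world @ pts_h.T).T[:, :3])
    return np.vstack(world_points)

full_map = stitch(frames=list(range(300)))  # 300 dummy "images"
print(full_map.shape)
```

Note that this naive version treats each submap as internally rigid, which is exactly the assumption that breaks down next.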
“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.
Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.
Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps into alignment doesn’t work.
“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
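One classical way to do that rigid alignment is the Kabsch/Procrustes solution: recover the single rotation and translation that best overlay two point sets. The synthetic-data sketch below (all points invented for illustration) shows it working perfectly on a rigidly moved submap and failing on a stretched one, which is the deformation problem Carlone describes:

```python
import numpy as np

def kabsch(A, B):
    """Best rigid transform (R, t) mapping point set A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

rng = np.random.default_rng(0)
P = rng.random((100, 3))                    # points on "one side of a room"
Rz = np.array([[0.0, -1.0, 0.0],            # 90-degree rotation about z
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
B = P @ Rz.T + np.array([1.0, 2.0, 0.5])    # rigidly moved copy

R, t = kabsch(P, B)
print(np.abs(P @ R.T + t - B).max())        # ~1e-15: rigid alignment works

A_def = P * np.array([1.2, 1.0, 1.0])       # submap stretched along x
R, t = kabsch(A_def, B)
print(np.abs(A_def @ R.T + t - B).max())    # clear residual: rigid fails
```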
A more flexible approach
Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
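The paper’s exact transformation family isn’t detailed here, so as an illustrative stand-in for “richer than rigid,” this sketch fits a full 12-parameter affine map by least squares; it absorbs the synthetic stretch that defeated the rigid fit above (again, all data is invented):

```python
import numpy as np

def fit_affine(A, B):
    """Least-squares affine map X (4x3) such that [A | 1] @ X ~ B."""
    A_h = np.c_[A, np.ones(len(A))]  # homogeneous coordinates
    X, *_ = np.linalg.lstsq(A_h, B, rcond=None)
    return X

rng = np.random.default_rng(0)
P = rng.random((100, 3))                  # true scene points
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
B = P @ Rz.T + np.array([1.0, 2.0, 0.5])  # rigidly moved submap
A_def = P * np.array([1.2, 1.0, 1.0])     # deformed (stretched) submap

X = fit_affine(A_def, B)
resid = np.abs(np.c_[A_def, np.ones(len(A_def))] @ X - B).max()
print(resid)  # ~1e-15: the affine fit absorbs the deformation
```

A richer family can soak up deformation, but, per Carlone’s point above, it has to be applied consistently across submaps so they remain mutually alignable.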
Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.
“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”
Their system performed faster with less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.
The average error in these 3D reconstructions was less than 5 centimeters.
In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.
“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.
This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.