3 Exciting Camera Features on Apple’s New iPhone 17 Lineup



Apple says the camera “uses AI” to expand the field of view and adjust the orientation. It’s super convenient, especially considering that you don’t have to alter how you hold your phone, meaning no more precarious grip. It’s much more comfortable, too, though it will probably still take time to retrain your muscle memory and stop yourself from switching to landscape mode for selfies.

Don’t forget, all of the selfie cameras on these new devices are also getting a boost in image quality thanks to a new 18-megapixel sensor that can pack in more detail. If you shoot a lot of selfie videos, your clips will have much better stabilization in 4K HDR, too.

Dual Capture

You can finally record with both the front and rear cameras natively in the iPhone camera app! OK, this is technically not new. You’ve been able to do this via a third-party app for years, and a few Android phones have had this feature natively in the camera app for as long as a decade. Samsung calls it Dual Recording on its Galaxy phones, while HMD’s Nokia phones—when they were still a thing—called it a “bothie.” Now it’s native on the new iPhone.

Tap the overflow camera menu on the top right in video mode and choose Dual Capture. It works up to 4K 30 frames per second, and you’ll see a floating preview of the front camera—like when you’re on a video call—with the main viewfinder displaying the view from the rear camera. The placement of the floating front camera view seems to be important because it doesn’t look like you can change it post-capture, so you’ll want to make sure you flick it to a spot where it doesn’t block the action.

It’s not groundbreaking, but it’s a fun little capability I think a lot of people will take advantage of now that it’s natively built into the camera app.

8X Zoom

I test phones for a living, but I’m also a photographer, and the camera I use most often is the telephoto zoom. I find the main cameras on most phones these days a little too wide, so optical zoom options let me get closer to the subject.

Color me excited that the new iPhone 17 Pro models can go up to 8X zoom and retain optical-like quality! Apple has upgraded the telephoto camera to 48 megapixels, meaning you’ll be able to see more detail in your shots. It’s also a 4X optical zoom camera. That might sound like a step back, considering Pro iPhones have offered 5X optical zoom for several years. However, the upgrade in megapixel count and the larger sensor should offer better quality images overall, whether at 4X, 5X, or even up to 8X.
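If you’re wondering why a crop can still look that good, here’s the back-of-the-envelope math I’m assuming: that the 8X mode is simply a 2x crop into the 48-megapixel sensor behind the 4X lens. Apple hasn’t spelled out the pipeline, so treat this as a sketch rather than a spec.

# Rough sketch (my assumption, not a published Apple spec): treat 8X as a
# 2x linear crop into the 48 MP sensor behind the 4X telephoto lens.
sensor_mp = 48                        # telephoto sensor resolution, megapixels
optical_zoom = 4                      # native optical magnification
target_zoom = 8                       # zoom level reached by cropping
crop = target_zoom / optical_zoom     # 2x linear crop factor
print(f"~{sensor_mp / crop**2:.0f} MP of detail left at {target_zoom}X")  # ~12 MP

Roughly 12 megapixels is in line with what phone cameras have traditionally delivered after pixel binning, which is why a cropped 8X shot can plausibly pass for optical zoom.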




Software tool turns everyday objects into animated, eye-catching displays—without electronics



FabObscura is a system for creating visually dynamic physical media based on the classic barrier-grid animation technique. We introduce a novel parameterization and computational design tool for systematically designing new barrier-grid animations without domain expertise. Our abstraction is expressive enough to support animations that respond to diverse user interactions, such as translations, rotations, and changes in viewpoint. Credit: Sethapakdi et al, FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations (2025)

Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.

Now, thanks to MIT researchers, it’s also possible to make dynamic displays without any electronics by using barrier-grid animations (or scanimations), which rely on printed materials instead. The visual trick involves sliding a patterned sheet across an image to create the illusion of motion.

The secret of barrier-grid animation lies in its name: An overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. That underlying picture is a combination of each still, sliced and interwoven to present a different snapshot depending on the overlay’s position.

While tools exist to help artists create barrier-grid animations, they’re typically used to create barrier patterns that have straight lines. Building off of previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.

MIT Department of Electrical Engineering and Computer Science (EECS) Ph.D. student and CSAIL researcher Ticha Sethapakdi, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.

“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”

Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function—not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern.

If you wanted a traditional horizontal pattern, you’d enter a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted out. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
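To make that concrete, here is a minimal sketch of the underlying idea in Python. It is not FabObscura’s own code, and the function names, strip width, and sine parameters are illustrative assumptions; it simply shows how both the interlaced image and its matching barrier can be driven by the same function f(x): a constant function gives the classic straight-strip pattern, a sine function gives the wavy one.

import numpy as np
from PIL import Image

def interlace(frames, f, strip_px=4):
    # Stack N same-size frames, then decide pixel by pixel which frame shows
    # through: the "phase" at (x, y) follows the barrier curve y = f(x).
    stack = np.stack([np.asarray(im) for im in frames])   # (N, H, W, 3)
    n, h, w, _ = stack.shape
    ys, xs = np.mgrid[0:h, 0:w]
    phase = ((ys + f(xs)) // strip_px).astype(int) % n
    return Image.fromarray(stack[phase, ys, xs].astype(np.uint8))

def barrier(h, w, f, strip_px=4, n=4):
    # The printed overlay: opaque (black) everywhere except a clear slit
    # wherever the phase is zero, so sliding it steps through the n frames.
    ys, xs = np.mgrid[0:h, 0:w]
    phase = ((ys + f(xs)) // strip_px).astype(int) % n
    return Image.fromarray(np.where(phase == 0, 255, 0).astype(np.uint8))

# frames = [...]                                 # e.g. a few stills of a horse running
# flat = interlace(frames, lambda x: 0 * x)      # constant function: horizontal strips
# wavy = interlace(frames, lambda x: 20 * np.sin(2 * np.pi * x / 120))  # sine barrier

Sliding the printed barrier by one strip width advances every pixel to the next frame, which is what plays the sequence.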






The FabObscura tool transforms everyday objects into animated displays. Credit: MIT CSAIL

A simple interface for elaborate ideas

FabObscura works for all known types of barrier-grid animations, supporting a variety of user interactions. The system enables the creation of a display with an appearance that changes depending on your viewpoint. FabObscura also allows you to create displays that you can animate by sliding or rotating a barrier over an image.

To produce these designs, users can upload a folder of frames of an animation (perhaps a few stills of a horse running), or choose from a few preset sequences (like an eye blinking), and specify the angle along which the barrier will move. After previewing the design, you can print the barrier and picture onto separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. Your image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.

You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.

These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.

Artists, including printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time to explore creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could also produce animated food packaging, or even reconfigurable signage for places like construction sites or stores, notifying people when a particular area is closed or a machine isn’t working.








Keep it crisp

FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.

In the future, the researchers intend to expand what users can upload to FabObscura, like being able to drop in a video file that the program can then select the best frames from. This would lead to even more expressive barrier-grid animations.

FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering extending their work to larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.

Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University Ph.D. student and visiting researcher Mingming Li; MIT EECS Ph.D. student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.

More information:
Ticha Sethapakdi et al, FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations (2025)

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Software tool turns everyday objects into animated, eye-catching displays—without electronics (2025, September 10)
retrieved 10 September 2025
from https://techxplore.com/news/2025-09-software-tool-everyday-animated-eye.html


Where does your glass come from?




The word “local” has become synonymous with sustainability, whether it’s food, clothes or the materials used to construct buildings. But while consumers can probably go to a local lumberyard to buy lumber from sustainably grown trees cut at nearby sawmills, no one asks for local glass.

If they did, it would be hard to give an answer.

The raw materials that go into glass—silica sand, soda ash and limestone—are natural, but the sources of those materials are rarely known to the buyer.

The process by which sand becomes sheets of glass is often far from transparent. The sand, which makes up over 70% of glass, could come from a faraway riverbed, lakeshore or inland limestone outcrop. Sand with at least 95% silica content is called silica sand, and only the purest is suitable for architectural glass production. Such sand is found in limited areas.

If the glass is colorless, its potential sources are even more limited, because colorless low-iron glass—popularized by Apple’s flagship stores and luxury towers around the world—requires 99% pure silica sand.

Glass production in Venice

The mysteries of glass production have a historical precedent that can be traced back to trade secrets of the Venetian Empire.

Venice, particularly the island of Murano, became the center for glass production largely due to its strategic location for importing raw materials and production know-how and exporting coveted glass objects.

From the 11th to the 16th centuries, the secrets of glassmaking were protected by the Venetians until three glassmakers were smuggled out by King Louis XIV of France, who applied the technology to create the Palace of Versailles’ Hall of Mirrors.

Venice was an otherwise unlikely location for glassmaking.

Neither the primary materials of sand and soda ash (sodium carbonate) nor the firewood of the medieval Venetian glassmakers were found in the city’s immediate vicinity. They were transported from the riverbeds of the Ticino River in Switzerland and the Adige River, which flows from the Austria-Switzerland border to the Adriatic Sea south of Venice. Soda ash, which is needed to lower the melting point of silica sand, was brought from Syria and Egypt.

So Venetian glass production was not local; it was dependent on precious resources imported from afar on ships.

Rising demand for low-iron, seamless glass

In the past few decades, low-iron glass, known for its colorlessness, has become the contemporary symbol of high-end architecture. The glass appears to disappear.

Low-iron glass is made from ultrapure sand that is low in iron oxide. Iron causes the green tint seen in ordinary glass. In architecture, low-iron glass doesn’t affect the performance—only the appearance. But it is prized.

In the U.S., this type of sand is found in a few locations, primarily in Minnesota, Wisconsin, Illinois and Missouri, where sand as white and fine as sugar—thus called saccharoidal—is mined from St. Peter sandstone. Other locations where it can be found around the world include Queensland in Australia and parts of China. Less pure sand can be purified by methods such as acid washing or magnetic separation.

Perhaps no corporation has popularized low-iron and seamless glass in architecture more than the technology giant Apple.

Glass has become fundamentally linked with Apple’s products and architecture, including its flagship stores’ expensive and daring experiments in architectural uses of glass.

Apple’s first showroom, completed in SoHo in New York in 2002, showcased all-glass stairs that were strengthened with hurricane- and bullet-resistant plastic interlayers sandwiched between five sheets of glass. The treads attach to all-glass walls with hockey-puck-sized titanium hardware, making both the glass stairs and the shoppers appear to float.

The company’s iconic flagship store near New York’s Central Park is an all-glass cube measuring 32.5 feet (10 meters) on each side and serving as a vestibule to the store below. The first version was completed in 2006 using 90 panels, which was a technical feat. Then, in 2011, Apple reconstructed the cube in the same location, same size, but with only 15 panels, minimizing the number of seams and hardware while maximizing transparency.

Today, low-iron glass has become the standard for high-profile architecture and those who can afford it, including the “pencil towers” in Manhattan’s Billionaires’ Row.

Glass’s climate impact

Glass walls common in high-rise buildings today have other drawbacks. They help to heat up the room during increasingly hot summers and contribute to heat loss in winter, increasing dependence on artificial cooling and heating.

The glassmaking process is energy intensive and relies on nonrenewable resources.

To bring sand to its molten state, the furnace must be heated to over 2,700 degrees Fahrenheit (1,500 degrees Celsius) for as long as 50 hours, which requires burning fossil fuels such as natural gas, releasing greenhouse gases. Once heated to that temperature, the furnace runs 24/7 and is rarely shut down.

The soda ash and limestone also release carbon dioxide during melting. Moreover, glass production requires mining or producing nonrenewable natural resources such as sand, soda ash, lime and fuel. Transporting them further increases emissions.

Production and fabrication of extra-large glass panels rely on specialized equipment and occur only at a limited number of plants in the world, meaning transportation increases the carbon footprint.

Architectural glass is also difficult to recycle, largely due to the labor involved in separating glass from the building assembly.

Although glass is touted as infinitely recyclable, only 6% of architectural glass is downcycled into glass products that require less purity and precision, and almost none is recycled into architectural glass. The rest ends up in landfills.

The increasing demand for glass that is colorless, extra large and seamless contributes to glass’s sustainability problem.

How can we make glass more sustainable?

There are ways to reduce glass’s environmental footprint.

Researchers and companies are working on new types of glass that could lower its climate impact, such as using materials that lower the amount of heat necessary to make glass. Replacing natural gas, typically used in glassmaking, with less-polluting power sources can also reduce emissions.

Low-e coatings, a thin coat of silver sprayed onto a glass surface, can help reduce the amount of heat that reaches a building’s interior by reflecting both the visible light and heat, but the coating can’t fully eliminate solar heat gain.

People can also alter their standards and accept smaller, less ultraclear panels. Think of the green tint not as an impurity, but as natural.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Where does your glass come from? (2025, September 10)
retrieved 10 September 2025
from https://techxplore.com/news/2025-09-glass.html


Splunk.conf: Cisco and Splunk expand agentic SOC vision | Computer Weekly



At Splunk’s annual .Conf event, the Cisco-backed observability and data security specialist made its first run at the agentic artificial intelligence (AI)-enhanced security operations centre (SOC), unveiling two agent-powered security operations (SecOps) tools for users to explore.

In a Tuesday keynote address, Splunk security senior vice president and general manager Mike Horn said that SecOps must evolve, and that the need to simplify workflows, accelerate and enhance SOC operations, and expand detection capabilities and threat visibility was clear.

Splunk Enterprise Security Essentials Edition and Splunk Enterprise Security Premier Edition – delivered within version 8.2 of the firm’s Enterprise Security (ES) security information and event management (SIEM) solution – unify a number of security workflows in the threat detection, investigation and response (TDIR) sphere.

Essentials Edition unifies ES 8.2 with Splunk AI Assistant in Security and is available today, while Premier goes a step further, adding Splunk SOAR and Splunk UEBA, and enters controlled availability later in September.

Splunk and Cisco – which have made significant and speedy progress on technical integration since coming together in 2024 – claim that the new features will place agentic AI at the heart of the SOC in order to extend security intelligence across the network.

“Our security offerings unify detection, investigation, and response into a single, intuitive workspace, eliminating tool fragmentation and significantly boosting efficiency,” said Horn.

“Built-in AI can help cut alert noise and reduce investigation time from hours to minutes. Now every SOC can be better positioned to stay ahead of advanced threats and empower analysts at every level.”

“With today’s increasingly sophisticated threats and sprawling attack surfaces, security teams can’t afford to waste time switching between fragmented tools and operating with siloed visibility,” added Michelle Abraham, research director for security and trust at IDC.

“By integrating multiple security capabilities into a single, cohesive environment, security platforms empower organisations to move from reactive to proactive security, streamlining workflows, improving detection and response, and ultimately reducing risk.”

In addition to this, parent Cisco plans to release a number of additional AI features to power the agentic SOC, with the intent of enabling cyber pros to keep focus on more strategic aspects of their roles while agent bots sift the raw security data and perform proactive, autonomous SecOps.

Some of the agentic capabilities in development include triaging to evaluate, prioritise and explain security alerts; malware reversal to explain malicious scripts; playbook authoring to translate natural language intent into functional SOAR playbooks; a response importer, using multi-modal large language models (LLMs) to import standard operating procedures into security response plans; a detection library to help turn detections from hypotheses to production; and a personalised detection SPL generator to tailor detections within the library to customer SOC environments.

Additionally, Splunk expanded the integration of Cisco Isovalent Runtime Security (eBPF) into Splunk, enhancing workload visibility and making it easier to pinpoint issues, and announced that Splunk Cloud Platform’s Federated Search for Amazon S3 and Security Analytics and Logging (SAL) will allow cyber pros to run security analytics on Cisco firewall logs stored in SAL directly, without needing to ingest them.

These features and capabilities will come on-stream within the next 12 months.

Era of simplification

Speaking to Computer Weekly at .Conf, James Hodge, Splunk GVP and chief strategy advisor for EMEA, said that the advent of the agentic SOC heralded an era of simplification for cyber security professionals, describing the underlying technology as “phenomenally complicated” in many ways.

“I was really encouraged, and really excited this week, because from a user perspective we’re simplifying all of that. We’re abstracting that complexity, and just surfacing what you need,” said Hodge.

“For anyone that works with it, the word I’d use is liberating, because you’re no longer battling with tools or techniques, you’re able to go and get that question answered so you can go and progress,” he added. “For people, it means they can get on with doing what they’re paid to do.”


