Tech
Expanding sensory experiences in virtual environments | Computer Weekly
Human understanding and interactions lean heavily on our experiences with the real world that we are most used to and perfectly suited for. However, skill levels in interacting with digital information are a different issue, and various segments of the population feature widely different degrees of knowledge of how to use digital devices and content.
Comprehensively multisensory engagements marry extended reality (XR) with in-real-life (IRL) elements to create genuinely immersive experiences, perceived as a true amalgamation of virtual and real worlds. They can also be used to create inclusive interactions for users whose disabilities make it difficult to access commonplace computing technologies.
Truly multisensory environments require comprehensive technology approaches that address a variety of senses. This requirement creates new opportunities for developers of devices and applications, but those developers also face substantial hurdles: technological constraints, cost considerations and questions about consumers’ readiness to engage with such environments. Many researchers and developers are taking up the challenge, working on a wide range of offerings that address human senses more comprehensively than current applications allow.
Multisensory technologies are under investigation to expand the ways individuals can interact with digital and virtual applications. Multisensory interfaces and environments have been research and development topics for a long time. Olfactory, haptic and tactile interfaces are available, and even wind- and temperature-interface efforts exist.
Currently, these types of interfaces are relegated to niche applications or small market pockets. But virtual environments and novel technology approaches could result in diffusion of such applications to a wider range of users. Technological, cost and adoption issues exist, but first steps in creating multisensory engagements are under way.
Multisensory environments serve users by connecting in natural ways with virtual information and elements. But companies also can leverage multisensory approaches to create more meaningful – and impactful – connections with consumers and customers.
Paul Silcox, executive creative director at FutureBrand, a brand strategy and design company, believes that from gesture control to in-store design, and mixed, virtual and extended reality, “multisensory marketing is here to stay”. And a crucial aspect of multisensory engagement is the opportunity to make brands and products stand out in a world of visual overload.
Sound and scents for brands
Sonic branding has been around for some time. After more than a quarter-century in use, Intel’s sound logo – the “Intel Inside” musical notes – is perhaps the example that comes to mind most readily. The logo manages to reach consumers whose eyes are focused elsewhere, or who are even in a different room while the TV is on, for example.
Sonic engagement is not new, but there are many more approaches emerging. Recently, a number of experimental sound applications have been launched, and there are many more senses brands – and applications – can make use of.
Smell is another sense that brands frequently leverage. Hotels, shops and entire franchises use scents to evoke a branded experience. Mood Media, an experiential media company, for example, helps clients to create emotional connections with scent marketing. The company is also working with immersive audiovisual solutions, “connecting physical and digital with integrated media for a seamless customer journey”.
For some showings of the movie Heretic, entertainment company A24 partnered with Joya Studio, which researches, develops and produces fragrances and scented objects. During a pivotal scene in the film, selected screenings featured scents that were pumped into the auditorium.
Silcox highlights super-additivity as an important aspect of multisensory branding. Engaging multiple senses simultaneously “is exponentially more powerful than the sum of their individual effects”.
Despite clear benefits, he also points to challenges – challenges that will apply to the entire category of XR-enhanced environments as they become more common. Just randomly embedding sensory effects will not result in desired outcomes – instead, developers will need to focus on “defining individual sensorial assets and bringing them together as a powerful suite for a clear purpose”.
Excitement about multisensory engagements will inevitably lead to designers bundling a smorgasbord of technologies simply because they can. But “it’s important to show restraint and use these tools in deliberate ways in order to avoid an empty, gimmick effect”.
Venues as experiential landscapes
Perhaps expectedly, music venues and events are exploring multisensory sensations to increase their entertainment value. Since the autumn of 2023, the Sphere near Las Vegas, Nevada, has established itself as a showcase of modern entertainment.
The giant spherical venue features advanced sound systems, such as directional sound and virtual acoustic environments, as well as many motion and environmental technologies. The costs were also tremendous: a price tag of more than $2bn gives a sense of how high the hurdles to wide diffusion are.
However, according to Brian Mirakian, senior principal at Populous, a design firm that focuses on the creation of experiential environments: “Tomorrow’s concerts are more than just performances; they are moments that immerse audiences in environments that engage all five senses, transforming live shows into unforgettable, multi-sensory journeys.”
He adds that “advancements in technology are enabling venues to integrate sensory elements” that require a design and planning process that can be challenging to translate to the many types of venues artists perform in.
Mirakian also cautions that the introduction of advanced technologies, including scents, motion and wind, comes with additional considerations. Creating immersive experiences “necessitates fine-tuning, a process that requires the expertise of those who know the venue to meld the artist’s vision with the venue’s specifications”.
Touch and go on haptics
A wide range of interface technologies exists, such as haptics and related interfaces, and an obvious market exists in gaming applications. But there are challenges to expanding the use of haptic sensations to create immersive environments in real-world locations that add digital experiences. Marrying the haptic sensations of digital interfaces meaningfully with real-world situations and activities is not a trivial task. Nevertheless, new applications are slowly emerging.
“As digital devices evolve, we’re at an exciting inflection point, with the likes of gaming consoles, headphones, smartwatches, fitness trackers and headsets incorporating more features, which will allow brands to develop truly immersive experiences,” says FutureBrand’s Silcox.
FutureBrand created a haptic logo for Mastercard, which uses distinctive haptic vibrations combined with a sonic logo to let customers feel their smartphones when shopping online or paying at shops’ payment terminals with the firm’s credit cards.
Meanwhile, researchers at Northwestern University developed a wearable device to create a “sophisticated variety of haptic sensations”. The device connects wirelessly to VR headsets or smartphones and offers the sensations of “vibrations, stretching, pressure, sliding and twisting”.
The device has a small form factor, attaches to the skin, and can easily be worn on the move. The researchers “envision their device eventually could enhance virtual experiences, help individuals with visual impairments navigate their surroundings, reproduce the feeling of different textures on flat screens for online shopping, provide tactile feedback for remote health care visits, and even enable people with hearing impairments to ‘feel’ music”.
The researchers also mention the use in applications where touch supports users with visual or hearing impairments, and other companies are focusing on related applications. OneCourt has developed a device that enables tactile sports broadcasts. The device resembles a tablet that outlines the game courts, and is described as “transforming gameplay into trackable vibrations”. The entrepreneurs created the offering to help visually impaired sports fans experience games.
Jerred Mace, the CEO and founder of OneCourt, says: “We’ve essentially developed a laptop-sized haptic display that’s capable of communicating dynamic information like sporting events through touch.”
The device is focused on sporting events, but similar services may find use in many more commercial applications to enhance immersive environments – and could support individuals with and without visual impairments.
The long way for puzzle pieces to fall into place
New interface technologies can enable multisensory sensations that will elevate metaverse environments. Initial use cases exist in industrial, healthcare and entertainment markets, for instance. However, truly immersive environments will remain elusive in consumer markets for quite some time.
The long-term prospects look better. Touchscreens, app-supported stores and public venues also required time to diffuse, and are now almost ubiquitous.
For companies trying to leverage these new opportunities, the question remains what sensory technologies to bundle in what form factor. What is the right breadth and depth of multisensory sensations for what kind of applications, and for which consumer segments? Possible combinations are virtually limitless.
A better understanding of human interactions with virtual environments and digital objects will be crucial to drive commercial applications. As Silcox advises, “we need to ask what our end desired purpose or reaction is that we are looking to provoke”.
Martin Schwirn is a strategy and innovation consultant for Global 2000 companies, and the author of Small data, big disruptions: How to spot signals of change and manage uncertainty (ISBN 9781632651921).
Papa Johns Is Getting Into Drone Delivery—but Not for Pizza
Starting today, eager customers of the US pizza restaurant chain Papa Johns living in one corner of southern North Carolina will have the opportunity to receive their food from the sky, thanks to a new collaboration with Alphabet’s drone company, Wing. But Papa Johns’ signature pizzas won’t be on offer. Instead, drone-loving North Carolinians will have to choose from three kinds of sandwiches, a newer product for the fast-food chain: Philly cheesesteak, chicken bacon ranch, or steak and mushroom varieties.
Drone deliveries are popping up in more communities across the US and the world. Questions about the long-term economics and regulatory picture around unmanned aerial vehicles persist, but Wing boasts partnerships with Walmart, Panera, and DoorDash and is delivering through the sky to customers in four metro areas: Atlanta, Charlotte, Dallas-Fort Worth, and Houston. (In 2019, Wing received the US Federal Aviation Administration’s first certificate allowing a drone delivery company to operate in the country.) Competing drone companies, including Zipline, Amazon Prime Air, and Flytrex, fly packages, medical supplies, and Chipotle burritos in select communities across countries like Ghana, Japan, and the US.
But until very recently, drone operators have struggled to fly full-size pizzas. For companies hoping to break into the food delivery space, this is unfortunate: 11 percent of the US population eats a slice on any given day, according to the US Department of Agriculture. In a fast-diversifying restaurant industry, getting pizzas to customers is still big business. But the realities of physics, engineering, and the restaurant business conspire to make pizzas a challenge for drones.
Flying Pizzas
Traditionally, pizza is the experimental tech delivery of choice. The familiar and cheap cheese-sauce-bread combo has been loaded onto self-driving cars and autonomous sidewalk delivery vehicles and has been assembled by robots. It’s a fast and satisfying option, especially for busy families tight on time. And, theoretically, it’s a great fit for automated drones, one of the faster delivery options, since people love fresh, piping-hot pizza.
But transporting one by drone requires some extra work, says Wing CEO Adam Woodworth. “Pizza comes in a very different box, with a big, flat surface area,” he says. They’re not naturally aerodynamic. Also, “you don’t want a pizza tilted.”
Wing’s relatively lightweight drones are engineered to carry three specific package sizes; right now, pizza boxes aren’t one of them. Woodworth says a new design is on the horizon. “I want to see pizzas coming at me from the sky,” he says.
Flytrex, an Israel-based drone delivery company, announced late last month that it had finally solved the problem. In collaboration with rival pizza chain Little Caesars, the company began delivering via drone up to two large pizzas (16 inches each), plus sodas and bread, in Wylie, Texas, a suburb of Dallas. The leap comes courtesy of a much bigger new drone, capable of carrying up to 8.8 pounds for four miles.
Chevron Wants a School District Tax Break for a Data Center Power Plant in Texas
A major oil company is seeking a state tax break in Texas worth hundreds of millions of dollars to build a massive power plant. The energy won’t be going to residential customers, though. Instead, the gas plant will be used to power a data center whose eventual tenant could be Microsoft.
Chevron subsidiary Energy Forge One has filed an application with the State Comptroller’s board to obtain a tax abatement for a power plant it’s building in West Texas. In late January, the comptroller’s office made a recommendation to support the application’s approval—the first such approval under the program for a power plant intended solely for data center use.
In March, following news reports that Microsoft was looking into purchasing power from the Energy Forge project, Chevron said that it had entered into an “exclusivity agreement” with Microsoft and Engine 1, an investment fund involved in the project. In January, Microsoft pledged to be a “good neighbor” in communities where it is building data centers, including promising to pay a “full and fair share of local property taxes.”
The potential tax abatement for the project comes as big tech companies are battling rising public fury about data centers and electricity costs. It also comes as lawmakers start to cast a more critical eye on ballooning incentives for data centers, some of which have cost some states—including Texas—$1 billion or more each year.
Chevron spokesperson Paula Beasley told WIRED in an email that all tax incentives under consideration for the Energy Forge project “apply solely to the power generation facility” to “support new energy infrastructure, and do not extend to any future data center facilities that may be served.” Beasley also said that there is currently “no definitive agreement” with Microsoft for this power plant.
“Microsoft is in discussions with Chevron,” Rima Alaily, Microsoft’s corporate vice president and general counsel for infrastructure, said in a statement to WIRED. “No commercial terms have been finalized, and there is no definitive agreement at this time.”
Chevron is applying for a tax abatement for the project under Texas’ Jobs, Energy, Technology, and Innovation (JETI) Act. Passed in 2023, the program is intended to incentivize businesses to build large infrastructure projects in the state in exchange for guarantees to bring jobs and revenue. Accepted projects get a cap set on the amount of taxable property they can be charged through local school district taxes.
The Pecos-Barstow-Toyah school board approved the project’s application at a meeting in February. The state pays for the tax abatement, so the school district itself does not lose out on any money.
According to documents from the state, the Chevron project could net more than $227 million in savings for the company over a 10-year period, depending on the eventual size of the project and investment. The application says the plant will provide “over 25 permanent, full-time jobs,” though there’s no requirement to do so because it’s considered an electricity generation facility.
The planned gas plant won’t connect to the grid, instead providing “electricity for direct consumption by a data center,” according to its application. So-called behind-the-meter gas plants have become increasingly popular for data center developers facing yearslong waits to connect to the grid. According to data from nonprofit Global Energy Monitor, the US at the start of the year had nearly 100 gigawatts of gas-fired power in the development pipeline solely to power data centers, with several more massive gas projects announced since the data was published.
A WIRED analysis of fewer than a dozen power plants being constructed to explicitly serve data centers, including the Chevron project, found that these power plants are permitted to emit more greenhouse gases than many small- to medium-size countries. The Energy Forge plant alone could emit more than 11.5 million tons of CO2 equivalent annually—more than the country of Jamaica emitted in 2024. Beasley told WIRED that the plant “is being designed to comply with applicable environmental regulations, including all applicable federal and state air quality standards.”
CUDA Proves Nvidia Is a Software Company
Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.
A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.
The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.
CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.
Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
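The operation count in this example is easy to verify in plain Python. The sketch below is a sequential illustration of the commutativity trick described above, not actual GPU code; the helper name `fill_table` is hypothetical:

```python
def fill_table(n=9):
    """Fill an n-by-n multiplication table, exploiting i*j == j*i."""
    table = [[0] * n for _ in range(n)]
    ops = 0
    for i in range(n):
        for j in range(i, n):       # only visit the upper triangle
            ops += 1
            product = (i + 1) * (j + 1)
            table[i][j] = product
            table[j][i] = product   # mirror the result: commutativity for free
    return table, ops

table, ops = fill_table()
print(ops)          # 45 multiplications instead of 81
print(table[6][8])  # 7 x 9 = 63
```

On a real GPU, each column (or triangle chunk) would be handed to a different core, so the 45 remaining multiplications also run in parallel rather than in a loop.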
Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.
CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.
A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.
To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”