How digital twins are helping people with motor neurone disease speak
An initiative by a UK-based charity, supported by technology companies and universities, has developed an artificial intelligence (AI)-powered digital twin that allows people with communication disabilities to speak in a natural way.
The technology, known as VoxAI, represents a step change from the computer-assisted voice used by the late physicist Stephen Hawking, one of the first well-known public figures with motor neurone disease (MND).
The Scott-Morgan Foundation was set up by roboticist Peter Scott-Morgan to apply engineering principles to disability after he was diagnosed with MND.
A five-year project led by the foundation has developed an AI-powered platform that is helping people with MND, also known as amyotrophic lateral sclerosis (ALS), to communicate in a natural way despite their disabilities.
It was developed by the foundation’s chief technologist, Bernard Muller, who is paralysed with MND and has learned to write code using eye-tracking technology.
The platform brings together AI technologies to create photorealistic avatars that move naturally, with natural facial expressions, and can reproduce the voice of the person using it. It can also listen to the conversation and, based on its understanding of the person, offer a choice of three answers for them to select from.
One of the people testing the technology, Leah Stavenhagen, for example, worked as a consultant at McKinsey before she developed MND. The AI she uses has been trained on a book she wrote, along with 30 interviews in English and French.
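Neither the foundation nor its partners have published the platform's internals, but the behaviour described above maps onto a familiar pattern: feed a transcript of the ongoing conversation, plus material written by the user, to a language model and ask for a handful of candidate replies. The sketch below is a hypothetical TypeScript illustration of that pattern only; the endpoint, payload shape (OpenAI-style chat completions), model name and function names are all assumptions, not VoxAI code.

```typescript
// Hypothetical sketch only -- not VoxAI's actual implementation.
// Pattern: send the conversation so far, plus persona notes drawn from the
// user's own writing, to a chat-completion API and ask for three short
// candidate replies, which the user can then pick from with an eye tracker.

async function suggestReplies(
  transcript: string,   // what conversation partners have said so far
  personaNotes: string, // excerpts from the user's book, interviews, etc.
  apiKey: string
): Promise<string[]> {
  // OpenAI-style chat-completions request; other providers differ slightly.
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      messages: [
        {
          role: "system",
          content:
            "You draft replies in the user's own voice. " +
            `Style notes: ${personaNotes} ` +
            "Return exactly three short candidate replies, one per line.",
        },
        { role: "user", content: transcript },
      ],
    }),
  });

  const data = await response.json();
  // Split the model's answer into at most three one-line suggestions.
  return data.choices[0].message.content
    .split("\n")
    .map((line: string) => line.trim())
    .filter((line: string) => line.length > 0)
    .slice(0, 3);
}
```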
LaVonne Roberts, CEO of the Scott-Morgan Foundation, told Computer Weekly that while people did not mind waiting to hear what Stephen Hawking had to say, delays in communication usually cause problems for both the speaker and the listener.
“When you have someone that is having to spell something out laboriously, they are fatiguing their eyes, which has been shown to further progression of MND, so we are trying to protect from that,” she said.
“The other thing that happens is people start giving much shorter answers because they don’t have the time to stay in a conversation,” added Roberts. “And, honestly, you end up with awkward pauses.”
The Scott-Morgan Foundation, which demonstrated the technology today at an AI Summit in New York, plans to make the software available free of charge, so that it can be used by as many people as possible. It will also offer a subscription version for more advanced features.
Many off-the-shelf computers and tablets now come with workable eye-tracking, and eye-tracking devices provided by the NHS may also be compatible with the technology, said Roberts.
“The idea was to democratise the technology by putting it on the web, giving the license keys, so that people have their voice back again,” she said.
More than 100 million people in the world who live with conditions that severely limit speech – including people recovering from a stroke, or living with cerebral palsy, a traumatic brain injury or non-verbal autism – could benefit from the technology.
The foundation plans to start a two-year trial of the platform, led by the Mexican university Tecnológico de Monterrey, which will track some 20 participants using the technology and evaluate its impact.
It is also developing a simplified platform that could be used by people who do not have access to Wi-Fi.
Gil Perry, CEO of D-ID, which creates digital avatars for businesses, contributed to the project after the company helped a few people with MND/ALS in ways they found life-changing.
His company joined the project with the Scott-Morgan Foundation about two years ago, after meeting with Roberts. “I saw that LaVonne has the vision and can connect all the dots together, because she has a group of people who just sleep and dream that vision day and night,” said Perry.
The company has improved its technology so that it can create an avatar that shows facial expressions, even for someone whose condition has progressed to an advanced stage of immobility.
Roberts said that one of the breakthrough moments came after a mother told the foundation that, although the technology was good, “You just didn’t capture my daughter’s smile”. That sparked work to make the avatars more lifelike. “I remember Erin’s mother crying when she saw Erin on a video, and she was like, ‘That’s her smile’,” she said. “And I knew we were onto something.”
Muller, who architected the platform, said that his avatar not only makes him visible, but also “present”. “When someone sees my avatar smile or shows concern, they are seeing me, not a disability,” he added. “That changes everything about how I connect with the world.”
How Taiwan Made Cashless Payments Cute
At a 7-Eleven convenience store in Taiwan, you can pick up a 4-inch plushie of Miffy, the bunny character from the Netherlands, a mini bento box charm complete with a realistic chicken drumstick, or a tiny plastic rotary phone. Produced by iCash Corporation (a 7-Eleven affiliate), these keychains are more than just trinkets: Each contains a contactless chip that connects it to Taiwan’s elaborate stored-value payment system.
iCash cards, along with those made by competitors like EasyCard and iPASS, can be used to ride the subway and buses, as well as to make purchases at convenience stores and other retailers in Taiwan. The over-the-top branded keychains, which cost anywhere from $10 to over $30, generate modest direct sales. But their real value lies in their marketing power, drawing shoppers deeper into 7-Eleven’s rewards ecosystem and keeping small payments inside its orbit.
Decentralized and Deeply Local
Over the past decade, iCash Corporation and its rivals have turned dozens of everyday products in Taiwan into limited-edition keychains. Many are miniature versions of snacks and household items available at 7-Eleven stores, such as a can of the sports drink Super Supau, a tube of Darlie toothpaste, and a cup of Uni-President’s classic yellow pudding dessert. Those who prefer something weirder can get a teeny package of toilet tissues, or a doll-sized Scotch-Brite kitchen sponge. When I lived in Taipei for a few months last year, I paid for things with a bag of crinkle-cut potato chips.
iCash Corporation has also licensed Sanrio characters like Hello Kitty and Cinnamoroll, as well as Pikachu from Pokémon and Stitch from Disney’s Lilo & Stitch. One of my favorite Taiwanese payment cards isn’t even a keychain at all—it’s a plastic version of Sailor Moon’s wand made by EasyCard, which (naturally) lights up when you complete a transaction.
I have been obsessed with these keychains and novelty toys since I began reporting on Taiwan several years ago. They’re the most delightful side effect of the island’s move toward cashless payments, and they demonstrate just how different Taiwan’s digital infrastructure is from China’s. Nearly every consumer transaction in China happens through either Alibaba or Tencent, two tech giants that together hold a near-duopoly on payments. Whether you’re buying a bowl of noodles at a street stall or a designer purse in a Shanghai boutique, you will almost always find both an Alipay and a WeChat Pay QR code.
In contrast, Taiwan has developed a pluralistic network of NFC cards and mobile wallets layered atop its dense transit system and network of convenience stores. The result is a cashless framework that is tactile, decentralized, and deeply local. In Taipei, people often “tap” to pay, while in Beijing, they “scan.” Taiwan’s technology is arguably just as sophisticated as China’s. In fact, Alibaba followed the island’s lead last year and launched its own tap payment method.
The Disney-OpenAI Deal Redefines the AI Copyright War
On Thursday, Disney and OpenAI announced a deal that might have seemed unthinkable not so long ago. Starting next year, OpenAI will be able to use Disney characters like Mickey Mouse, Ariel, and Yoda in its Sora video-generation model. Disney will take a $1 billion stake in OpenAI, and its employees will get access to the firm’s APIs and ChatGPT. None of this makes much sense—unless Disney was fighting a battle it couldn’t win.
Disney has always been a notoriously aggressive litigant around its intellectual property. Alongside fellow IP powerhouse Universal, it sued Midjourney in June over outputs that allegedly infringed on classic film and TV characters. The night before the OpenAI deal was announced, Disney reportedly sent a cease-and-desist letter to Google alleging copyright infringement on a “massive scale.”
On the surface, there appears to be some dissonance in Disney embracing OpenAI while suing its rivals. But it is more than likely that Hollywood is heading down the same path as media publishers when it comes to AI: signing licensing agreements where it can and turning to litigation where it can’t. (WIRED is owned by Condé Nast, which inked a deal with OpenAI in August 2024.)
“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case.
Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.
“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag.
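To see why name-based guardrails fall short, consider a toy filter; this is an illustrative sketch, not any vendor's real moderation code. Blocking prompts that mention a character by name does nothing to a prompt that merely describes the character, which a model trained on that character may still render faithfully.

```typescript
// Toy illustration of the "Snoopy problem" -- not a real content filter.
// A blocklist of character names catches direct requests but misses
// descriptive prompts that can yield the same copyrighted character.

const blockedNames = ["elsa", "mickey mouse", "yoda", "snoopy"];

function isBlocked(prompt: string): boolean {
  const lower = prompt.toLowerCase();
  return blockedNames.some((name) => lower.includes(name));
}

console.log(isBlocked("Elsa at a Wendy's drive-through")); // true
console.log(
  isBlocked("an ice queen in a blue gown singing on a frozen mountain")
); // false, yet the model may render Elsa anyway
```

That asymmetry is part of why output-side licensing, rather than ever-stricter input filtering, is where agreements like Disney's tend to land.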
Cursor Launches an AI Coding Tool for Designers
Cursor, the wildly popular AI coding startup, is launching a new feature that lets people design the look and feel of web applications with AI. The tool, Visual Editor, is essentially a vibe-coding product for designers, giving them access to the same fine-grained controls they’d expect from professional design software. But in addition to making changes manually, the tool lets them request edits from Cursor’s AI agent using natural language.
Cursor is best known for its AI coding platform, but with Visual Editor, the startup wants to capture other parts of the software creation process. “The core that we care about, professional developers, never changes,” Cursor’s head of design, Ryo Lu, tells WIRED. “But in reality, developers are not by themselves. They work with a lot of people, and anyone making software should be able to find something useful out of Cursor.”
Cursor is one of the fastest-growing AI startups of all time. Since its 2023 debut, the company says it has surpassed $1 billion in annual recurring revenue and counts tens of thousands of companies, including Nvidia, Salesforce, and PwC, as customers. In November, the startup closed a $2.3 billion funding round that brought its valuation to nearly $30 billion.
Cursor was an early leader in the AI coding market, but it’s now facing more pressure than ever from larger competitors like OpenAI, Anthropic, and Google. The startup has historically licensed AI models from these companies, but now its rivals are investing heavily in AI coding products of their own. Anthropic’s Claude Code, for example, grew even faster than Cursor, reaching $1 billion in annual recurring revenue just six months after launch. In response, Cursor has started developing and deploying its own AI models.
Traditionally, building software applications has required many different teams working together across a wide range of products and tools. By integrating design capabilities directly into its coding environment, Cursor wants to show that it can bring these functions together into a single platform.
“Before, designers used to live in their own world of pixels and frames, and they don’t really translate to code. So teams had to build processes to hand off tasks back and forth between developers and designers, but there was a lot of friction,” says Lu. “We kind of melded the design world and the coding world together into one interface with one AI agent.”
AI-Powered Web Design
In a demo at WIRED’s San Francisco headquarters, Cursor’s product engineering lead Jason Ginsberg showcased how Visual Editor could modify the aesthetics of a webpage.
A traditional design panel on the right lets users adjust fonts, add buttons, create menus, or change backgrounds. On the left, a chat interface accepts natural-language requests, such as “make this button’s background color red.” Cursor’s agent then applies those changes directly to the code base.
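Cursor has not detailed how Visual Editor represents these edits internally. As a rough sketch, with every name below hypothetical, you can think of both paths converging on the same structured change: the design panel emits it from its controls, the AI agent derives it from the prompt, and either way it is written back into the project's source.

```typescript
// Hypothetical sketch -- not Cursor's actual internals. The design panel
// and the chat path are modelled as producing the same structured edit,
// which is then applied to the project's stylesheet.

type StyleEdit = {
  selector: string; // which element to change, e.g. "#buy-button"
  property: string; // CSS property to set
  value: string;    // new value
};

// From the panel, this comes straight off the controls; from chat, an agent
// would derive it from "make this button's background color red".
const edit: StyleEdit = {
  selector: "#buy-button",
  property: "background-color",
  value: "red",
};

// Naive application: append an overriding rule to the stylesheet text.
function applyEdit(css: string, e: StyleEdit): string {
  return `${css}\n${e.selector} { ${e.property}: ${e.value}; }`;
}

console.log(applyEdit("#buy-button { background-color: #2563eb; }", edit));
```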
Earlier this year, Cursor released its own web browser that works directly within its coding environment. The company argues the browser creates a better feedback loop when developing products, allowing engineers and designers to view requests from real users and access Chrome-style developer tools.