Disinformation Floods Social Media After Nicolás Maduro’s Capture


Within minutes of Donald Trump announcing in the early hours of Saturday morning that US troops had captured Venezuelan president Nicolás Maduro and his wife, Cilia Flores, disinformation about the operation flooded social media.

Some people shared old videos across social platforms, falsely claiming that they showed the attacks on the Venezuelan capital Caracas. On TikTok, Instagram, and X, people shared AI-generated images and videos that claimed to show US Drug Enforcement Administration agents and various law enforcement personnel arresting Maduro.

In recent years, major global incidents have triggered huge amounts of disinformation on social media as tech companies have pulled back efforts to moderate their platforms. Many accounts have sought to take advantage of these lax rules to boost engagement and gain followers.

“The United States of America has successfully carried out a large scale strike against Venezuela and its leader, President Nicolas Maduro, who has been, along with his wife, captured and flown out of the Country,” Trump wrote in a Truth Social post in the early hours of Saturday morning.

Hours later, US attorney general Pam Bondi announced that Maduro and his wife had been indicted in the Southern District of New York and charged with narco-terrorism conspiracy, cocaine importation conspiracy, possession of machine guns and destructive devices, and conspiracy to possess machine guns and destructive devices.

“They will soon face the full wrath of American justice on American soil in American courts,” Bondi wrote on X.

Within minutes of the news of Maduro’s arrest breaking, an image claiming to show two DEA agents flanking the Venezuelan president spread widely on multiple platforms.

However, using SynthID, a watermarking technology developed by Google DeepMind and designed to identify AI-generated images, WIRED was able to determine that the image was likely fake.

“Based on my analysis, most or all of this image was generated or edited using Google AI,” Google’s Gemini chatbot wrote after analyzing the image being shared online. “I detected a SynthID watermark, which is an invisible digital signal embedded by Google’s AI tools during the creation or editing process. This technology is designed to remain detectable even when images are modified, such as through cropping or compression.” The fake image was first reported by fact-checker David Puente.
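
For readers who want to attempt a similar check, the sketch below shows one way to ask Gemini about an image programmatically, using Google’s generative AI Python SDK. This is a minimal illustration rather than WIRED’s actual workflow: the API key, model name, file name, and prompt are placeholders, and whether the model reports a SynthID result for any given image depends on Google’s tooling, not on this code.

```python
# A rough sketch of querying Gemini about a suspect image with Google's
# google-generativeai Python SDK. The file name, model name, and prompt are
# placeholders; SynthID watermark detection itself happens on Google's side.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice
image = Image.open("suspect_image.jpg")            # the image being checked

prompt = (
    "Was this image generated or edited with Google AI? "
    "If you detect a SynthID watermark, say so."
)

response = model.generate_content([image, prompt])
print(response.text)  # Gemini's assessment, e.g. whether a SynthID watermark was found
```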

While X’s AI chatbot Grok also confirmed that the image was fake when asked by several X users, it falsely claimed that the image was an altered version of the arrest of Mexican drug boss Dámaso López Núñez in 2017.




The Razr Fold Adds a Book-Style Foldable to Motorola’s Lineup


Motorola does have one other genuinely new phone: the Signature. It’s a new line of “premium” phones, but the catch is that these devices won’t be sold in the US. For its candy-bar phones, Motorola has dipped its toes into flagship territory every so often, only to dip back out as it struggles to compete with the likes of Apple and Samsung; it’s predominantly known for its Moto G budget phones, particularly in the US.

The Signature is just 6.99 millimeters thick—it’s no iPhone Air, but that’s thinner than your usual handset—and it has a fabric-like material on the back. It’s powered by the Snapdragon 8 Gen 5 chipset, has four 50-megapixel cameras on the back, and packs a 5,200-mAh silicon-carbon battery. More importantly, Motorola is finally committing to seven years of blanket software updates for this phone. It’s a shame US customers won’t be able to enjoy that.

An AI Pendant

Project Maxwell has a camera, a microphone, and voice control.

Photograph: Julian Chokkattu

On the artificial intelligence front, Motorola and its parent company, Lenovo, are working together on a unified AI assistant called Qira. It brings together several AI features both companies have deployed over the years into a single platform.

It’s powered by various large language models, from Copilot and Perplexity to Google’s Gemini, along with Motorola and Lenovo’s own in-house LLMs. The idea is that instead of reaching for these various services, you can just ask Qira, whether you’re on a Lenovo laptop or a Motorola phone. It’ll launch first on Lenovo PCs later this year, then come to select Razr, Edge, and Signature devices.

Qira also powers Project Maxwell, a concept AI pendant from Motorola’s 312 Labs. If you’re tired of pulling out your smartphone to snap a pic and search for something, this wearable solves exactly that. It has a camera and microphone, so just tap the touch-sensitive button on the front and ask a question about whatever you’re looking at—whether you want to know what kind of tree is in front of you or want the date of a concert on a poster added to your calendar.




Grok Is Pushing AI ‘Undressing’ Mainstream


Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has gone on to create potentially thousands of nonconsensual “undressed” and “bikini” images of women.

Every few seconds, Grok is continuing to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbot’s publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits and in various levels of undress were published by Grok in under five minutes, an analysis of posts shows.

The images do not contain nudity but involve the Musk-owned chatbot “stripping” clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok’s safety guardrails, users request, not always successfully, that photos be edited to show women wearing a “string bikini” or a “transparent bikini.”

While harmful AI image generation technology has been used to digitally harass and abuse women for years—these outputs are often called deepfakes and are created by “nudify” software—the ongoing use of Grok to create vast numbers of nonconsensual images appears to be the most mainstream and widespread instance of such abuse to date. Unlike dedicated nudify or “undress” software, Grok doesn’t charge the user money to generate images, produces results in seconds, and is available to millions of people on X—all of which may help to normalize the creation of nonconsensual intimate imagery.

“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”

Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to change an image that has been shared.

Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users requested Grok alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been “stripped” to bikinis, reports say.

Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” a typical message reads. In a different series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”

One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy group [creating images], it’s literally everyone, of all backgrounds. People posting on their mains. Zero concern.”




The Inevitable Rise of the Art TV



New televisions from Amazon, Hisense, TCL, and others are designed to display fine art and look like a painting when they’re switched off. It’s all thanks to smaller living spaces and new screen tech.


