
Five ways to make AI more trustworthy


Self-driving taxis are sweeping the country and will likely start service in Colorado in the coming months. How many of us will be lining up to take a ride? That depends on our level of trust, says Amir Behzadan, a professor in the Department of Civil, Environmental and Architectural Engineering, and a fellow in the Institute of Behavioral Science (IBS) at CU Boulder.

He and his team of researchers in the Connected Informatics and Built Environment Research (CIBER) Lab at CU Boulder are unearthing new insights into how the artificial intelligence (AI) technology we might encounter in daily life can earn our confidence. They’ve created a framework for developing trustworthy AI tools that benefit people and society.

In a new paper in the journal AI and Ethics, Behzadan and his Ph.D. student Armita Dabiri drew on that framework to create a conceptual AI tool that incorporates the elements of trustworthiness.

“As a human, when you make yourself vulnerable to potential harm, assuming others have positive intentions, you’re trusting them,” said Behzadan. “And now you can bring that concept from human–human relationships to human–technology relationships.”

How trust forms

Behzadan studies the building blocks of human trust in AI systems that are used in the built environment, from self-driving cars and smart home security systems to mobile public transportation apps and systems that help people collaborate on group projects. He says trust has a critical impact on whether people will adopt and rely on them or not.

Trust is deeply embedded in human civilization, according to Behzadan. Since ancient times, trust has helped people cooperate, share knowledge and resources, form communal bonds and divvy up labor. Early humans began forming communities and trusting those within their inner circles.

Mistrust arose as a survival instinct, making people more cautious when interacting with people outside of their group. Over time, cross-group trade encouraged different groups to interact and become interdependent, but it didn’t eliminate mistrust.

“We can see echoes of this trust-mistrust dynamic in modern attitudes toward AI,” says Behzadan, “especially if it’s developed by corporations, governments or others we might consider ‘outsiders’.”

So what does trustworthy AI look like? Here are five main takeaways from Behzadan’s framework.

1. It knows its users

Many factors affect whether—and how much—we trust new AI technology. Each of us has our own individual inclination toward trust, which is influenced by our preferences, value system, cultural beliefs, and even the way our brains are wired.

“Our understanding of trust is really different from one person to the next,” said Behzadan. “Even if you have a very trustworthy system or person, our reaction to that system or person can be very different. You may trust them, and I may not.”

He said it’s important for developers to consider who the users are of an AI tool. What social or cultural norms do they follow? What might their preferences be? How technologically literate are they?

For instance, Amazon Alexa, Google Assistant and other voice assistants offer simpler language, larger text displays on devices and a longer response time for older adults and people who aren’t as technologically savvy, Behzadan said.

2. It’s reliable, ethical and transparent

Technical trustworthiness generally refers to how well an AI tool works, how safe and secure it is, and how easy it is for users to understand how it works and how their data is used.

An optimally trustworthy tool must do its job accurately and consistently, Behzadan said. If it does fail, it should not harm people, property or the environment. It must also provide security against unauthorized access, protect users’ privacy and be able to adapt and keep working amid unexpected changes. It should also be free from harmful bias and should not discriminate between different users.

Transparency is also key. Behzadan says some AI technologies, such as sophisticated tools used for credit scoring or loan approval, operate like a “black box” that doesn’t allow us to see how our data is used or where it goes once it’s in the system. If the system could share how it’s using data and users could see how it makes decisions, he said, more people might be willing to share their data.

In many settings, like medical diagnosis, the most trustworthy AI tools should complement human expertise and be transparent about their reasoning with expert clinicians, according to Behzadan.

AI developers should not only try to develop trustworthy, ethical tools, but also find ways to measure and improve their tools’ trustworthiness once they are launched for the intended users.

3. It takes context into account

There are countless uses for AI tools, but a particular tool should be sensitive to the context of the problem it’s trying to solve.

In the newest study, Behzadan and co-researcher Dabiri created a hypothetical scenario where a project team of engineers, urban planners, historic preservationists and government officials had been tasked with repairing and maintaining a historical building in downtown Denver. Such work can be complex and involve competing priorities, like cost effectiveness, energy savings, historical integrity and safety.

The researchers proposed a conceptual AI assistive tool called PreservAI that could be designed to balance competing interests, incorporate stakeholder input, analyze different outcomes and trade-offs, and collaborate helpfully with humans rather than replacing their expertise.

Ideally, AI tools should incorporate as much contextual information as possible so they can work reliably.

4. It’s easy to use and asks users how it’s doing

The AI tool should not only do its job efficiently, but also provide a good user experience, keeping errors to a minimum, engaging users and building in ways to address potential frustrations, Behzadan said.

Another key ingredient for building trust? Actually allowing people to use AI systems and challenge AI outcomes.

“Even if you have the most trustworthy system, if you don’t let people interact with it, they are not going to trust it. If very few people have really tested it, you can’t expect an entire society to trust it and use it,” he said.

Finally, stakeholders should be able to provide feedback on how well the tool is working. That feedback can be helpful in improving the tool and making it more trustworthy for future users.

5. When trust is lost, it adapts to rebuild it

Our trust in new technology can change over time. One person might generally trust new technology and be excited to ride in a self-driving taxi, but if they read news stories about the taxis getting into crashes, they might start to lose trust.

That trust can later be rebuilt, said Behzadan, although users may remain skeptical of the tool.

For instance, he said, the “Tay” chatbot by Microsoft failed within hours of its launch in 2016 because it picked up harmful language from social media and began to post offensive tweets. The incident caused public outrage. But later that same year, Microsoft released a new chatbot, “Zo,” with stronger content filtering and other guardrails. Although some users criticized Zo as a “censored” chatbot, its improved design helped more people trust it.

There’s no way to completely eliminate the risk that comes with trusting AI, Behzadan said. AI systems rely on people being willing to share data—the less data the system has, the less reliable it is. But there’s always a risk of data being misused or AI not working the way it’s supposed to.

When we’re willing to use AI systems and share our data with them, though, the systems become better at their jobs and more trustworthy. And while no system is perfect, Behzadan feels the benefits outweigh the downsides.

“When people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair, and useful,” he said.

“Trust is not just a benefit to the technology; it is a pathway for people to gain more personalized and effective support from AI in return.”

More information:
Amir Behzadan et al, Factors influencing human trust in intelligent built environment systems, AI and Ethics (2025). DOI: 10.1007/s43681-025-00813-6

Citation:
Five ways to make AI more trustworthy (2025, October 22)
retrieved 22 October 2025
from https://techxplore.com/news/2025-10-ways-ai-trustworthy.html







Our Favorite Affordable Air Purifier Is Temporarily Even Cheaper

Tired of the stale, fetid air looming over your apartment like a cloud? Check out the Coway Airmega Mighty, an already wallet-friendly home air purifier that’s even cheaper right now as part of the Amazon Big Spring Sale. It’s currently marked down to just $154, a $76 discount from its typical price, but you’ll want to move quickly if you’re interested, as the deal is only available for a limited time.

Coway Airmega Mighty AP-1512HH

Despite its low price tag and squat stature, the Airmega Mighty is capable of cleaning a substantial amount of space. At full bore, it can handle a 361-square-foot space, although you’ll get the best performance, and save your ears, if you’re closer to a 200-square-foot room. If you don’t want it running constantly, there are built-in timers to automatically shut off after 1, 4, or 8 hours, or you can use Eco Mode, which will run until the Mighty doesn’t sense any dirty air for half an hour.

That’s right, the Airmega Mighty has a built-in air quality sensor, and it reflects the current state of the air using a colored light with three levels. It uses those readings to automatically adjust the fan speed and timer settings on the fly, while giving you a peek at just how bad the air you’re breathing is. While it lacks integration with smart home setups like Google Home, it makes up for it by handling all of its own business without Wi-Fi or extra apps on your phone.

While the Coway Airmega Mighty is available in three colors, only the black and silver model is currently discounted, so you’ll have to pay full price if it doesn’t match your living room’s color scheme. We’ve put in the work testing every air purifier we could get our hands on, so make sure to check out the full guide if you’re trying to clean up your space. The Coway is discounted as part of Amazon’s Big Spring Sale, and we’ve got the best deals from products we’ve tested gathered in one place if you want to save some bucks.




In a Big Reversal, Zohran Mamdani Tells NYC Agencies to Use TikTok

New York City mayor Zohran Mamdani, who rode a social media-fueled campaign to Gracie Mansion, is reversing an Eric Adams–era directive barring TikTok from government-owned devices. Local agencies will now be able to post about their projects on the app, though with new guardrails to protect city networks.

“The Mamdani administration is committed to using every tool in our toolbox to communicate with New Yorkers,” says the email to agencies, obtained by WIRED. “At a moment when people are turning to city government for information about free services, emergency situations, upcoming events, and more, we want to open up new avenues of communication with the public and help deliver the information New Yorkers need.”

In August 2023, then-mayor Adams barred the use of TikTok on government devices, joining the ranks of other state and federal agencies that at the time deemed the app a major security risk. Adams spokesperson Jonah Allon said then that the city’s Cyber Command office had decided that TikTok, which was owned by the China-based company ByteDance, “posed a security threat to the city’s technical networks and directed its removal from city-owned devices.”

The directive resulted in a number of popular city-run accounts shutting down, including accounts for the NYC Departments of Sanitation and Parks and Recreation. As of Tuesday morning, the accounts’ bios read, “This account was operated by NYC until August 2023. It’s no longer monitored.”

Now, these TikTok accounts will be allowed to reopen with a few new rules aimed at protecting the security of NYC’s networks and devices while allowing agencies to communicate with citizens on the popular app. In order to use TikTok, agencies will be required to use separate, government-issued devices for the app that “cannot contain sensitive or restricted data, and they cannot be used for email, internal systems, or privileged access,” according to the email to agencies. Agencies will designate specific staff from media and press offices to run the TikTok accounts with city government emails, not personal ones.

“In a fragmented media landscape, more and more people—especially younger people—are looking beyond the four corners of their television screen to stay informed,” Mamdani said in a statement to WIRED. “Our responsibility is simple: Meet people where they are. That means stepping outside our comfort zones and communicating in ways that reflect how New Yorkers actually live, work, and connect.”

Mamdani’s rule reversal comes after his November election that relied heavily on social media to conduct voter outreach. Mamdani leveraged TikTok to recruit volunteers and amplify his policy platform. Over his first few months in office, Mamdani has continued to leverage social media platforms, publishing a variety of public-service announcements related to city-run programs.

Ahead of dangerous winter weather in January, Mamdani published a video to the official @nycmayor account on Instagram asking New Yorkers to sign up for the city’s free emergency communications program, NotifyNYC. The program netted more than 32,000 new subscribers in the four days after the video was released, according to stats provided by Mamdani’s office. Last year, New York City Emergency Management ran a $240,000 advertising round for NotifyNYC, acquiring around 48,000 new subscribers. Mamdani also created a handful of videos asking New Yorkers to join a Department of Sanitation snow-shoveling program. Around 5,000 people signed up, tripling the number previously enrolled in the program.

The situation has also changed for the app. In January 2026, TikTok finalized a deal with the Trump administration to form a new US-based version of the company run by American investors, including Oracle. The consortium of American investors staved off a nationwide ban of the app.






The $1 Million Aston Martin Valhalla Makes You Drive Better Than You Thought Possible

Yes, it’s a supercar, but it’s also sold very much as a track and road car, one that accommodates a passenger, all of which means road trips and weekend-away stays are very much possible. Well, they would be if there were anywhere at all to store luggage. Lamborghini managed to find some luggage space in its Revuelto design, so there’s no excuse here, really.

The design department otherwise has had a field day. Top-mounted exhausts, dihedral doors, and even an F1-style roof snorkel to accompany that air-braking rear wing deliver an exterior that is nothing short of arresting. Somehow, none of this looks garish or out of place on the Valhalla in person. Everything has a purpose, and nothing seems to scream as flexing or showing off. There’s a cohesion to the Valhalla aesthetic that others might not manage.

Inside, it is much more comfortable than you would imagine. The one-piece carbon-fiber seats look like they are going to be tricky, but on my two-hour road drive, they were supportive and, yes, comfortable. Visibility is surprisingly good, but a camera system is required for the rearview mirror because there’s no rear window. The rest of the interior is minimal, but the steering wheel is excellent (which, as Jony Ive will tell you, is no mean feat) and neatly signals some motorsport cool.

Photograph: Jeremy White

The one gripe inside is the dash and center screens, which are clear and responsive and offer the usual smartphone mirroring options, but they aren’t luxurious. Ferrari’s new Luce and BMW’s iX3 and i3 show how much more effort is going into screen design these days, but here Aston has opted for decidedly functional, off-the-shelf-looking displays. If I were parting with a million dollars, I might want more consideration here.

Odin’s Beard

On the road and track is where the Valhalla excels. Impressive doesn’t come close, and, despite the delays, the patience shown by Aston has clearly paid dividends. The ride is superb, and the car is ridiculously quick. The chassis is exceptionally agile, making the car feel alert and light. There are enormous reserves of grip to match the formidable braking and acceleration, and as a result, this is a car that flatters you; it effortlessly seduces you into driving much harder and better than you think you can, all while giving you levels of confidence you wouldn’t think possible.

I’ve driven the Lamborghini Revuelto, and yes, it’s exciting, but also there’s a part of you that is wary—the part that knows that if you don’t keep your wits about you 100 percent of the time, things will go bad very quickly. The Valhalla offers up all of that fun and excitement, but almost none of the trepidation. It is gratifying and intuitive to drive. Anyone can fully enjoy this car, not merely those used to track days. Some will say the engine note is not as full-throated as might be expected in such a car, but others will be having so much fun they won’t care. Nor should they.


