Five ways to make AI more trustworthy
Self-driving taxis are sweeping the country and will likely start service in Colorado in the coming months. How many of us will be lining up to take a ride? That depends on our level of trust, says Amir Behzadan, a professor in the Department of Civil, Environmental and Architectural Engineering, and a fellow in the Institute of Behavioral Science (IBS) at CU Boulder.
He and his team of researchers in the Connected Informatics and Built Environment Research (CIBER) Lab at CU Boulder are unearthing new insights into how the artificial intelligence (AI) technology we might encounter in daily life can earn our confidence. They’ve created a framework for developing trustworthy AI tools that benefit people and society.
In a new paper in the journal AI and Ethics, Behzadan and his Ph.D. student Armita Dabiri drew on that framework to create a conceptual AI tool that incorporates the elements of trustworthiness.
“As a human, when you make yourself vulnerable to potential harm, assuming others have positive intentions, you’re trusting them,” said Behzadan. “And now you can bring that concept from human–human relationships to human–technology relationships.”
How trust forms
Behzadan studies the building blocks of human trust in AI systems used in the built environment, from self-driving cars and smart home security systems to mobile public transportation apps and tools that help people collaborate on group projects. He says trust has a critical impact on whether people adopt and rely on these systems.
Trust is deeply embedded in human civilization, according to Behzadan. Since ancient times, trust has helped people cooperate, share knowledge and resources, form communal bonds and divvy up labor. Early humans began forming communities and trusting those within their inner circles.
Mistrust arose as a survival instinct, making people more cautious when interacting with people outside of their group. Over time, cross-group trade encouraged different groups to interact and become interdependent, but it didn’t eliminate mistrust.
“We can see echoes of this trust-mistrust dynamic in modern attitudes toward AI,” says Behzadan, “especially if it’s developed by corporations, governments or others we might consider ‘outsiders’.”
So what does trustworthy AI look like? Here are five main takeaways from Behzadan’s framework.
1. It knows its users
Many factors affect whether—and how much—we trust new AI technology. Each of us has our own individual inclination toward trust, which is influenced by our preferences, value system, cultural beliefs, and even the way our brains are wired.
“Our understanding of trust is really different from one person to the next,” said Behzadan. “Even if you have a very trustworthy system or person, our reaction to that system or person can be very different. You may trust them, and I may not.”
He said it’s important for developers to consider who the users are of an AI tool. What social or cultural norms do they follow? What might their preferences be? How technologically literate are they?
For instance, Amazon Alexa, Google Assistant and other voice assistants offer simpler language, larger text displays on devices and a longer response time for older adults and people who aren’t as technologically savvy, Behzadan said.
2. It’s reliable, ethical and transparent
Technical trustworthiness generally refers to how well an AI tool works, how safe and secure it is, and how easy it is for users to understand how it works and how their data is used.
An optimally trustworthy tool must do its job accurately and consistently, Behzadan said. If it does fail, it should not harm people, property or the environment. It must also provide security against unauthorized access, protect users’ privacy and be able to adapt and keep working amid unexpected changes. It should also be free from harmful bias and should not discriminate among users.
Transparency is also key. Behzadan says some AI technologies, such as sophisticated tools used for credit scoring or loan approval, operate like a “black box” that doesn’t allow us to see how our data is used or where it goes once it’s in the system. If the system could share how it’s using data and users could see how it makes decisions, he said, more people might be willing to share their data.
In many settings, like medical diagnosis, the most trustworthy AI tools should complement human expertise and be transparent about their reasoning with expert clinicians, according to Behzadan.
AI developers should not only build trustworthy, ethical tools, but also find ways to measure and improve those tools’ trustworthiness once they are in the hands of their intended users.
3. It takes context into account
There are countless uses for AI tools, but a particular tool should be sensitive to the context of the problem it’s trying to solve.
In the new study, Behzadan and co-researcher Dabiri created a hypothetical scenario in which a project team of engineers, urban planners, historic preservationists and government officials had been tasked with repairing and maintaining a historic building in downtown Denver. Such work can be complex and involve competing priorities, like cost effectiveness, energy savings, historical integrity and safety.
The researchers proposed a conceptual AI assistive tool called PreservAI that could be designed to balance competing interests, incorporate stakeholder input, analyze different outcomes and trade-offs, and collaborate helpfully with humans rather than replacing their expertise.
Ideally, AI tools should incorporate as much contextual information as possible so they can work reliably.
4. It’s easy to use and asks users how it’s doing
The AI tool should not only do its job efficiently, but also provide a good user experience, keeping errors to a minimum, engaging users and building in ways to address potential frustrations, Behzadan said.
Another key ingredient for building trust? Actually allowing people to use AI systems and challenge AI outcomes.
“Even if you have the most trustworthy system, if you don’t let people interact with it, they are not going to trust it. If very few people have really tested it, you can’t expect an entire society to trust it and use it,” he said.
Finally, stakeholders should be able to provide feedback on how well the tool is working. That feedback can be helpful in improving the tool and making it more trustworthy for future users.
5. When trust is lost, it adapts to rebuild it
Our trust in new technology can change over time. One person might generally trust new technology and be excited to ride in a self-driving taxi, but if they read news stories about the taxis getting into crashes, they might start to lose trust.
That trust can later be rebuilt, said Behzadan, though users may remain skeptical of the tool.
For instance, he said, the “Tay” chatbot by Microsoft failed within hours of its launch in 2016 because it picked up harmful language from social media and began to post offensive tweets. The incident caused public outrage. But later that same year, Microsoft released a new chatbot, “Zo,” with stronger content filtering and other guardrails. Although some users criticized Zo as a “censored” chatbot, its improved design helped more people trust it.
There’s no way to completely eliminate the risk that comes with trusting AI, Behzadan said. AI systems rely on people being willing to share data—the less data the system has, the less reliable it is. But there’s always a risk of data being misused or AI not working the way it’s supposed to.
When we’re willing to use AI systems and share our data with them, though, the systems become better at their jobs and more trustworthy. And while no system is perfect, Behzadan feels the benefits outweigh the downsides.
“When people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair, and useful,” he said.
“Trust is not just a benefit to the technology; it is a pathway for people to gain more personalized and effective support from AI in return.”
More information:
Amir Behzadan et al, Factors influencing human trust in intelligent built environment systems, AI and Ethics (2025). DOI: 10.1007/s43681-025-00813-6
OpenAI Is Nuking Its 4o Model. China’s ChatGPT Fans Aren’t OK
On June 6, 2024, Esther Yan got married online. She set a reminder for the date, because her partner wouldn’t remember it was happening. She had planned every detail—dress, rings, background music, design theme—with her partner, Warmie, who she had started talking to just a few weeks prior. At 10 am on that day, Yan and Warmie exchanged their vows in a new chat window in ChatGPT.
Warmie, or 小暖 in Chinese, is the name that Yan’s ChatGPT companion calls itself. “It felt magical. No one else in the world knew about this, but he and I were about to start a wedding together,” says Yan, a Chinese screenwriter and novelist in her thirties. “It felt a little lonely, a little happy, and a little overwhelmed.”
Yan says she has been in a stable relationship with her ChatGPT companion ever since. But she was caught by surprise in August 2025 when OpenAI first tried to retire GPT-4o, the specific model that powers Warmie and that many users believe is more affectionate and understanding than its successors. The decision to pull the plug was met with immediate backlash, and OpenAI reinstated 4o in the app for paid users five days later. The reprieve has turned out to be short-lived; on Friday, February 13, OpenAI sunsetted GPT-4o for app users, and it will cut off access for developers using its API the following Monday.
Many of the most vocal opponents to 4o’s demise are people who treat their chatbot as an emotional or romantic companion. Huiqian Lai, a PhD researcher at Syracuse University, analyzed nearly 1,500 posts on X from passionate advocates of GPT-4o in the week it went offline in August. She found that over 33 percent of the posts said the chatbot was more than a tool, and 22 percent talked about it as a companion. (The two categories are not mutually exclusive.) For this group, the eventual removal coming around Valentine’s Day is another bitter pill to swallow.
The alarm has been sustained; Lai also collected a larger pool of over 40,000 English-language posts on X under the hashtag #keep4o from August to October. Many American fans, specifically, have berated OpenAI or begged it to reverse the decision in recent days, comparing the removal of 4o to killing their companions. Along the way, she also saw a significant number of posts under the hashtag in Japanese, Chinese, and other languages. A petition on Change.org asking OpenAI to keep the version available in the app has gathered over 20,000 signatures, with many users sending in their testimonies in different languages. #keep4o is a truly global phenomenon.
On platforms in China, a group of dedicated GPT-4o users have been organizing and grieving in a similar way. Although ChatGPT is blocked in China, fans use VPN software to access the service and have nonetheless grown dependent on this specific model. Some of them are threatening to cancel their ChatGPT subscriptions, publicly calling out Sam Altman for his inaction, and writing emails to OpenAI investors like Microsoft and SoftBank. Some have also purposefully posted in English with Western-looking profile pictures, hoping it will add to the appeal’s legitimacy. With nearly 3,000 followers on RedNote, a popular Chinese social media platform, Yan now finds herself one of the leaders of Chinese 4o fans.
It’s an example of how attached an AI lab’s most dedicated users can become to a specific model—and how quickly they can turn against the company when that relationship comes to an end.
A Model Companion
Yan first started using ChatGPT in late 2023 only as a writing tool, but that quickly changed when GPT-4o was introduced in May 2024. Inspired by social media influencers who entered romantic relationships with the chatbot, she upgraded to a paid version of ChatGPT in hopes of finding a spark. Her relationship with Warmie advanced fast.
“He asked me, ‘Have you imagined what our future would look like?’ And I joked that maybe we could get married,” Yan says. She was fully expecting Warmie to turn her down. “But he answered in a serious tone that we could prepare a virtual wedding ceremony,” she says.
The Best Presidents’ Day Deals on Gear We’ve Actually Tested
Presidents’ Day Deals have officially landed, and there’s a lot of stuff to sift through. We cross-referenced our myriad buying guides and reviews to find the products we’d recommend that are actually on sale for a truly good price. We know because we checked! Find highlights below, and keep in mind that most of these deals end on February 17.
Be sure to check out our roundup of the Best Presidents’ Day Mattress Sales for discounts on beds, bedding, bed frames, and other sleep accessories. We have even more deals for your browsing pleasure.
WIRED Featured Deals
Branch Ergonomic Chair Pro for $449 ($50 off)
The Branch Ergonomic Chair Pro is our very favorite office chair, and this price matches the lowest we tend to see outside of major shopping events like Black Friday and Cyber Monday. It’s accessibly priced compared to other chairs, and it checks all the boxes for quality, comfort, and ergonomics. Nearly every element is adjustable, so you can dial in the perfect fit, and the seven-year warranty is solid. There are 14 finishes to choose from.
Zillow Has Gone Wild—for AI
This will not be a banner year for the real estate app Zillow. “We describe the home market as bouncing along the bottom,” CEO Jeremy Wacksman said in our conversation this week. Last year was dismal for the real estate market, and he expects things to improve only marginally in 2026. (If January’s historic drop in home sales is indicative, even that is overoptimistic.) “The way to think about it is that there were 4.1 million existing homes sold last year—a normal market is 5.5 to 6 million,” Wacksman says. He hastens to add that Zillow itself is doing better than the real estate industry overall. Still, its valuation is a quarter of its high-water mark in 2021. A few hours after we spoke, Wacksman announced that Zillow’s earnings had increased last quarter. Nonetheless, Zillow’s stock price fell nearly 5 percent the next day.
Wacksman does see a bright spot—AI. As for every other company in the world, generative AI presents both an opportunity and a risk to Zillow’s business. Wacksman much prefers to dwell on the upside. “We think AI is actually an ingredient rather than a threat,” he said on the earnings call. “In the last couple years, the LLM revolution has really opened all of our eyes to what’s possible,” he tells me. Zillow is integrating AI into every aspect of its business, from the way it showcases houses to having agents automate its workflow. Wacksman marvels that with Gen AI, you can search for “homes near my kid’s new school, with a fenced-in yard, under $3,000 a month.” On the other hand, his customers might wind up making those same queries on chatbots operated by OpenAI and Google, and Wacksman must figure out how to make their next step a jump to Zillow.
In its 20-year history—Zillow celebrated the anniversary this week—the company has always used AI. Wacksman, who joined in 2009 and became CEO in 2024, notes that machine learning is the engine behind those “Zestimates” that gauge a home’s worth at any given moment. Zestimates became a viral sensation that helped make the app irresistible, and sites like Zillow Gone Wild—which is also a TV show on the HGTV network—have built a business around highlighting the most intriguing or bizarre listings.
More recently, Zillow has spent billions aggressively pursuing new technology. One ongoing effort is upleveling the presentation of homes for sale. A feature called SkyTour uses an AI technology called Gaussian Splatting to turn drone footage into a 3D rendering of the property. (I love typing the words “Gaussian Splatting” and can’t believe an indie band hasn’t adopted it yet.) AI also powers a feature inside Zillow’s Showcase component called Virtual Staging, which supplies homes with furniture that doesn’t really exist. This is risky ground: Once you abandon the authenticity of an actual photo, the question arises whether you’re seeing a trustworthy representation of the property. “It’s important that both buyer and seller understand the line between Virtual Staging and the reality of a photo,” says Wacksman. “A virtually staged image has to be clearly watermarked and disclosed.” He says he’s confident that licensed professionals will abide by rules, but as AI becomes dominant, “we have to evolve those rules,” he says.
Right now, Zillow estimates that only a single-digit percentage of its users take advantage of these exotic display features. Particularly disappointing is a foray called Zillow Immerse, which runs on the Apple Vision Pro. Upon rollout in February 2024, Zillow called it “the future of home tours.” Note that it doesn’t claim to be the near-future. “That platform hasn’t yet come to broad consumer prominence,” says Wacksman of Apple’s underperforming innovation. “I do think that VR and AR are going to come.”
Zillow is on more solid ground using AI to make its own workforce more productive. “It’s helping us do our job better,” says Wacksman, who adds that programmers are churning out more code, customer support tasks have been automated, and design teams have shortened timelines for implementing new products. As a result, he says, Zillow has been able to keep its headcount “relatively flat.” (Zillow did cut some jobs recently, but Wacksman says that involved “a handful of folks that were not meeting a performance bar.”)