
Researchers explore how AI can strengthen, not replace, human collaboration



Ph.D. student Allen Brown is among the Tepper School of Business researchers investigating how AI can be most useful in a team dynamic. Credit: Carnegie Mellon University

Researchers from Carnegie Mellon University’s Tepper School of Business are learning how AI can be used to support teamwork rather than replace teammates.

Anita Williams Woolley is a professor of organizational behavior. She researches collective intelligence, or how well teams perform together, and how artificial intelligence could change workforce dynamics. Now, Woolley and her colleagues are helping to figure out exactly where and how AI can play a positive role.

“I’m always interested in technology that can help us become a better version of ourselves individually,” Woolley said, “but also collectively, how can we change the way we think about and structure work to be more effective?”

Woolley collaborated with technologists and others in her field to develop Collective HUman-MAchine INtelligence (COHUMAIN), a framework that seeks to understand where AI fits within the established boundaries of organizational social psychology.

The researchers behind the 2023 publication of COHUMAIN caution against treating AI like any other teammate. Instead, they see it as a partner that works under human direction, with the potential to strengthen existing capabilities or relationships. “AI agents could create the glue that is missing because of how our work environments have changed, and ultimately improve our relationships with one another,” Woolley said.

The research that makes up the COHUMAIN architecture emphasizes that while AI integration into the workplace may take shape in ways we don’t yet understand, it won’t change the fundamental principles behind organizational intelligence, and AI likely can’t fill all of the same roles as humans.

For instance, while AI might be great at summarizing a meeting, it’s still up to people to sense the mood in the room or pick up on the wider context of the discussion.

Organizations have the same needs as before, including a structure that allows them to tap into each human team member’s unique expertise. Woolley said AI may best serve in “partnership” or facilitation roles rather than managerial ones, acting as a tool that can nudge peers to check in with each other or offer the user an alternate perspective.

Safety and risk

With so much collaboration happening through screens, AI tools might help teams strengthen connections between coworkers. But those same tools also raise questions about what’s being recorded and why.

“People have a lot of sensitivity, rightly so, around privacy. Often you have to give something up to get something, and that is true here,” Woolley said.

The level of risk that users feel, both socially and professionally, can change depending on how they interact with AI, according to Allen Brown, a Ph.D. student who works closely with Woolley. Brown is exploring where this tension shows up and how teams can work through it. His research focuses on how comfortable people feel taking risks or speaking up in a group.

Brown said that, in the best case, AI could help people feel more comfortable speaking up and sharing new ideas that might not be heard otherwise. “In a classroom, we can imagine someone saying, ‘Oh, I’m a little worried. I don’t know enough for my professor, or how my peers are going to judge my question,’ or, ‘I think this is a good idea, but maybe it isn’t.’ We don’t know until we put it out there.”

Since AI relies on a digital record that might or might not be kept permanently, one concern is that a human might not know which interactions with an AI will be used for evaluation.

“In our increasingly digitally mediated workspaces, so much of what we do is being tracked and documented,” Brown said. “There’s a digital record of things, and if I’m made aware that, ‘Oh, all of a sudden our conversation might be used for evaluation,’ we actually see this significant difference in interaction.”

In Brown’s research, even when people thought their comments might be monitored or professionally judged, they still felt relatively secure talking to another human being. “We’re talking together. We’re working through something together, but we’re both people. There’s kind of this mutual assumption of risk,” he explained.

The study found that people felt more vulnerable when they thought an AI system was evaluating them. Brown wants to understand how AI can be used to create the opposite effect—one that builds confidence and trust.

“What are those contexts in which AI could be a partner, could be part of this conversational communicative practice within a pair of individuals at work, like a supervisor-supervisee relationship, or maybe within a team where they’re working through some topic that might have task conflict or relationship conflict?” Brown said. “How does AI help resolve the decision-making process or enhance the resolution so that people actually feel increased psychological safety?”

Creating a more trustworthy AI

At the individual level, Tepper researchers are also learning how the way in which AI explains its reasoning affects how people use and trust it. Zhaohui (Zoey) Jiang and Linda Argote are studying how people react to different kinds of AI systems—specifically, ones that explain their reasoning (transparent AI) versus ones that don’t explain how they make decisions (black box AI).

“We see a lot of people advocating for transparent AI,” Jiang said, “but our research reveals an advantage of keeping the AI a black box, especially for a high ability participant.”

One reason for this, she explained, is that skilled decision-makers tend to be overconfident and to distrust the AI.

“For a participant who is already doing a good job independently at the task, they are more prone to the well-documented tendency of AI aversion. They will penalize the AI’s mistake far more than the humans making the same mistake, including themselves,” Jiang said. “We find that this tendency is more salient if you tell them the inner workings of the AI, such as its logic or decision rules.”

People who struggle with decision-making actually improve their outcomes when using transparent AI models that display a moderate amount of complexity in their reasoning. “We find that telling them how the AI is thinking about this problem is actually better for less-skilled users, because they can learn from AI decision-making rules to help improve their own future independent decision-making.”

While transparency is proving to have its own use cases and benefits, Jiang said the most surprising findings are around how people perceive black box models. “When we’re not telling these participants how the model arrived at its answer, participants judge the model as the most complex. Opacity seems to inflate the sense of sophistication, whereas transparency can make the very same system seem simpler and less ‘magical,’” she said.

Each kind of model has its own use cases. While it isn’t yet cost-effective to tailor an AI to each human partner, future systems may be able to adapt how they represent themselves to help people make better decisions, she said.

“It can be dynamic in a way that it can recognize the decision-making inefficiencies of that particular individual that it is assigned to collaborate with, and maybe tweak itself so that it can help complement and offset some of the decision-making inefficiencies.”

Citation: Researchers explore how AI can strengthen, not replace, human collaboration (2025, November 1), retrieved 1 November 2025 from https://techxplore.com/news/2025-10-explore-ai-human-collaboration.html



The UK government’s AI skills programme betrays UK workers and our digital sovereignty | Computer Weekly



Last month, the UK government announced the AI Skills Boost programme, promising “free AI training for all” and claiming that the courses will give people the skills needed to use artificial intelligence (AI) tools effectively. There are multiple reasons why we don’t agree.

US dependency over UK sovereignty

The “AI Skills Boost” is the free, badged “foundation” element of the government’s AI Skills Hub, which was launched with great fanfare. There are 14 courses, exclusively from big US organisations, promoting and training on their platforms. The initiative increases dependency on US big tech – the opposite of the government’s recent conclusion, in its new AI opportunities action plan, to position the UK “to be an AI maker, not an AI taker”. It is also not clear how increasing UK workers’ reliance on and use of US big tech tools and platforms is intended to grow the UK’s homegrown AI talent.

In stark contrast to President Macron’s announcement last week that the French government will phase out dependency on US-based big tech, by using local providers to enhance digital sovereignty and privacy, technology secretary Liz Kendall’s speech was a lesson in contradictions.

Right after affirming that AI is “far too important a technology to depend entirely on other countries, especially in areas like defence, financial services and healthcare”, the secretary of state went on to say that the country’s strategy is to adopt existing technologies based overseas.

Microsoft, one of the founding partners for this initiative, has already admitted that “US authorities can compel access to data held by American cloud providers, regardless of where that data physically resides”, further acknowledging that the company will honour any data requests from the US state, regardless of where in the world the data is housed. Is this the sovereignty and privacy the UK government is trying to achieve?

Commercial content rather than quality skills provision

The AI Skills Hub indexes hundreds of AI-related courses. That means the hub, which cost £4.1m to build, is simply a bookmark or affiliate list of online courses and resources that are already available, with seemingly no quality control or oversight. The decision to award the contract to a “Big Four” commercial consultancy, PwC, rather than the proven national data, AI and digital skills providers who tendered, needs to be investigated.

The press releases focus on the “free” element of the training, but 60% of the courses are paid for – including some that are marked as free – creating a deceptive funnel for paid commercial training providers.


The package launched includes 595 courses, but only 14 have been benchmarked by Skills England, and there has been a critical outcry over the dangerously poor quality of many courses, some of which are 10 years old, don’t exist, or are poor-quality AI slop.

An example of why this is so concerning is that many courses are not relevant to the UK. One of the courses promoted has already been shown to misrepresent the UK law on intellectual property, with the course creators later denying they had any contractual arrangement with the site and admitting that they were “not consulted before our materials were posted and linked from there”.

Warnings on the need for public AI literacy provision ignored

Aside from concerns over the standards, safety, sovereignty and cost of the content offered, there is a much bigger issue, which we have been warning about.

Currently, 84% of the UK public feel disenfranchised and excluded from AI decision-making and mistrust key institutions, while 91% prioritise safe and fair usage of AI over economic gain and adoption speed.

In 2021, the UK’s AI Council provided a roadmap for developing the UK’s National AI Strategy. It advised on programmes of public and educational AI literacy beyond teaching technical or practical skills. This call has been repeated, especially in the wake of greater public exposure to generative AI since 2023, which now requires the public to understand not just how to prompt or code, but to use critical thinking to navigate a number of related implications of the technology.

In July 2025, we represented a number of specialists, education experts and public representatives, and wrote an open letter calling for investment in the UK’s AI capabilities beyond being passive users of US tools. Despite initial agreements to meet and discuss from the Department for Education and Department for Science, Innovation and Technology, the offer was rescinded.

Without comprehensive public understanding and sustained engagement, developing AI for public good and maintaining public trust will be a significant challenge. By investing in independent AI literacy initiatives that are accessible to all and not just aimed at onboarding uncritical users and consumers, the UK can help to ensure that its AI future is shaped with the UK public’s benefit at the heart.

Wasted opportunity to develop a beneficial UK approach to AI

We need to have greater national ambition than simply providing skills training. That the only substandard skills provision available is provided – at great public cost – by those with commercial interests in controlling how people think about and use AI is a further insult.

Indeed, Kendall’s claim that AI has the potential to add £400bn to the economy by 2030 is lifted from a report produced by a sector consultancy that focuses only on the positive impact of Google technologies in the UK. Her announcement leaned heavily on claims such as “AI is now the engine of economic power and of hard power”, which come from a Silicon Valley playbook.

The focus on practical skills undermines the nation’s AI and tech sovereignty and harms the economy, with money leaving the country to fund big tech. It entrenches political disenfranchisement, with decisions about AI framed as too complex for the general population to meaningfully engage with. And it rests on fictitious narratives about inevitable big tech AI futures, in which public voice and public good are irrelevant.

If you wish to sign a second version of the open letter, which we are currently drafting, or to submit a critical AI literacy resource to We and AI’s resource hub, contact us here.

This article is co-authored by:

  • Tania Duarte, founder, We and AI
  • Bruna Martins, director at Tecer Digital
  • Dr Elinor Carmi, senior lecturer in data politics and social justice, City St. George’s University of London
  • Dr Mark Wong, head of social and urban policy, University of Glasgow
  • Dr Susan Oman, senior lecturer, data, AI & society, The University of Sheffield
  • Ismael Kherroubi Garcia, founder & CEO, Kairoi
  • Cinzia Pusceddu, senior fellow of the Higher Education Academy, independent researcher
  • Dylan Orchard, postgraduate researcher, King’s College London
  • Tim Davies, director of research & practice, Connected by Data
  • Steph Wright, co-founder & managing director, Our AI Collective




Epstein Files Reveal Peter Thiel’s Elaborate Dietary Restrictions



Peter Thiel—the billionaire venture capitalist, PayPal and Palantir cofounder, and outspoken commentator on all matters relating to the “Antichrist”—appears at least 2,200 times in the latest batch of files released by the Department of Justice related to convicted sex offender and disgraced financier Jeffrey Epstein.

The tranche of records demonstrates how Epstein managed to cultivate an extensive network of wealthy and influential figures in Silicon Valley. A number of them, including Thiel, continued to interact with Epstein even after his 2008 guilty plea for solicitation of prostitution and procurement of minors to engage in prostitution.

The new files show that Thiel arranged to meet with Epstein several times between 2014 and 2017. “What are you up to on Friday?” Thiel wrote to Epstein on April 5, 2016. “Should we try for lunch?” The bulk of the communications between the two men in the data dump concern scheduling meals, calls, and meetings with one another. Thiel did not immediately return a request for comment from WIRED.

One piece of correspondence stands out for being particularly bizarre. On February 3, 2016, Thiel’s former chief of staff and senior executive assistant, Alisa Bekins, sent an email with the subject line “Meeting – Feb 4 – 9:30 AM – Peter Thiel dietary restrictions – CONFIDENTIAL.” The initial recipient of the email is redacted, but it was later forwarded directly to Epstein.

The contents of the message are also redacted in at least one version of the email chain uploaded by the Justice Department on Friday. However, two other files from what appears to be the same set of messages have less information redacted.

In one email, Bekins listed some two dozen approved kinds of sushi and animal protein, 14 approved vegetables, and no approved fruits for Thiel to eat. “Fresh herbs” and “olive oil” were permitted; however, ketchup, mayonnaise, and soy sauce were to be avoided. Only one actual meal was explicitly outlined: “egg whites or greens/salad with some form of protein,” such as steak, which Bekins included “in the event they eat breakfast.” It’s unclear whether the February 4 meeting ultimately occurred; other emails indicate Thiel got stuck in traffic on his way to meet Epstein that day.

According to a recording of an undated conversation between Epstein and former Israeli Prime Minister Ehud Barak that was also part of the files the DOJ released on Friday, Epstein told Barak that he was hoping to meet Thiel the following week. He added that he was familiar with Thiel’s company Palantir, but proceeded to spell it out loud for Barak as “Pallentier.” Epstein speculated that Thiel may put Barak on the board of Palantir, though there’s no evidence that ever occurred.

“I’ve never met Peter Thiel, and everybody says he sort of jumps around and acts really strange, like he’s on drugs,” Epstein said at one point in the audio recording, referring to Thiel. The former prime minister expressed agreement with Epstein’s assessment.

In 2015 and 2016, Epstein put $40 million in two funds managed by one of Thiel’s investment firms, Valar Ventures, according to The New York Times. Epstein and Thiel continued to communicate and were discussing meeting with one another as recently as January 2019, according to the files released by the DOJ. Epstein committed suicide in his prison cell in August of that year.

Below are Thiel’s dietary restrictions as outlined in the February 2016 email. (The following list has been reformatted slightly for clarity.)

APPROVED SUSHI + APPROVED PROTEIN

  • Kaki Oysters
  • Bass
  • Nigiri
  • Beef
  • Octopus
  • Catfish
  • Sashimi
  • Chicken
  • Scallops
  • Eggs
  • Sea Urchin
  • Lamb
  • Seabass
  • Perch
  • Spicy Tuna w Avocado
  • Squid
  • Turkey
  • Sweet Shrimps
  • Whitefish
  • Tobiko
  • Tuna
  • Yellowtail
  • Trout

APPROVED VEGETABLES

  • Artichoke
  • Avocado
  • Beets
  • Broccoli
  • Brussels sprouts
  • Cabbage
  • Carrots
  • Cucumber
  • Garlic
  • Olives
  • Onions
  • Peppers
  • Salad greens
  • Spinach

APPROVED NUTS

  • Anything unsalted and unroasted
  • Peanuts
  • Pecans
  • Pistachios

CONDIMENTS

  • Most fresh herbs, and olive oil

AVOID

  • Dairy
  • Fruits
  • Gluten
  • Grains
  • Ketchup
  • Mayo
  • Mushroom
  • Processed foods
  • Soy Sauce
  • Sugar
  • Tomato
  • Vinegar

MEAL SUGGESTIONS

  • Breakfast: Egg whites or greens/salad with some form of protein (steak, etc.)




Elon Musk Is Rolling xAI Into SpaceX—Creating the World’s Most Valuable Private Company



Elon Musk’s rocket and satellite company SpaceX is acquiring his AI startup xAI, the centibillionaire announced on Monday. In a blog post, Musk said the acquisition was warranted because global electricity demand for AI cannot be met with “terrestrial solutions,” and Silicon Valley will soon need to build data centers in space to power its AI ambitions.

“In the long term, space-based AI is obviously the only way to scale,” Musk wrote. “The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space. I mean, space is called ‘space’ for a reason.”

The deal, which pulls together two of Musk’s largest private ventures, values the combined entity at $1.25 trillion, making it the most valuable private company in the world, according to a report from Bloomberg.

SpaceX had been preparing to go public later this year before the xAI acquisition was announced. The space firm’s plans for an initial public offering are still on, according to Bloomberg.

In December, SpaceX told employees that it would buy insider shares in a deal that would value the rocket company at $800 billion, according to The New York Times. Last month, xAI announced that it had raised $20 billion from investors, bringing the company’s valuation to roughly $230 billion.

This isn’t the first time Musk has sought to consolidate parts of his vast business empire, which is largely privately owned and includes xAI, SpaceX, the brain interface company Neuralink, and the tunnel transportation firm the Boring Company.

Last year, xAI acquired Musk’s social media platform, X, formerly known as Twitter, in a deal that valued the combined entity at more than $110 billion. Since then, xAI’s core product, Grok, has become further integrated into the social media platform. Grok is featured prominently in various X features, and Musk has claimed the app’s content-recommendation algorithm is powered by xAI’s technology.

A decade ago, Musk also used shares of his electric car company Tesla to purchase SolarCity, a renewable energy firm that was run at the time by his cousin Lyndon Rive.

The xAI acquisition demonstrates how Musk can use his expansive network of companies to help power his own often grandiose visions of the future. Musk said in the blog post that SpaceX will immediately focus on launching satellites into space to power AI development on Earth, but eventually, the space-based data centers he envisions building could power civilizations on other planets, such as Mars.

“This marks not just the next chapter, but the next book in SpaceX and xAI’s mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars,” Musk said in the blog post.


