Tech
Inspired by the EU: Sweden eyes open standard for encrypted chat services | Computer Weekly
Government departments in Sweden are considering deploying “open network” encrypted messaging services as an alternative to proprietary collaboration tools.
Some 40 of Sweden’s government agencies are collaborating on a project that could see them rolling out a secure messaging service across government departments.
The initiative comes as European governments are accelerating the deployment of “sovereign” technologies that allow them to be less reliant on “siloed” software from technology suppliers.
The trend has been given new impetus by the war in Ukraine and growing political upheaval in the US.
eSam, a membership organisation for government agencies interested in digital technology, has proposed developing a government messaging service based on Matrix, an open network offering secure, decentralised messaging.
Replacing emails and phone calls
Kenneth Edwall, a government employee and member of the eSam working group on the project, told Computer Weekly that one of the aims of the proposal is to make it possible for government departments to communicate more efficiently.
“We as agencies need to collaborate with each other,” he said. “Having email is not the best tool, and having phone calls is not a good method either.”
When eSam first began evaluating collaboration technology in 2021, government departments in Sweden had standardised on Skype for Business as a collaboration tool across government.
The tool was easy to use, and government employees could collaborate with colleagues by searching for their email address and starting a chat.
Skype was deployed in a decentralised way, giving agencies the freedom to buy the service from suppliers or run it in their own datacentres.
This created a robust, decentralised network, said Edwall. “If you have 100 different deployments of Skype, it’s hard to target them all in a cyber attack,” he added.
Multiple messaging services
Since then, partly as a result of Microsoft phasing out Skype in favour of its Teams software, government departments have taken up a range of incompatible messaging apps. They include Rocket.Chat, Teams, Zoom, the open source platform Mattermost, the video platform Jitsi Meet, and Element.
“We are now seeing at least five or six messaging tools being chosen by authorities today, and if it continues, we are going to have a big mess of fragmented systems,” said Edwall. “There is no open protocol that allows them to interoperate with each other.”
Imagine taking email and splitting it among five or six different email suppliers, each incompatible with the others. “That is what we have today with messaging,” he added.
This means government employees in Sweden are having to learn several collaboration tools so that they can communicate with people in other parts of government.
The security risks
The apps also pose security risks: some collaboration tools fall outside existing security safeguards, and people who leave their jobs may remain connected to government chat groups.
In January this year, eSam began a review to look at how to solve these problems. One option was to do nothing and leave it to technology providers to develop interoperable messaging services, but it ruled that out.
“We don’t believe that the entire market wants to be interoperable,” said Edwall. “We believe that some of the larger vendors have an incentive not to be interoperable with other vendors.”
Another idea was for Swedish government departments to standardise on a proprietary platform, such as Zoom or Microsoft Teams. However, under Swedish law, government departments cannot legally choose to buy technology from a favoured supplier. Each contract has to go out to tender.
Federated open source messaging
Eventually, eSam settled on an open-source federated messaging standard that allows government departments to build interoperable collaboration platforms, either in-house, or bought in from a provider.
“The key is we are not taking sides in regards to public cloud, private cloud or on premise,” said Edwall. “We are not taking sides on proprietary or open source solutions, but we want them all to have the same open protocol that allows them to interact with each other.”
The eSam members looked at a variety of options, including the Matrix protocol, Signal, XMPP and others, before deciding on Matrix.
“We had meetings with other public sector authorities in the EU [European Union] and we realised that most of the authorities we talked to were looking at the Matrix protocol,” he said. “Some of them were already in it and others were evaluating it.”
For eSam, Matrix offers a number of advantages. First, it is federated, which means the Matrix network relies on decentralised nodes. If one fails, or is hit by a cyber attack, messages can still re-route to the right destination.
Second, different government agencies can choose to deploy the technology in different ways. “You can also decide how you want to deploy your setup,” said Edwall. “You could use public cloud services or private on-premise services.”
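To illustrate what the open protocol buys here, the sketch below (not part of eSam’s report) shows how any client could post a message to a Matrix room through the standard client-server API; the homeserver address, user account and room ID are placeholder values. Because a room is replicated across the homeservers of its participants, the same call works whether an agency runs its own server or buys one from a provider.

```python
# Minimal sketch: sending a message through the Matrix client-server API.
# The homeserver URL, credentials and room ID are placeholders, not part of
# any real government deployment.
import uuid
import requests

HOMESERVER = "https://matrix.example-agency.se"  # hypothetical homeserver

# Log in with a password account to obtain an access token.
login = requests.post(
    f"{HOMESERVER}/_matrix/client/v3/login",
    json={
        "type": "m.login.password",
        "identifier": {"type": "m.id.user", "user": "alice"},
        "password": "correct horse battery staple",
    },
)
login.raise_for_status()
access_token = login.json()["access_token"]

# Send a plain-text message into a room. The room can include users whose
# accounts live on other organisations' homeservers; each server relays the
# event to the others, which is what makes the network federated.
room_id = "!abc123:matrix.example-agency.se"  # placeholder room ID
txn_id = uuid.uuid4().hex  # client-generated transaction ID for idempotency

resp = requests.put(
    f"{HOMESERVER}/_matrix/client/v3/rooms/{room_id}/send/m.room.message/{txn_id}",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"msgtype": "m.text", "body": "Hello from another agency"},
)
resp.raise_for_status()
print("event id:", resp.json()["event_id"])
```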
European governments are using Matrix
Matrix is widely used by the public sector in France, Switzerland – where it has been championed by Swiss Post – and Germany. The European Commission and the Netherlands also have plans to roll out the technology.
The team has prepared a report that it will present to the eSam board in November.
Its recommendations are to build on open standards and protocols to ensure government agencies can avoid being locked into one supplier, and to give organisations the ability to choose how they want to deliver technology, either through public cloud, private cloud, on-premise systems or third-party suppliers.
If the plan is approved, the move to Matrix-based messaging is likely to take years – or even decades.
“We don’t want authorities to just throw out their current communication, because they might have a five or 10-year contract,” said Edwall.
“We want the market to shift so the vendors understand what they gain from using an open standard, similar to the open standards we use in email,” he added. “We want the market to understand that they should start adapting their products.”
Tech
How Prankster Oobah Butler Convinced Venture Capitalists to Give Him Over $1 Million
Not long into his new documentary, Oobah Butler tells the cofounder of his newly minted company, Drops, that they should create a piece of luxury luggage that “looks like a bomb” and will sell for $200,000.
Immediately, I’m thinking his quest to get £1 million in 90 days might have come to an early end.
But I’m wrong.
Butler is a British prankster documentarian who is known for his stunts, like managing to get Amazon to sell its drivers’ urine as energy drinks or creating a fake restaurant called the Shed and gaming TripAdvisor to make it the top-rated London restaurant on the platform. His latest documentary, made for the UK’s Channel 4, is called How I Made £1 Million in 90 Days. Set in London and New York, it takes on the worlds of startups, venture capital, crypto, and what ultimately comes across as a lot of bullshitting, in the name of striking it rich quick.
Butler opens the film by saying, as someone who didn’t grow up with money and isn’t particularly motivated by it, he’s fascinated by the fact that people “idolize” wealthy entrepreneurs.
“It came from a place of wanting to understand why … everyone is so obsessed with money in this way,” he tells WIRED. “And I’m not talking about survival. I’m not talking about affording to exist. I’m talking about … being addicted to the making of money.”
His only rules for getting £1 million ($1.3 million USD) are that he’s not allowed to break the law and whatever costs he incurs trying to make it are his to bear. He employs several strategies to rack up the cash, including simply asking rich people for it (this doesn’t go well) and creating hype for crypto company UNFK by doing things like tricking bankers into committing crimes on camera. He also creates Drops, a company that makes news for its controversial stunts and then tries to capitalize on the attention by selling “very overpriced” items.
Butler seeks the advice of Venmo cofounder Iqram Magdon-Ismail, who quickly declares himself Butler’s cofounder on Drops and seems very enthusiastic at first, musing that the company is already “worth at least $10 million” just because the two of them are attached to it, and that they might be able to sell out Madison Square Garden in a year’s time to tell their story. Their brainstorming session includes schemes for buying the first piece of land on Mars and selling the opportunity to name the “first branded species.” But after Butler suggests the bomb-like suitcase and a pair of “real life ad blocking sunglasses” that remove the wearer’s vision entirely, Magdon-Ismail temporarily ghosts him.
Butler then embarks on a memecoin adventure that goes south, before coming back to Drops and launching the “first legal child sweatshop in Britain in over a century.” He finds a loophole to avoid paying the child workers, reasoning that because he is filming the kids for the documentary, they are technically performers. His underage staff help him come up with marketing ideas to sell bespoke soccer jerseys featuring a fake religious cigarette brand called Holy Smokes. Though the clothing line gets coverage in GQ, Butler doesn’t sell anything close to £1 million worth of jerseys.
Tech
Anthropic inks multibillion-dollar deal with Google for AI chips
Artificial intelligence company Anthropic has signed a multibillion-dollar deal with Google to acquire more of the computing power needed for the startup’s chatbot, Claude.
Anthropic said Thursday the deal will give it access to up to 1 million of Google’s AI computer chips and is “worth tens of billions of dollars and is expected to bring well over a gigawatt of capacity online in 2026.”
A gigawatt, when used in reference to a power plant, is enough to power roughly 350,000 homes, according to the U.S. Energy Information Administration.
Google calls its specialized AI chips Tensor Processing Units, or TPUs. Anthropic’s AI systems also run on chips from Nvidia and the cloud computing division of Amazon, Anthropic’s first big investor and its primary cloud provider.
The privately held Anthropic, founded by ex-OpenAI leaders in 2021, last month put its value at $183 billion after raising another $13 billion in investments. Its AI assistant Claude competes with OpenAI’s ChatGPT and others in appealing to business customers using it to assist with coding and other tasks.
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Tech
How to ensure youth, parents, educators and tech companies are on the same page on AI
Artificial intelligence is now part of everyday life. It’s in our phones, schools and homes. For young people, AI shapes how they learn, connect and express themselves. But it also raises real concerns about privacy, fairness and control.
AI systems often promise personalization and convenience. But behind the scenes, they collect vast amounts of personal data, make predictions and influence behavior, without clear rules or consent.
This is especially troubling for youth, who are often left out of conversations about how AI systems are built and governed.
Concerns about privacy
My research team conducted a national study and heard from youth aged 16 to 19 who use AI daily—on social media, in classrooms and in online games.
They told us they want the benefits of AI, but not at the cost of their privacy. While they value tailored content and smart recommendations, they feel uneasy about what happens to their data.
Many expressed concern about who owns their information, how it is used and whether they can ever take it back. They are frustrated by long privacy policies, hidden settings and the sense that you need to be a tech expert just to protect yourself.
As one participant said, “I am mainly concerned about what data is being taken and how it is used. We often aren’t informed clearly.”
Uncomfortable sharing their data
Young people were the most uncomfortable group when it came to sharing personal data with AI. Even when they got something in return, like convenience or customization, they didn’t trust what would happen next. Many worried about being watched, tracked or categorized in ways they can’t see.
This goes beyond technical risks. It’s about how it feels to be constantly analyzed and predicted by systems you can’t question or understand.
AI doesn’t just collect data; it draws conclusions, shapes online experiences and influences choices. That can feel like manipulation.
Parents and teachers are concerned
Adults (educators and parents) in our study shared similar concerns. They want better safeguards and stronger rules.
But many admitted they struggle to keep up with how fast AI is moving. They often don’t feel confident helping youth make smart choices about data and privacy.
Some saw this as a gap in digital education. Others pointed to the need for plain-language explanations and more transparency from the tech companies that build and deploy AI systems.
Professionals focus on tools, not people
The study found AI professionals approach these challenges differently. They think about privacy in technical terms such as encryption, data minimization and compliance.
While these are important, they don’t always align with what youth and educators care about: trust, control and the right to understand what’s going on.
Companies often see privacy as a trade-off for innovation. They value efficiency and performance and tend to trust technical solutions over user input. That can leave out key concerns from the people most affected, especially young users.
Power and control lie elsewhere
AI professionals, parents and educators influence how AI is used. But the biggest decisions happen elsewhere. Powerful tech companies design most digital platforms and decide what data is collected, how systems work and what choices users see.
Even when professionals push for safer practices, they work within systems they did not build. Weak privacy laws and limited enforcement mean that control over data and design stays with a few companies.
This makes it even harder to ensure transparency and hold platforms accountable.
What’s missing? A shared understanding
Right now, youth, parents, educators and tech companies are not on the same page. Young people want control, parents want protection and professionals want scalability.
These goals often clash, and without a shared vision, privacy rules are inconsistent, hard to enforce or simply ignored.
Our research shows that ethical AI governance can’t be solved by one group alone. We need to bring youth, families, educators and experts together to shape the future of AI.
The PEA-AI model
To guide this process, we developed a framework called PEA-AI: Privacy–Ethics Alignment in Artificial Intelligence. It helps identify where values collide and how to move forward. The model highlights four key tensions:
- Control versus trust: Youth want autonomy. Developers want reliability. We need systems that support both.
- Transparency versus perception: What counts as “clear” to experts often feels confusing to users.
- Parental oversight versus youth voice: Policies must balance protection with respect for youth agency.
- Education versus awareness gaps: We can’t expect youth to make informed choices without better tools and support.
What can be done?
Our research points to six practical steps:
- Simplify consent. Use short, visual, plain-language forms. Let youth update settings regularly.
- Design for privacy. Minimize data collection. Make dashboards that show users what’s being stored.
- Explain the systems. Provide clear, non-technical explanations of how AI works, especially when used in schools.
- Hold systems accountable. Run audits, allow feedback and create ways for users to report harm.
- Teach privacy. Bring AI literacy into classrooms. Train teachers and involve parents.
- Share power. Include youth in tech policy decisions. Build systems with them, not just for them.
AI can be a powerful tool for learning and connection, but it must be built with care. Right now, our research suggests young people don’t feel in control of how AI sees them, uses their data or shapes their world.
Ethical AI starts with listening. If we want digital systems to be fair, safe and trusted, we must give youth a seat at the table and treat their voices as essential, not optional.
This article is republished from The Conversation under a Creative Commons license. Read the original article.