Graffiti framework lets people personalize online social spaces while staying connected with others
Say a local concert venue wants to engage its community by giving social media followers an easy way to share and comment on new music from emerging artists. Rather than working within the constraints of existing social platforms, the venue might want to create its own social app with the functionality that would be best for its community. But building a new social app from scratch involves many complicated programming steps, and even if the venue can create a customized app, the organization’s followers may be unwilling to join the new platform because it could mean leaving their connections and data behind.
Now, researchers from MIT have launched a framework called Graffiti that makes building personalized social applications easier, while allowing users to migrate between multiple applications without losing their friends or data.
“We want to empower people to have control over their own designs rather than having them dictated from the top down,” says electrical engineering and computer science graduate student Theia Henderson.
Henderson and her colleagues designed Graffiti with a flexible structure so individuals have the freedom to create a variety of customized applications, from messenger apps like WhatsApp to microblogging platforms like X to location-based social networking sites like Nextdoor, all using only front-end development tools like HTML.
The protocol ensures all applications can interoperate, so content posted on one application can appear on any other application, even those with disparate designs or functionality. Importantly, Graffiti users retain control of their data, which is stored on a decentralized infrastructure rather than being held by a specific application.
While the pros and cons of implementing Graffiti at scale remain to be fully explored, the researchers hope this new approach can someday lead to healthier online interactions.
“We’ve shown that you can have a rich social ecosystem where everyone owns their own data and can use whatever applications they want to interact with whoever they want in whatever way they want. And they can have their own experiences without losing connection with the people they want to stay connected with,” says David Karger, professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Henderson, the lead author, and Karger are joined by MIT Research Scientist David D. Clark on a paper about Graffiti, which will be presented at the ACM Symposium on User Interface Software and Technology.
Personalized, integrated applications
With Graffiti, the researchers had two main goals: to lower the barrier to creating personalized social applications and to enable those personalized applications to interoperate without requiring permission from developers.
To make the design process easier, they built a collective back-end infrastructure that all applications access to store and share content. This means developers don’t need to write any complex server code. Instead, designing a Graffiti application is more like making a website using popular tools like Vue.
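The shape of that shared infrastructure can be sketched in a few lines. This is a minimal illustrative stand-in, not Graffiti's actual API: the names `putObject` and `discover`, and the object shapes, are assumptions made for the example. The point it shows is that two different "applications" are just different front-end queries over one shared store, with no server code.

```javascript
// Hypothetical stand-in for Graffiti's shared back end: a single store
// that every application reads and writes. Names are illustrative.
const store = [];

function putObject(actor, value, channels) {
  // Every piece of content is a plain data object tagged with channels.
  const obj = { actor, value, channels, published: Date.now() };
  store.push(obj);
  return obj;
}

function discover(channels) {
  // An application's feed is just a query over the shared store.
  return store.filter(obj =>
    obj.channels.some(c => channels.includes(c))
  );
}

// Two different front ends share the same data with no server code:
putObject("venue", { content: "New single from a local artist" }, ["venue-app"]);
const venueFeed = discover(["venue-app"]); // the venue's app sees the post
const otherFeed = discover(["microblog"]); // an unrelated app's channel is empty
```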
Developers can also easily introduce new features and new types of content, giving them more freedom and fostering creativity.
“Graffiti is so straightforward that we used it as the infrastructure for the intro to web design class I teach, and students were able to write the front-end very easily to come up with all sorts of applications,” Karger says.
The open, interoperable nature of Graffiti means no one entity has the power to set a moderation policy for the entire platform. Instead, multiple competing and contradictory moderation services can operate, and people can choose the ones they like.
Graffiti uses the idea of “total reification,” where every action taken in Graffiti, such as liking, sharing, or blocking a post, is represented and stored as its own piece of data. Each application can then interpret or ignore that data according to its own rules.
For instance, if an application designates a certain user as a moderator, posts blocked by that user won’t appear in the application. In another application with different rules, where that person isn’t considered a moderator, the same posts might appear with a warning label, or with no flag at all.
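That moderation model can be sketched concretely. In the snippet below, the object shapes and field names are illustrative assumptions, not Graffiti's real data format: a post and a block action live side by side as plain data, and each front end decides whose block actions to honor.

```javascript
// Sketch of "total reification": posts and moderation actions are both
// stored as plain data objects. Shapes here are illustrative assumptions.
const objects = [
  { id: "post1", actor: "alice", value: { content: "Check out this track" } },
  { id: "act1",  actor: "bob",   value: { activity: "block", target: "post1" } },
];

// Build a feed, honoring block actions only from the actors this
// particular application treats as moderators.
function renderFeed(moderators) {
  const blocked = new Set(
    objects
      .filter(o => o.value.activity === "block" && moderators.includes(o.actor))
      .map(o => o.value.target)
  );
  return objects.filter(o => o.value.content !== undefined && !blocked.has(o.id));
}

const appOne = renderFeed(["bob"]); // bob is a moderator here: post hidden
const appTwo = renderFeed([]);      // bob is not a moderator here: post shown
```

The block action itself is never deleted or enforced globally; it sits in the shared store, and honoring it is a per-application choice.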
“Theia’s system lets each person pick their own moderators, avoiding the one-size-fits-all approach to moderation taken by the major social platforms,” Karger says.
But at the same time, having no central moderator means there is no one to remove content from the platform that might be offensive or illegal.
“We need to do more research to understand if that is going to provide real, damaging consequences or if the kind of personal moderation we created can provide the protections people need,” he adds.
Empowering social media users
The researchers also had to overcome a problem known as context collapse, which conflicts with their goal of interoperation.
For instance, context collapse would occur if a person’s Tinder profile appeared on LinkedIn, or if a post intended for one group, like close friends, created conflict with another group, such as family members. Context collapse can lead to anxiety and have social repercussions for the user and their different communities.
“We realize that interoperability can sometimes be a bad thing. People have boundaries between different social contexts, and we didn’t want to violate those,” Henderson says.
To avoid context collapse, the researchers designed Graffiti so all content is organized into distinct channels. Channels are flexible and can represent a variety of contexts, such as people, applications, or locations.
If a user’s post appears in an application channel but not their personal channel, others using that application will see the post, but those who only follow this user will not.
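The channel mechanism can be sketched as a visibility filter. As before, the function names and the channel-naming convention (`"actor:alice"` for a personal channel) are assumptions made for this example, not Graffiti's real interface.

```javascript
// Sketch of channels as context boundaries: a post is visible only to
// queries that mention one of its channels. Names are illustrative.
const store = [];

function post(actor, content, channels) {
  store.push({ actor, content, channels });
}

function query(channels) {
  return store.filter(o => o.channels.some(c => channels.includes(c)));
}

// Alice posts into the concert app's channel, but not her personal one:
post("alice", "Open mic tonight", ["concert-app"]);

const appUsers = query(["concert-app"]);  // users of the app see the post
const followers = query(["actor:alice"]); // her personal followers do not
```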
“Individuals should have the power to choose the audience for whatever they want to say,” Karger adds.
The researchers created multiple Graffiti applications to showcase personalization and interoperability, including a community-specific application for a local concert venue, a text-centric microblogging platform modeled on X, a Wikipedia-like application that enables collective editing, and a real-time messaging app with multiple moderation schemes modeled on WhatsApp and Slack.
“It also leaves room to create so many social applications people haven’t thought of yet. I’m really excited to see what people come up with when they are given full creative freedom,” Henderson says.
In the future, she and her colleagues want to explore additional social applications they could build with Graffiti. They also intend to incorporate tools like graphical editors to simplify the design process. In addition, they want to strengthen Graffiti’s security and privacy.
And while there is still a long way to go before Graffiti could be implemented at scale, the researchers are currently running a user study as they explore the potential positive and negative impacts the system could have on the social media landscape.
More information:
Theia Henderson et al, Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications, Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (2025). DOI: 10.1145/3746059.3747627
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
Citation:
Graffiti framework lets people personalize online social spaces while staying connected with others (2025, October 1)
retrieved 1 October 2025
from https://techxplore.com/news/2025-10-graffiti-framework-people-personalize-online.html
Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo
Anthropic “has not satisfied the stringent requirements” to temporarily lose the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn’t immediately clear how the conflicting preliminary judgments would be resolved.
The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC, courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.
“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.
The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.
Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC, court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”
The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted a statement on X. “Today’s DC Circuit stay allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military readiness,” he wrote.
“Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision.
Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon’s action against Anthropic “chills professional debate” about the performance of AI systems.
Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company’s Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government.
Final decisions in the company’s two lawsuits could be months away. The Washington court is scheduled to hear oral arguments on May 19.
The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can’t purposely try to sabotage its AI tools during the transition.
Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.
As the Strait of Hormuz Reopens, Global Shipping Will Take Months to Recover
As the world held its breath on Tuesday night, news of a ceasefire and the potential reopening of the Strait of Hormuz brought a collective sigh of relief. But with shipments stalled in the strait for over a month, the disruption to global shipping will not resolve immediately.
“Traffic through Hormuz dropped by about 95 percent [during this conflict]. As a result, prices surged, and not just for crude oil but also for refined products like jet fuel, diesel, and gas oil,” says Carsten Ladekjær, CEO at Glander International Bunkering, which specializes in supplying fuel and lubricants to the global shipping industry.
The impact has been uneven across regions. Countries heavily dependent on Middle Eastern energy—particularly in Asia—have been most affected. India sources around 55 percent of its energy imports from the region, China about 50 percent, Japan 93 percent, South Korea 67 percent, and Singapore 70 percent, according to Ladekjær.
While the ceasefire signals a possible reopening, key details remain unclear. “Even with a ceasefire, reopening won’t be immediate,” Ladekjær says. “There’s a backlog, with ships waiting to leave, and likely a controlled process for who gets out first. Iran still appears to be managing that.”
Energy markets reacted quickly. Brent crude fell to around $94 from $110 earlier in the week—a drop of roughly 15 percent.
“Refined products like diesel and jet fuel have dropped even more, because markets are forward-looking—they price in expectations,” says Arne Lohmann Rasmussen, chief analyst and head of research at Global Risk Management. “But we’re still well above prewar levels, which were around $60 to $70.”
A System Under Backlog
As of this writing, more than 800 cargo ships and tankers are stuck inside the Persian Gulf, with over 1,000 additional vessels, including hundreds of tankers, waiting on both sides of the Strait of Hormuz.
Under normal conditions, roughly 150 vessels pass through the strait daily. Experts say clearing the backlog will take time, as ships must be sequenced through, refueled, and repositioned.
“That’s a logistical nightmare. We don’t yet know what the current capacity will be, especially from a security standpoint,” says Lohmann Rasmussen. “It’s not something that can be solved overnight. There are logistical issues, security issues, and even communication challenges.”
Though the market has already seen a correction, that doesn’t mean prices at the pump or in storage will drop immediately.
Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kids’ Table
Meta on Wednesday announced its first major model since CEO Mark Zuckerberg rebooted the company’s AI efforts last year under a new division called Meta Intelligence Labs. The model, called Muse Spark, is a step toward Zuckerberg’s vision of “personal superintelligence,” the company says, and for now, it will remain closed source.
Zuckerberg said in a social media post that Meta’s goal is to build AI products that “don’t just answer your questions but act as agents that do things for you.” The billionaire added that he is “optimistic that this will support a wave of creativity, entrepreneurship, growth, and health.”
Muse Spark certainly appears to be a major upgrade over Meta’s last big release, Llama 4, which came out in April 2025 and was viewed in the tech industry as a disappointment with middling performance.
Meta is making Muse Spark available via meta.ai and through the Meta AI app. Unlike Llama, Muse Spark is not being released for others to download, though the company says it hopes to open-source future versions. Meta was previously seen as a leader in open source AI and made its Llama models available for researchers, startups, and hobbyists to download and customize.
“Looking ahead, we plan to release increasingly advanced models that push the frontier of intelligence and capabilities, including new open source models,” Zuckerberg wrote.
Meta’s self-reported benchmark scores for Muse Spark suggest the model is better at some tasks than the latest models from OpenAI, Anthropic, Google, and xAI. “Muse Spark is the first step on our scaling ladder,” Meta said in a blog post, referring to its goal of building AI that far outstrips human abilities.
Artificial Analysis, an AI benchmarking company that got early access to Muse Spark, said on social media that the new model is one of the best it has tested. “Muse Spark scores 52 on the Artificial Analysis Intelligence Index, placing it within the top 5 models we have benchmarked,” the company said in its post, citing its own rubric for scoring models that combines various third-party benchmarks.
Meta says the new model is natively multimodal, meaning that it has been trained to handle images, audio, and video as well as text. Muse Spark also features advanced reasoning capabilities, a key feature of the best AI models available today, and it was built from scratch to have strong coding capabilities. Meta described these features as the foundation for building ever-more capable models using modern machine-learning methods.
Meta says that it built Muse Spark to be especially good at providing medical advice. “To improve Muse Spark’s health reasoning capabilities, we collaborated with over 1,000 physicians to curate training data that enables more factual and comprehensive responses,” the company said in its blog post.
Zuckerberg has spent a small fortune overhauling Meta’s artificial intelligence efforts since Llama 4 came out. The tech giant poached top AI engineers from competing firms with compensation packages worth hundreds of millions. It also spent billions to acquire or make major investments in a number of AI startups. Meta recruited Alexandr Wang, the CEO of Scale, an AI training company, to lead its AI efforts after investing $14.3 billion in the company.
Meta also published a document outlining its vision for safely scaling AI models to superhuman levels of performance. The company’s Advanced AI Scaling Framework outlines safety checks that the company will perform as its models become increasingly advanced.