Researchers propose a new model for legible, modular software

Credit: CC0 Public Domain

Coding with large language models (LLMs) holds huge promise, but it also exposes some long-standing flaws in software: code that’s messy, hard to change safely, and often opaque about what’s really happening under the hood. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are charting a more “modular” path ahead.

Their new approach breaks a system into “concepts,” separate pieces that each do one job well, and “synchronizations,” explicit rules that describe exactly how those pieces fit together. The result is software that’s more modular, transparent, and easier to understand.

A small domain-specific language (DSL) makes it possible to express synchronizations simply, in a form that LLMs can reliably generate. In a real-world case study, the team showed how this method can bring together features that would otherwise be scattered across multiple services. The paper is published in the Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.

The team, including Daniel Jackson, an MIT professor of electrical engineering and computer science (EECS) and CSAIL associate director, and Eagon Meng, an EECS Ph.D. student, CSAIL affiliate, and designer of the new synchronization DSL, explores this approach in their paper “What You See Is What It Does: A Structural Pattern for Legible Software,” which they presented at the SPLASH conference in Singapore in October.

The challenge, they explain, is that in most modern systems, a single feature is never fully self-contained. A “share” button on a social platform like Instagram, for example, doesn’t live in just one service. Its functionality is split across code that handles posting, notifications, user authentication, and more. All these pieces, despite being scattered across the code, must be carefully aligned, and any change risks unintended side effects elsewhere.

Jackson calls this “feature fragmentation,” a central obstacle to software reliability. “The way we build software today, the functionality is not localized. You want to understand how ‘sharing’ works, but you have to hunt for it in three or four different places, and when you find it, the connections are buried in low-level code,” says Jackson.

Concepts and synchronizations are meant to tackle this problem. A concept bundles up a single, coherent piece of functionality, like sharing, liking, or following, along with its state and the actions it can take. Synchronizations, on the other hand, describe at a higher level how those concepts interact.

Rather than writing messy low-level integration code, developers can use a small domain-specific language to spell out these connections directly. In this DSL, the rules are simple and clear: one concept’s action can trigger another, so that a change in one piece of state can be kept in sync with another.
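
The article does not reproduce the DSL itself, so the TypeScript sketch below is only a loose illustration of the idea, with hypothetical names (SharingConcept, NotificationConcept, perform) that are not taken from the paper: each concept keeps its own state and actions and knows nothing about the others, while the synchronizations are declared separately as data rather than buried in integration code.

```typescript
// Minimal sketch of concepts and synchronizations (hypothetical names, not the paper's DSL).

type Payload = Record<string, string>;

// Each concept manages only its own state and exposes named actions.
class SharingConcept {
  shares: { postId: string; userId: string }[] = [];
  share(p: Payload): Payload {
    this.shares.push({ postId: p.postId, userId: p.userId });
    return p;
  }
}

class NotificationConcept {
  inbox: string[] = [];
  notify(p: Payload): Payload {
    this.inbox.push(`${p.userId} shared ${p.postId}`);
    return p;
  }
}

// A synchronization is declared as data: "when this concept's action fires,
// invoke that concept's action with the same payload."
interface SyncRule {
  when: { concept: string; action: string };
  then: { concept: string; action: string };
}

const concepts: Record<string, any> = {
  Sharing: new SharingConcept(),
  Notification: new NotificationConcept(),
};

const syncs: SyncRule[] = [
  {
    when: { concept: "Sharing", action: "share" },
    then: { concept: "Notification", action: "notify" },
  },
];

// A tiny runtime: perform an action, then fire any synchronized actions.
function perform(concept: string, action: string, payload: Payload): void {
  const result: Payload = concepts[concept][action](payload);
  for (const rule of syncs) {
    if (rule.when.concept === concept && rule.when.action === action) {
      concepts[rule.then.concept][rule.then.action](result);
    }
  }
}

perform("Sharing", "share", { postId: "post-42", userId: "alice" });
console.log(concepts.Notification.inbox); // ["alice shared post-42"]
```

Neither concept references the other; the link between sharing and notifying lives entirely in the declared rule, which is the property the researchers argue makes such connections legible.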

“Think of concepts as modules that are completely clean and independent. Synchronizations then act like contracts—they say exactly how concepts are supposed to interact. That’s powerful because it makes the system both easier for humans to understand and easier for tools like LLMs to generate correctly,” says Jackson.

“Why can’t we read code like a book? We believe that software should be legible and written in terms of our understanding: our hope is that concepts map to familiar phenomena, and synchronizations represent our intuition about what happens when they come together,” says Meng.

The benefits extend beyond clarity. Because synchronizations are explicit and declarative, they can be analyzed, verified, and of course generated by an LLM. This opens the door to safer, more automated software development, where AI assistants can propose new features without introducing hidden side effects.

In their case study, the researchers assigned each feature, such as liking, commenting, and sharing, to a single concept, similar to a microservices architecture but more modular. Without this pattern, these features were spread across many services, making them hard to locate and test. Using the concepts-and-synchronizations approach, each feature became centralized and legible, while the synchronizations spelled out exactly how the concepts interacted.

The study also showed how synchronizations can factor out common concerns like error handling, response formatting, or persistent storage. Instead of embedding these details in every service, a synchronization can handle them once, ensuring consistency across the system.
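
As a rough illustration of how such a cross-cutting concern might be factored out, the TypeScript sketch below, again with hypothetical names (ErrorConcept, syncWithErrorHandling) that are assumptions rather than the paper's mechanism, routes every failure to a single error-handling concept instead of repeating try/catch logic inside each service.

```typescript
// Sketch only: error handling expressed once, outside the individual concepts.

type Payload = Record<string, string>;
type Handler = (payload: Payload) => void;

// One concept that owns error handling for the whole system.
class ErrorConcept {
  log: string[] = [];
  record(action: string, err: unknown): void {
    this.log.push(`${action} failed: ${String(err)}`);
  }
}

const errors = new ErrorConcept();

// A synchronization-style wrapper: any concept action that throws is routed
// to the Error concept, so the concepts themselves stay free of try/catch code.
function syncWithErrorHandling(name: string, action: Handler): Handler {
  return (payload) => {
    try {
      action(payload);
    } catch (err) {
      errors.record(name, err); // handled once, consistently, for every action
    }
  };
}

// Usage with a deliberately failing action.
const failingShare: Handler = () => {
  throw new Error("storage unavailable");
};
const safeShare = syncWithErrorHandling("Sharing.share", failingShare);

safeShare({ postId: "post-42", userId: "alice" });
console.log(errors.log); // ["Sharing.share failed: Error: storage unavailable"]
```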

More advanced directions are also possible. Synchronizations could coordinate distributed systems, keeping replicas on different servers in step, or allow shared databases to interact cleanly. Weakening semantics could enable eventual consistency while still preserving clarity at the architectural level.

Jackson sees potential for a broader cultural shift in software development. One idea is the creation of “concept catalogs,” shared libraries of well-tested, domain-specific concepts. Application development could then become less about stitching code together from scratch and more about selecting the right concepts and writing the synchronizations between them.

“Concepts could become a new kind of high-level programming language, with synchronizations as the programs written in that language. It’s a way of making the connections in software visible,” says Jackson. “Today, we hide those connections in code. But if you can see them explicitly, you can reason about the software at a much higher level. You still have to deal with the inherent complexity of features interacting. But now it’s out in the open, not scattered and obscured.”

“Building software on abstractions from underlying computing machines has burdened the world with software that is all too often costly, frustrating, even dangerous, to understand and use,” says University of Virginia Associate Professor Kevin Sullivan, who wasn’t involved in the research.

“The impacts (such as in health care) have been devastating. Meng and Jackson flip the script and insist on building interactive software on abstractions from human understanding, which they call ‘concepts.’ They combine expressive mathematical logic and natural language to specify such purposeful abstractions, providing a basis for verifying their meanings, composing them into systems, and refining them into programs fit for human use. It’s a new and important direction in the theory and practice of software design that bears watching.”

“It’s been clear for many years that we need better ways to describe and specify what we want software to do,” adds Thomas Ball, Lancaster University honorary professor and University of Washington affiliate faculty, who also wasn’t involved in the research. “LLMs’ ability to generate code has only added fuel to the specification fire. Meng and Jackson’s work on concept design provides a promising way to describe what we want from software in a modular manner. Their concepts and specifications are well-suited to be paired with LLMs to achieve the designer’s intent.”

Looking ahead, the researchers hope their work can influence how both industry and academia think about software architecture in the age of AI. “If software is to become more trustworthy, we need ways of writing it that make its intentions transparent,” says Jackson. “Concepts and synchronizations are one step toward that goal.”

More information:
Eagon Meng et al, What You See Is What It Does: A Structural Pattern for Legible Software, Proceedings of the 2025 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (2025). DOI: 10.1145/3759429.3762628

Citation:
Researchers propose a new model for legible, modular software (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-legible-modular-software.html

Fake or the real thing? How AI can make it harder to trust the pictures we see

Top row features genuine pictures of people with AI-generated versions underneath. Credit: Swansea University

A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.

Using the AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel created highly realistic images of both fictional and famous faces, including celebrities.

They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.

Across four experiments, the researchers noted that adding comparison photos or the participants’ prior familiarity with the faces provided only limited help.

The research has just been published in the journal Cognitive Research: Principles and Implications, and the team say their findings highlight a new level of “deepfake realism,” showing that AI can now produce convincing fake images of real people, which could erode trust in visual media.

Professor Jeremy Tree, from the School of Psychology, said, “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency.”

One of the experiments, which involved participants from the US, Canada, the UK, Australia and New Zealand, saw subjects shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos indicated just how plausible they were.

Another experiment saw participants asked if they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the study’s results showed just how difficult individuals can find it to spot the authentic version.

The researchers say AI’s ability to produce novel, synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence perceptions of both the identity and the brand or organization they are portrayed as supporting.

Professor Tree added, “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes. That is why we urgently need to find new ways to detect them.

“While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”

More information:
Robin S. S. Kramer et al, AI-generated images of familiar faces are indistinguishable from real photographs, Cognitive Research: Principles and Implications (2025). DOI: 10.1186/s41235-025-00683-w

Provided by
Swansea University


Citation:
Fake or the real thing? How AI can make it harder to trust the pictures we see (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-fake-real-ai-harder-pictures.html

‘Vibe coding’ named word of the year by Collins dictionary

Credit: Darlene Alderson from Pexels

“Vibe coding,” a word that essentially means using artificial intelligence (AI) to tell a machine what you want instead of coding it yourself, was on Thursday named the Collins Word of the Year 2025.

Coined by OpenAI co-founder Andrej Karpathy, the word refers to “an emerging software development practice that turns natural language into computer code using AI,” according to Collins Dictionary.

“It’s programming by vibes, not variables,” said Collins.

“While tech experts debate whether it’s revolutionary or reckless, the term has resonated far beyond Silicon Valley, speaking to a broader cultural shift toward AI-assisted everything in everyday life,” it added.

Lexicographers at Collins Dictionary monitor the 24 billion-word Collins Corpus, which draws from a range of media sources including social media, to create the annual list of new and notable words that reflect our ever-evolving language.

The 2025 shortlist highlights a range of words that have emerged in the past year to pithily reflect the changing world around us.

“Broligarchy” made the list in a year that saw tech billionaire Elon Musk briefly at the heart of US President Donald Trump’s administration and Amazon founder Jeff Bezos cozying up to the president.

The word is defined as a small clique of very wealthy men who exert political influence.

‘Coolcation’

New words linked to work and technology include “clanker,” a derogatory term for a computer, robot or source of artificial intelligence, and “HENRY,” an acronym for high earner, not rich yet.

Another is “taskmasking,” the act of giving a false impression that one is being productive in the workplace, while “micro-retirement” refers to a break taken between periods of employment to pursue personal interests.

In the health and behavioral sphere, “biohacking” also gets a spot, meaning the activity of altering the natural processes of one’s body in an attempt to improve one’s health and longevity.

Also listed is “aura farming,” the deliberate cultivation of a distinctive and charismatic persona and the verb “to glaze,” to praise or flatter someone excessively or undeservedly.

Although the list is dominated by words linked to technology and employment, one from the world of leisure bags a spot—”coolcation,” meaning a holiday in a place with a cool climate.

Last year’s word of the year was “Brat,” the name of UK singer Charli XCX’s hit sixth album, signifying a “confident, independent, and hedonistic attitude” rather than simply a term for a badly-behaved child.

© 2025 AFP

Citation:
‘Vibe coding’ named word of the year by Collins dictionary (2025, November 6)
retrieved 6 November 2025
from https://techxplore.com/news/2025-11-vibe-coding-word-year-collins.html

I’ve Tested a Lot of Bad, Cheap Laptops. These Ones Are Actually Good

Other Budget Laptops to Consider

Photograph: Daniel Thorp-Lancaster

The Acer Chromebook Plus Spin 714 for $750: The Acer Chromebook Plus Spin 714 (9/10, WIRED Recommends) checks a lot of boxes. It has a surprisingly premium feel for such an affordable machine, and the keyboard and trackpad are excellent for those of us who type all day long. It also has one of the best displays I’ve seen on a Chromebook, with fantastic colors that pop off the glossy touch display. It’s just a bit too expensive compared to something like the new Lenovo Chromebook Plus 14.

Acer Swift Go 14 for $730: The Acer Swift Go 14 (7/10, WIRED Recommends) has a chintzy build quality, a stiff touchpad, and lackluster keyboard backlighting, but it’s hard to beat the performance you get at this price. There’s also an array of ports that make it very versatile, including a microSD card slot. The Intel Core Ultra 7 155H chip with 16 GB of RAM packs a surprisingly powerful punch when it comes to productivity work, and our tester noted decent results in AI tasks as well. We averaged 11 hours in our battery test (with a full-brightness YouTube video on loop), which is respectable.

Asus Chromebook Plus CX34 for $260: If you want to stand out from the crowd a bit and don’t need Windows, the Asus Chromebook Plus CX34 (7/10, WIRED Recommends) is the best-looking Chromebook. When I got my hands on the CX34, I was impressed by its beautiful white design that stands out in a sea of gray slabs. It’s not left wanting for power, either, with the Core i5 CPU inside offering plenty of performance to easily handle multiple tabs and app juggling.


What Are Important Specs in a Cheap Laptop?

Read our How to Choose the Right Laptop guide if you want all the details on specs and what to look for. In short, your budget is the most important factor, as it determines what you can expect out of the device you’re purchasing. But you should consider display size, chassis thickness, CPU, memory, storage, and port selection. While appropriate specs can vary wildly when you’re considering laptops ranging from $200 to $800, there are a few hard lines I don’t recommend crossing.

For example, don’t buy a laptop if it doesn’t have a display resolution of at least 1920 x 1080. In 2025, there’s just no excuse for anything less than that. You should also never buy a laptop without at least 8 GB of RAM and 128 GB of storage. Even in Chromebooks, these specs are becoming the new standard. You’re selling yourself short by getting anything less. Another rule is to avoid a Windows laptop with an Intel Celeron processor—leave those for Chromebooks only.

Specs are only half the battle though. Based on our years of testing, laptop manufacturers tend to make compromises in display quality and touchpad quality. You can’t tell from the photos or listed specs online, but once you get the laptop in your hands, you may notice that the colors of the screen look a bit off or that the touchpad feels choppy to use. It’s nearly impossible to find laptops under $500 that don’t compromise in these areas, but this is where our reviewers and testers can help.

How Much RAM Do You Need in a Cheap Laptop?

The simple answer? You need at least 8 GB of RAM. These days, there are even some Windows laptops at around $700 or $800 that come with 16 GB of RAM standard, as part of the Copilot+ PC marketing push. That’s a great value, and ensures you’ll get the best performance out of your laptop, especially when running heavier applications or multitasking. Either way, it’s important to factor in the price of the RAM, because manufacturers will often charge $100 or even $200 to double the memory.

On Chromebooks, there are some rare occasions where 4 GB of RAM is acceptable, but only on the very cheapest models that are under $200. Even budget Chromebooks like the Asus Chromebook CX15 now start with 8 GB of RAM.

Are There Any Good Laptops Under $300?

Yes, but you need to be careful. Don’t just go buy a random laptop on Amazon under $300, as you’ll likely end up with an outdated, slow device that you’ll regret purchasing. You might be tempted by some of the cheap no-name models you’ll see there, but trust me—there are better options, some of which you’ll find in this guide.

For starters, you shouldn’t buy a Windows laptop under $300. That price puts you solidly in cheap Chromebook territory. While these are still budget-level in terms of quality, they’re better in almost every way than their Windows counterparts of a similar price. A good example is the Asus Chromebook CX15.

If you want a Windows laptop that won’t give you instant buyer’s remorse, you’ll need to spend at least a few hundred more. Once you hit $500 or $600, there are some more solid Windows laptops available, such as the Acer Aspire Go 14, though even there, you’re making some significant compromises in performance and storage capacity. These days, Windows laptops really start to get better in the $600-plus range.

Should You Buy a Chromebook or a Cheap Windows Laptop?

The eternal question. If you’re looking for a laptop under $500, I highly recommend that you opt for a Chromebook. I know that won’t be a possibility for everyone, as some have certain applications that require a Windows laptop or MacBook. If you do aim to get a Chromebook, make sure all your connected accessories and other devices are compatible.

Chromebooks give you access to a full desktop Chrome browser, as well as Android apps. While that leaves some gaps for apps that some may need, you might be surprised by how much you can get done without the need to install any software. Most applications have web versions that are every bit as useful.

While Chromebooks are most well-known as junky student laptops, the recent “Chromebook Plus” designation has filled in the gap between dirt-cheap Chromebooks and $800 Windows laptops. You’ll find some great Chromebook Plus options in the $400 to $600 range that have better performance and displays, while also looking a bit more like a modern laptop. The Lenovo Flex 5i Chromebook Plus is a great example of this. You can read more about the differences between Windows laptops and Chromebooks here.
