Similarities between human and AI learning offer intuitive design insights


New research has found similarities in how humans and artificial intelligence integrate two types of learning, offering new insights about how people learn as well as how to develop more intuitive AI tools.

The study is published in the Proceedings of the National Academy of Sciences.

Led by Jake Russin, a postdoctoral research associate at Brown University, the study trained an AI system and found that its flexible and incremental learning modes interact similarly to working memory and long-term memory in humans.

“These results help explain why a human looks like a rule-based learner in some circumstances and an incremental learner in others,” Russin said. “They also suggest something about what the newest AI systems have in common with the human brain.”

Russin holds a joint appointment in the laboratories of Michael Frank, a professor of cognitive and psychological sciences and director of the Center for Computational Brain Science at Brown’s Carney Institute for Brain Science, and Ellie Pavlick, an associate professor of computer science who leads the AI Research Institute on Interaction for AI Assistants at Brown.

Depending on the task, humans acquire new information in one of two ways. For some tasks, such as learning the rules of tic-tac-toe, “in-context” learning allows people to figure out the rules quickly after a few examples. In other instances, incremental learning builds on information to improve understanding over time—such as the slow, sustained practice involved in learning to play a song on the piano.

While researchers knew that humans and AI integrate both forms of learning, it wasn’t clear how the two learning types work together. Over the course of the research team’s ongoing collaboration, Russin—whose work bridges machine learning and cognitive science—developed a theory that the dynamic might be similar to the interplay of human working memory and long-term memory.

To test this theory, Russin used “meta-learning”—a type of training that helps AI systems learn about the act of learning itself—to tease out key properties of the two learning types. The experiments revealed that the AI system’s ability to perform in-context learning emerged after it meta-learned through multiple examples.

One experiment, adapted from an experiment in humans, tested for in-context learning by challenging the AI to recombine similar ideas to deal with new situations. If taught about a list of colors and a list of animals, could the AI correctly identify a combination of color and animal (e.g. a green giraffe) it had not seen together previously? After the AI meta-learned by being challenged with 12,000 similar tasks, it gained the ability to successfully identify new combinations of colors and animals.

The results suggest that for both humans and AI, quicker, flexible in-context learning arises after a certain amount of incremental learning has taken place.

“At the first board game, it takes you a while to figure out how to play,” Pavlick said. “By the time you learn your hundredth board game, you can pick up the rules of play quickly, even if you’ve never seen that particular game before.”

The team also found trade-offs, including between learning retention and flexibility: Similar to humans, the harder it is for AI to correctly complete a task, the more likely it will remember how to perform it in the future. According to Frank, who has studied this paradox in humans, this is because errors cue the brain to update information stored in long-term memory, whereas error-free actions learned in context increase flexibility but don’t engage long-term memory in the same way.

For Frank, who specializes in building biologically inspired computational models to understand human learning and decision-making, the team’s work showed how analyzing strengths and weaknesses of different learning strategies in an artificial neural network can offer new insights about the human brain.

“Our results hold reliably across multiple tasks and bring together disparate aspects of human learning that neuroscientists hadn’t grouped together until now,” Frank said.

The work also suggests important considerations for developing intuitive and trustworthy AI tools, particularly in sensitive domains such as mental health.

“To have helpful and trustworthy AI assistants, human and AI cognition need to be aware of how each works and the extent that they are different and the same,” Pavlick said. “These findings are a great first step.”

More information:
Jacob Russin et al, Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2510270122

Provided by
Brown University


Citation:
Similarities between human and AI learning offer intuitive design insights (2025, September 4)
retrieved 4 September 2025
from https://techxplore.com/news/2025-09-similarities-human-ai-intuitive-insights.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






This Ambitious Laptop Doesn’t Leave Much Room for Your Hands



The two USB-C ports are on the left side, alongside HDMI and a USB-A port. The second USB-A port, a microSD card slot, and a headphone jack are on the right. It’s a nice assortment of ports overall, though I just wish Acer had split the USB-C ports up so the laptop could have a charging port on either side.

Acer is using a top-notch 16-inch OLED touchscreen display on the Swift 16 AI. It has a resolution of 2880 x 1800, a refresh rate of 120 Hz, and color saturation as close to perfect as I’ve seen. Like most OLED laptops, it has a glossy, highly reflective display that maxes out at 315 nits of brightness, according to my testing. It’s nowhere near as bright as IPS or mini-LED displays, but the lower brightness is the price of the unbeatable contrast that only OLED can deliver.

A Risky Touchpad

Photograph: Luke Larsen

The full-size keyboard and oversized touchpad are definitely the most notable elements of this laptop. The first thing you notice is the touchpad, which is certainly the largest I’ve ever seen. You might think it looks a bit silly, but I always like it when companies leave as little wasted space on a product as possible. I really wanted to like this touchpad, but unfortunately, it could deter most people from buying this product.

On large laptops like the Swift 16 AI, which have a number pad to the right of the keyboard, the touchpad is typically below the keyboard, making it visually off-center. While it’s functional, this arrangement looks odd, and some 16-inch laptops get around this by omitting the number pad entirely. That’s what you see on the MacBook Pro, the Dell XPS 16, and most gaming laptops these days, too.

Rather than removing the number pad, Acer expanded the touchpad and centered it. This makes good use of the space below the keyboard, preserves the number pad, and solves the aesthetic annoyance that typically plagues full-size laptops.




Robotically assembled building blocks could make construction more efficient and sustainable




Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.

The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.

After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.

Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.

While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.

“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.

She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.

Designing better building blocks

Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.

“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” Gershenfeld says, based on prior work his lab has done on voxel assembly with NASA, Airbus, and Boeing.

To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.

Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.

“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.

To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.

“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.

The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.

Potential environmental benefits

They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.

For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent of the embodied carbon of those two methods, respectively.

“There is still a potential viable option for a plastics-based voxel approach, we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.

In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.

These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.

“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.

The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.

The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.

Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.

“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.

This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.




Container storage in the AI age: Block vs object and CSI vs container-native | Computer Weekly



When containers started out, they were meant to be ephemeral – stateless, disposable and data-light. But that’s all changed. As Gartner notes, use cases for containers have evolved to include analytics and artificial intelligence (AI) processing, and by 2028, it predicts 15% of on-premise production workloads will run in containers. That’s a 300% increase since 2022.

Now, while containers themselves retain all the benefits of ephemerality – rapidly reproducing, then dying back just as quickly to account for workload spikes – the storage attached to them cannot live by the same rules.

As enterprises move from proofs of concept to running a big chunk of production workloads in containers, the storage layer has become a pivot point. While the early days were focused on simple web scaling, containers have now moved into the realm of mission-critical databases, massive data science pipelines, and the power-hungry world of generative AI (GenAI).

The challenge lies in navigating key choices such as file versus block versus object storage, CSI versus container-native storage, and whether to go for a dedicated container storage platform.

Containerisation is lightweight virtualisation

Containerisation is a lightweight form of virtualisation. Unlike traditional virtual machines (VMs) that require a hypervisor and a full guest operating system (OS), containers share the host server’s OS. This makes them lighter, faster to scale and more portable. They are built on microservices principles that break monolithic applications into discrete, application programming interface (API)-linked components in a way that aligns with DevOps methodologies.

While several orchestrators exist (for example, Docker Swarm and OpenShift), Kubernetes is the market leader. It manages a cluster of nodes, on which pods run the containers. Clusters are groups of nodes managed by a control plane, which is where we find the API server, a scheduler for pod placement, a controller to maintain the desired state, and etcd, which stores cluster configuration.

As originally conceived, container storage was ephemeral, and data vanished when a pod was deleted. So, to support enterprise applications, Kubernetes developed persistent volumes (PV), which are attached to a cluster and decouple storage from compute to allow applications to remain portable while maintaining access to data.
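As a minimal sketch of that decoupling (names, sizes and the image are illustrative, not from any specific deployment), an application claims storage through a persistent volume claim and mounts it into a pod; the data then survives the pod’s deletion:

```yaml
# Illustrative PVC: the application asks the cluster for 10Gi of storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# Pod that mounts the claim; deleting the pod leaves the volume (and data) intact
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx          # placeholder workload
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```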

CSI vs container-native storage

Container Storage Interface (CSI) is a standard that allows storage suppliers – more than 130 drivers are available – to expose their systems to Kubernetes. CSI allows Kubernetes to trigger advanced data services such as snapshots, cloning and automated provisioning across block, file and object storage in on-premise and cloud environments.

CSI is essentially a “broker”. It is an industry-standard API that acts as a middleman, allowing Kubernetes to talk to external storage arrays. For example, when a developer requests storage via a persistent volume claim (PVC), the CSI driver tells the external storage box to carve out a piece of capacity and plug it into the container. The advantage is that you get to use the expensive, reliable enterprise storage you already own, but the storage is still “outside” the cluster, and if you move containers to a different cloud or datacentre, that external hardware might not be there.

Meanwhile, container-native storage is storage that lives inside the Kubernetes cluster. It is usually deployed as a set of containers itself. It takes specified drives attached to Kubernetes nodes and pools them together into one big virtual resource.

Container-native storage potentially has the advantage of portability – on-premise, in the cloud, and so on, by virtue of the virtualisation inherent – while CSI is more likely to tie a deployment to deployed storage arrays.

Container-native storage is location independent, so you can run the same setup on-premise or in the cloud. But it can consume central processing unit (CPU) and random access memory (RAM) from your Kubernetes nodes to manage the data, which may be a concern. 
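To make the in-cluster model concrete, Rook’s Ceph operator is one widely used container-native example. The fields below are abbreviated and the image tag is illustrative, but the sketch shows the key idea: the storage system itself runs as pods and pools node-attached drives:

```yaml
# Sketch of container-native storage via the Rook Ceph operator
# (abbreviated; image tag and paths are illustrative)
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18    # illustrative tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                         # Ceph monitors run as pods inside the cluster
  storage:
    useAllNodes: true                # pool drives from every Kubernetes node
    useAllDevices: true              # claim any unused device on those nodes
```

This is also where the CPU and RAM cost mentioned above comes from: replication and monitoring run on the same worker nodes as the applications.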

Do we need containers to be that portable?

CSI offers connection to big-iron fully featured storage, and container-native storage holds the promise of flexible deployment, portability, and so on.  But is portability that important? Eric Phenix, who leads the engineering practice at analysts GigaOm, says not. 

“Containers offer a compute abstraction layer that allows the application to be infrastructure agnostic, rather than a solution that is designed to make applications more portable,” he says.

Phenix argues that while containers make the code agnostic, deployment is another matter. “Unless a company is specifically a customer-facing instanced PaaS [platform as a service] where they need to run on every cloud, I don’t see the need to run the same workload on multiple clouds. Once things are deployed, they’re always messy to migrate,” he says.

And this “messiness” is almost always a data problem, according to Phenix. While the container image can move in seconds, the multi-terabyte persistent volume attached to it cannot.

James Brown, an analyst at GigaOm, points out that container-native storage is essentially software-defined storage and brings its own lock-ins. “Heavily integrated, container-native supplier platforms risk replacing hardware lock-in with software lock-in. Tying your architecture to proprietary in-cluster storage features creates massive migration hurdles, effectively breaking the core portability promise of Kubernetes,” he says.

So, the choice here comes down to just how portable you need things to be. Enterprises often use a hybrid approach: CSI to connect to massive, high-performance arrays for their heaviest databases; container-native storage for modern, distributed apps that need to be able to move without a “messy” data migration.

In 2026, choosing the correct storage protocol for container storage is all about playing in a “mixed economy”, with a Kubernetes cluster able to pull from all three formats simultaneously.

Block for high performance

Block storage presents data as a raw, unformatted volume – like a physical hard drive – that is attached to a single node at a time. In Kubernetes, this is typically handled via persistent volumes using the ReadWriteOnce (RWO) access mode.

Block storage can be in on-premise arrays or in the cloud, such as in Amazon Elastic Block Store (EBS), Google Persistent Disk, or Microsoft Azure Disk.

Block storage offers the lowest latency and highest input/output operations per second (IOPS) because there is no filesystem overhead between the application and the storage. That makes it ideal for databases where small, frequent updates happen at specific locations within files.

When it comes to the cons, most block storage cannot be mounted to multiple pods across different nodes simultaneously, and scaling usually requires resizing the volume and expanding the filesystem. Block storage is generally the most expensive, too.

File for directory access

File storage provides a shared hierarchical namespace (folders and files) accessible over a network. In Kubernetes, it is the primary way to achieve ReadWriteMany, allowing multiple pods on different nodes to read and write to the same data.

It is also available in on-premise storage or cloud services such as Amazon Elastic File System (EFS), Microsoft Azure Files and Google Filestore.

File access is perfectly suited for horizontal scaling of web servers where all pods need access to the same assets, and most legacy applications are built to read/write to a standard directory structure. Compared to block access, network protocols like NFS or SMB introduce more latency, and at large scales (millions of files), traversing deep directory trees can become extremely slow. Meanwhile, handling concurrent writes across many pods can lead to file locking conflicts if not managed carefully.
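The block/file distinction surfaces directly in the PVC’s `accessModes` field. A sketch contrasting the two (class names are illustrative):

```yaml
# Block-style volume: attached to one node at a time
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce            # single-node attach, typical of block storage
  storageClassName: block-sc   # illustrative class name
  resources:
    requests:
      storage: 50Gi
---
# File-style shared volume: many pods on many nodes read and write it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets
spec:
  accessModes:
    - ReadWriteMany            # shared access, typical of NFS/SMB-backed file storage
  storageClassName: file-sc    # illustrative class name
  resources:
    requests:
      storage: 500Gi
```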

Object for sizeable datastores

Object storage manages data as discrete objects in a flat namespace and is accessed via APIs (for example, S3 or Swift) rather than being “mounted” like a disk. It’s the cloud-native storage protocol, though it can run on-site, too. Examples include Amazon Simple Storage Service (S3), MinIO, Google Cloud Storage and Ceph RGW. Object storage can store petabytes of data without worrying about partition limits or disk sizes, and is usually the cheapest option for large-scale unstructured data (logs, images, backups).

Object storage is ideal for modern “cloud-native” apps that talk directly to storage via HTTP/HTTPS, bypassing the OS kernel entirely.

On the negative side, object storage is generally the slowest for transactional work: throughput is high, but latency is higher than block or file. Meanwhile, you can’t “edit” a single line in a file; you must re-upload the entire object to change it.

Storage protocol decision-making

In summary, block storage is expensive but the best performing, file storage is less costly but with scale restrictions, and object storage is great for huge capacity but also lags in performance terms. So, which one to choose? It’s a case of horses for courses, according to Tony Lock, director of engagement and distinguished analyst at Freeform Dynamics.  

“In an ideal world, the choice of underlying storage – block, file or object – will likely depend on what the app is, where the organisation wishes to run it, and what its characteristics are in terms of size, number of containers, latency requirements, security, location, cost, etc,” he says.

Meanwhile, Whit Walters, field chief technology officer at GigaOm, believes S3 is winning the battle, but block has its place. He says: “The real story is protocol bifurcation inside AI pipelines. Object storage dominates the ingestion and data lake tier, offering exabyte-scale horizontal scaling with rich, customisable metadata that enables semantic discovery natively at the storage layer.

“Block storage still owns the inference hot path where vector databases demand 500,000+ IOPS, however.

“The emerging trend to watch is COSI, the Container Object Storage Interface, which aims to make object storage buckets first-class Kubernetes resources with standardised, declarative lifecycle management.”
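A COSI claim looks much like a PVC, but for a bucket. A hedged sketch — at the time of writing the API is alpha, so the group/version and field names below may change, and the class name is illustrative:

```yaml
# Illustrative COSI BucketClaim (alpha API; fields subject to change)
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: training-data
spec:
  bucketClassName: s3-standard   # illustrative class, defined by the object store's COSI driver
  protocols:
    - s3
```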

CSI vs container-native in storage supplier container platforms

All the big storage suppliers provide some form of platform or wrapper for container storage. These include Dell’s Container Storage Modules, HPE’s Ezmeral Runtime Enterprise, the Hitachi Kubernetes Service (HKS), NetApp’s Astra and Pure Storage’s Portworx.

What they all have in common is a means of managing container storage – and in some cases, data protection and more. Where they differ under the hood is that most are based around CSI, so they provide a layer from which to manage CSI drivers to their storage.

Some differ in that they provide their management functionality from within Kubernetes. Pure Storage’s Portworx, for example, lives entirely within Kubernetes but uses CSI as a “handshake” with external storage.

Meanwhile, HPE Ezmeral also runs in Kubernetes but accesses data via the CSI driver. NetApp’s Astra Datastore was container-native in a similar way to Portworx, but was discontinued in 2023.

While all the key storage suppliers offer products that can manage storage for containers, be sure to check the extent to which these are container-native or dependent on CSI. CSI connectivity may well be better suited to larger, more static environments, while container-native solutions can be best for more dynamic sets of workloads.

GigaOm’s Walters puts a finer point on it: “The Kubernetes tax is real, but it’s a trade-off. Container-native platforms run replication, dedupe and encryption on worker nodes. Ceph alone carries a 2-10% baseline CPU penalty per node just for cluster quorum, and that spikes hard during replica rebuilds.

“In GPU [graphics processing unit]-dense AI environments, where every cycle counts, offloading that work to dedicated array ASICs [application-specific integrated circuits] via an advanced CSI model keeps compute nodes clean. But in multicloud or edge scenarios without dedicated arrays, that CPU tax buys you topology-aware placement and self-healing automation that’s genuinely hard to replicate otherwise.”

There may also be performance considerations in terms of contention for resources, as well as questions about how they are administered. 

Towards autonomous, agentic storage

As we look towards 2027, the focus is shifting from manual provisioning to policy-driven storage.

The ultimate goal is a system where the storage “senses” workload requirements. For example, if an AI training container spins up, the system automatically provisions high-throughput file storage, or if a database scales up, it gets low-latency block storage.


