Tech
Artificial neuron merges DRAM with MoS₂ circuits to better emulate brain-like adaptability
The rapid advancement of artificial intelligence (AI) and machine learning systems has increased the demand for new hardware components that could speed up data analysis while consuming less power. As machine learning algorithms draw inspiration from biological neural networks, some engineers have been working on hardware that also mimics the architecture and functioning of the human brain.
Brain-inspired, or neuromorphic, hardware typically integrates components that mimic the functioning of brain cells and are thus referred to as artificial neurons. Artificial neurons are connected to one another, with their connections weakening or strengthening over time.
This process resembles synaptic plasticity, the ability of the brain to adapt over time in response to experience and learning. By emulating synaptic plasticity, neuromorphic computing systems could run machine learning algorithms more efficiently, consuming less energy when analyzing large amounts of data and making predictions.
Researchers at Fudan University have recently developed a device based on the ultrathin semiconductor monolayer molybdenum disulfide (MoS₂) that could emulate the adaptability of biological neurons better than other artificial neurons introduced in the past. The new system, introduced in a paper published in Nature Electronics, combines a type of computer memory known as dynamic random-access memory (DRAM) with MoS₂-based circuits.
“Neuromorphic hardware that accurately simulates diverse neuronal behaviors could be of use in the development of edge intelligence,” Yin Wang, Saifei Gou and their colleagues wrote in their paper.
“Hardware that incorporates synaptic plasticity—adaptive changes that strengthen or weaken synaptic connections—has been explored, but mimicking the full spectrum of learning and memory processes requires the interplay of multiple plasticity mechanisms, including intrinsic plasticity. We show that an integrate-and-fire neuron can be created by combining a dynamic random-access memory and an inverter that are based on wafer-scale monolayer molybdenum disulfide films.”
The artificial neuron developed by the researchers has two key components: a DRAM system and an inverter circuit. DRAMs are memory systems that can store electrical charges in structures known as capacitors. The amount of electrical charge in the capacitors can be modulated to mimic variations in the electrical charge across the membrane of biological neurons, which ultimately determine whether they will fire or not.
An inverter, on the other hand, is an electronic circuit that can flip an input signal from high voltage to low voltage or vice versa. In the team’s artificial neuron, this circuit enables the generation of bursts of electricity resembling those observed in biological neurons when they fire.
“In the system, the voltage in the dynamic random-access memory capacitor—that is, the neuronal membrane potential—can be modulated to emulate intrinsic plasticity,” wrote the authors. “The module can also emulate the photopic and scotopic adaptation of the human visual system by dynamically adjusting its light sensitivity.”
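The behaviour described above is essentially that of a leaky integrate-and-fire neuron. The sketch below is a software analogy, not the paper's circuit: the variable `v` plays the role of the DRAM capacitor voltage (the membrane potential), and the threshold test stands in for the inverter's switching point. All parameter values are illustrative.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The 'membrane potential' v accumulates input charge (like the DRAM
    capacitor voltage) and leaks toward rest each step; crossing the
    threshold triggers a spike (like the inverter flipping) and a reset.
    """
    v = reset
    spikes = []
    for x in inputs:
        v = leak * v + x          # integrate input, with capacitor-like leakage
        if v >= threshold:        # inverter-like threshold crossing
            spikes.append(1)      # emit a spike
            v = reset             # discharge the capacitor
        else:
            spikes.append(0)
    return spikes
```

Stronger or more frequent inputs charge the "capacitor" faster, so the neuron fires more often; modulating `leak` or `threshold` is a crude software stand-in for the intrinsic plasticity the authors emulate in hardware.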
To assess the potential of their artificial neuron, the researchers fabricated several and assembled them into a 3 × 3 array. They then tested the array's ability to adapt its responses to changing light levels, mimicking how the human visual system adjusts to different lighting conditions. Finally, they used the system to run an image recognition model and assessed its performance.
“We fabricate a 3 × 3 photoreceptor neuron array and demonstrate light coding and visual adaptation,” wrote the authors. “We also use the neuron module to simulate a bioinspired neural network model for image recognition.”
The artificial neuron developed by Wang, Gou and their colleagues has proved to be very promising so far, particularly for the energy-efficient implementation of computer vision and image recognition models. In the future, the researchers could fabricate other bio-inspired computing systems based on the newly developed device and test their performance on other computational tasks.
Written for you by our author Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan—this article is the result of careful human work. We rely on readers like you to keep independent science journalism alive.
More information:
Yin Wang et al, A biologically inspired artificial neuron with intrinsic plasticity based on monolayer molybdenum disulfide, Nature Electronics (2025). DOI: 10.1038/s41928-025-01433-y.
© 2025 Science X Network
Citation:
Artificial neuron merges DRAM with MoS₂ circuits to better emulate brain-like adaptability (2025, August 30)
retrieved 30 August 2025
from https://techxplore.com/news/2025-08-artificial-neuron-merges-dram-mos.html
Mom’s Microwaved Coffee Won’t Stand a Chance With This Ember Smart Mug Deal
The Ember Smart Mug 2 is niche, but it has a loyal following. Even though we think there are better mug warmers on the market, Ember is like Apple AirPods or Kleenex. People want what they want. Right now, for Mother’s Day, the Ember Smart Mug 2 is on sale for just under $100, a 30 percent discount and a match of the very best price we’ve tracked. You can save at Amazon, Best Buy, and the manufacturer’s website.
This smart mug is probably overkill. It has a smartphone app that notifies you when your coffee reaches the ideal temperature, and its onboard light also provides a visual indicator that your brew is ready. It intelligently adjusts power usage to keep your drink warm when you’re nearby, and turns off when you’re not around. The self-heating mug is on sale in a few variations—10 or 14 ounces, in blue, white, black, and purple.
The mug offers up to 80 minutes of powered heating time, or you can pop it on the included charging coaster to keep the battery going all day. And you don’t need the smartphone app unless you want to precisely dictate your coffee temperature—the mug defaults to 135 degrees Fahrenheit without your specific input.
Our main gripe is that this proprietary warming system is not dishwasher safe. You need to hand-wash each component, and ensure you do so carefully, because the items are not cheap to replace. But if Mom has been putzing around the house drinking perpetually microwaved coffee, perhaps an upgrade is in order. We have additional recommendations in our guide to the Best Coffee Warmers. You may also want to check our related stories on the Best Espresso Machines, Best Coffee Machines, and Best Pod Coffee Makers.
AI-Designed Drugs by a DeepMind Spinoff Are Headed to Human Trials
Google DeepMind’s AlphaFold has already revolutionized scientists’ understanding of proteins. Now, the ability of the platform to design safe and effective drugs is about to be put to the test.
Isomorphic Labs, the UK-based biotech spinoff of Google DeepMind, will soon begin human trials of drugs designed by its Nobel Prize–winning AI technology. “We’re gearing up to go into the clinic,” Isomorphic Labs president Max Jaderberg said on April 16 at WIRED Health in London. “It’s going to be a very exciting moment as we go into clinical trials and start seeing the efficacy of these molecules.”
Jaderberg did not elaborate on the timeline, but it’s later than the company had planned to initiate human studies. Last year, CEO Demis Hassabis said it would have AI-designed drugs in clinical trials by the end of 2025.
Isomorphic Labs was founded in 2021 as a spinoff from Alphabet’s AI research subsidiary, Google DeepMind. The company uses DeepMind’s AlphaFold, a groundbreaking AI platform that predicts protein structures, for drug discovery.
Built from 20 different amino acids, proteins are essential for all living organisms. Long strings of amino acids link together and fold up to make a protein’s three-dimensional structure, which dictates the protein’s function. Researchers had tried to predict protein structures since the 1970s, but this was a painstaking process given the astronomically high number of possible shapes a protein chain can take.
That changed in 2020, when DeepMind’s Hassabis and John Jumper presented stunning results from AlphaFold 2, which uses deep-learning techniques. A year later, the company released an open-source version of AlphaFold available to anyone.
In 2024, DeepMind and Isomorphic Labs released AlphaFold 3, which advanced scientists’ understanding of proteins even further. It moved beyond modeling proteins in isolation to predicting other important molecules, such as DNA and RNA, and their interactions with proteins.
“This is exactly what you need for drug discovery: You need to see how a small molecule is going to bind to a drug, how strongly, and also what else it might bind to,” Hassabis told WIRED at the time.
Since its release, the AlphaFold platform has predicted the structure of virtually all 200 million proteins known to researchers and has been used by more than 2 million people in 190 countries. The breakthrough earned Hassabis and Jumper the 2024 Nobel Prize in Chemistry, with the Nobel committee noting that AlphaFold has enabled a number of scientific applications, including a better understanding of antibiotic resistance and the imaging of enzymes that can decompose plastic.
Earlier this year, Isomorphic Labs announced an even more powerful tool: IsoDDE, its proprietary drug-design engine. In a technical paper, the company claims the platform more than doubles the accuracy of AlphaFold 3.
The startup has formed partnerships with Eli Lilly and Novartis to work together on AI drug discovery and is also advancing its own “broad and exciting pipeline of new medicines” in oncology and immunology, Jaderberg said.
“The exciting thing about the molecules that we’re designing is because we have so much more of an understanding about how these molecules work, we’ve engineered them to be very, very potent,” Jaderberg told the audience at WIRED Health. “You can take them at a much lower dose, and they’ll have lower side effects, off target effects.”
Last year, Isomorphic appointed a chief medical officer and announced it had raised $600 million in its first funding round to gear up for clinical trials. Meanwhile, the company has been building a clinical development team. Its mission is to “solve all disease.”
“It’s a crazy mission,” Jaderberg said. “But we really mean it. We say it with a straight face, because we believe this should be possible.”
Wiz founder: Hack yourself with AI, before the bad guys do | Computer Weekly
Security leaders should be turning offensive AI cyber tools on their own systems before threat actors do, exploiting defenders' innate advantage to take the high ground and improve their chances of withstanding a cyber attack.
So says Yinon Costica, co-founder of Google-owned Wiz, who, speaking at Google Cloud Next in Las Vegas, argued that defenders can beat attackers by using AI to exploit an advantage that may not be obvious at first glance: context.
“The same AI model can obviously produce very different results based on the context that we feed into it,” said Costica. “Now, attackers hopefully have much less context about us while as defenders we do have a lot of context about our environments that we can share with the model.
“If, as defenders, we take the first movers’ advantage and we use the AI against ourselves, with the context we have, we actually stand a chance to win…. But we need to act fast,” he said.
“We need to start using AI against ourselves as much as possible, whether it’s to scan attack surfaces, scan code, scan anything, in order to be the first one to see the results and not to wait for the bad guys to do it before us.”
As speed becomes ever more of the essence in cyber security, Costica conceded that this would be a challenge for defenders – but noted that the tools to do this are rapidly becoming available. To try to help, Wiz unveiled three new AI agents at Google Cloud Next – red, green and blue – which are named for the human cyber teams they are designed to help.
“What agents allow us to do is really to get to the next level of acceleration [and] automation of security work,” said Costica.
The red agent is designed to assist red team penetration testing work by probing deep into its owner's IT estate, identifying potential exposures such as application programming interfaces (APIs), end-of-life edge networking kit or operational technology (OT) assets, and running penetration tests on them. The green agent follows on by automating the triage process, which can be time-consuming for human analysts. Finally, the blue agent acts as a detective, doing investigative work that can also be lengthy for human teams.
“These three agents together form a layer that is autonomous and automated. It’s not revolutionary, in that it aligns closely with how security teams have been working for many years, but now it allows each team to automate their workflows,” said Costica.
“It’s like living in the future in the eyes of security teams because it means that from the moment they find a risk, they can automate the process to find who owns it and deliver the code fix to complete and redeploy to production.”
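Wiz has not published the internals of these agents. Purely as an illustration of the red → green → blue hand-off described above, the workflow can be sketched as a toy pipeline; every name, field and rule here is hypothetical, not Wiz's API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    issue: str
    severity: str = "unknown"
    owner: str = ""

def red_scan(assets):
    """Hypothetical 'red' stage: probe assets and flag exposures."""
    return [Finding(asset=a, issue="exposed API") for a in assets if a.endswith("-api")]

def green_triage(findings):
    """Hypothetical 'green' stage: prioritise findings and assign owners."""
    for f in findings:
        f.severity = "high" if "api" in f.issue.lower() else "low"
        f.owner = f"team-{f.asset.split('-')[0]}"
    return findings

def blue_investigate(findings):
    """Hypothetical 'blue' stage: turn each finding into a remediation ticket."""
    return [f"fix {f.issue} on {f.asset} (sev={f.severity}, owner={f.owner})"
            for f in findings]

# Chain the three stages, mirroring the article's red -> green -> blue flow.
tickets = blue_investigate(green_triage(red_scan(["payments-api", "intranet-wiki"])))
```

The point of the sketch is the hand-off: each stage consumes the previous stage's output, which is what lets the whole chain run without a human in the loop.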
A little over a month on from the closing of the $32bn acquisition of Wiz – Google's largest purchase to date – the two organisations reaffirmed their commitment to providing a unified security platform, retaining Wiz's brand, that will enhance the speed with which customers detect, prevent and respond to threats, especially emerging ones created using AI.
The duo also claims their combined capability will accelerate adoption of multicloud security and spur more confidence in innovation around cloud and AI. Wiz's products will continue to be made available across other platforms, including Amazon Web Services (AWS), Microsoft Azure and Oracle Cloud. Wiz also announced support for Databricks and for agent studios such as AWS Agentcore, Microsoft Azure Copilot Studio, Salesforce Agentforce and Google's Gemini Enterprise Agent Platform, and it continues to support security ecosystems with integrations at the outer layer of the cloud, including Google Cloud Apigee, Cloudflare AI Security for Apps and the Vercel platform.
Behind the scenes, Wiz has also updated how it integrates security detections from Wiz Defend with Google Security Operations and Mandiant Threat Defence to make life easier for human analysts.
And it announced new capabilities to secure the AI-native deployment cycle. These include scanning vibe-coded applications for issues; AI-generated code scanning and vulnerability remediation; agent-based remediation, allowing teams to automate remediation workflows; and an AI bill of materials (AI-BOM) to keep on top of the use of shadow AI for coding.
