Tech
Splunk.conf: Splunk urges users to eat their ‘cyber veggies’ | Computer Weekly
Organisations’ lack of attention to some of the most basic tenets of cyber hygiene not only continues to hamstring defenders, but increasingly leaves the door wide open, not only to career cyber criminals using tried-and-tested tactics, but also to less sophisticated actors exploiting artificial intelligence (AI) agents and models to power attacks at scale, an emerging phenomenon that experts at data observability specialist Splunk are calling vibe-hacking.
Speaking at a session held at this year’s Splunk.conf, taking place in Boston this week, Splunk cyber executives lamented poor security practice and called on businesses to “eat their cyber vegetables”, while acknowledging that CISOs have a mountain to climb to do so.
Ryan Fetterman, senior security strategist at Cisco Foundation AI and Splunk SURGe, said his historical position had been to tell people not to get too worked up about AI changing the nature of cyber attacks, because threat actors were typically using such models to recreate the same methodologies favoured by humans, albeit at scale and more efficiently.
However, he said, this was clearly now changing. He noted in particular the emergence of PromptLock, an AI-powered ransomware discovered by ESET researchers at the end of August, although this turned out to be a proof of concept (PoC) developed by engineers at the Tandon School of Engineering at New York University (NYU).
“Cyber vegetables are important,” said Fetterman. “The reason for that is because the bar has been lowered for attackers using AI to scale their attacks and require less sophistication to do the things that they want to do. That makes it easier to find the low-hanging fruit for things like ransomware.”
Fetterman detailed an example of a ransomware incident in which the threat actor engaged in vibe-hacking – a nefarious bedfellow to the marginally more benign vibe-coding phenomenon.
He explained how the attacker used an AI agent to help conduct a full ransomware attack chain from initial target reconnaissance to vulnerability exploitation to execution and encryption. If this wasn’t already bad enough, they were also able to scale this attack chain across a total of 16 victims.
“I think that is scary because that can obviously scale to more attackers and scale to more victims, and now the targets that may not have been appealing from a financial perspective previously can in aggregate bring more of a return for those attackers, and maybe organisations that would have been lower on the priority list are fair game,” said Fetterman.
Splunk CISO Michael Fanning told Computer Weekly that nailing the basics was the most important part of any cyber security programme.
“I think very often we chase these shiny new technologies and capabilities and often they are a solution looking for a problem,” he said. “We need to think about what are the problems we’re trying to solve.
“When you learn how to play basketball, you start by learning how to make a layup, how to shoot free throws, how to play defense – and those are some of the hallmarks of a good team, there’s nothing fancy about that,” added Fanning. “The same is true with running cyber security – really nailing the basics in the core domains of cyber security is just an integral part of actually protecting your environment.”
Fanning acknowledged that it is understandable that some security leaders might give into novelty. However, he added: “Usually when that happens that’s indicative of a lack of strategy for your organisation.”
Security leaders who have defined their top security initiatives and objectives can better keep their teams focused on what truly matters, and on the right track, and avoid such “pet projects” that serve only to distract and increase risk, said Fanning.
Affordability Doesn’t Suck With Eufy’s Newest Robot Vac
Where the X10 Pro Omni had rotating mop pads, the rolling mop pad on the Omni C28 continuously self-cleans to prevent spreading dirt or grime to other parts of the house. Both apply downward pressure, but neither can spot dirtier places on their own as pricier, AI-powered robot vacuums will. Still, I was happy to see that it was able to scrub away some of the large dirt smudges in my entryway, though it didn’t get all of them. It also didn’t manage to scrub away all of the cherry juice I intentionally spilled in my routine mess setup for robot vacuum testing, even after sending the vacuum to do a second mopping job on one of the spots.
Photograph: Nena Farrell
Still, the Omni C28 was able to raise its roller mop high enough when it switched from mopping my floors to vacuuming my living room rug that there was no hint of dampness anywhere. The older X10 did get my colleague Adrienne So’s carpet wet, but it didn’t get mine wet, though my carpet is a fairly low pile. It did a fine job vacuuming the carpet, though I could tell the difference in suction between this and more powerful vacuums I’ve tested.
The base station is nice and compact, and includes drying fans to dry off the roller mop. That does mean there’s a gentle fan noise in the background for a couple of hours after you use this robot vacuum, which was more annoying than I expected, but you could easily place this vacuum’s base station in a less central spot in your home so you don’t hear it. You could also set up a schedule for the vacuum to run in the morning and finish its drying job before you get home.
Multi-Floor Madness
Photograph: Nena Farrell
My favorite feature on the Omni C28 is that, even at this price point, it can still learn multiple maps. While it can’t climb up stairs, you can move it around your home and switch the maps in the app to the floor you’ve relocated to. This isn’t new for Eufy, as the older affordable model can do that too, but it’s nice to see the feature maintained when I’ve tried more expensive robot vacuums that don’t include it. It’s pretty simple to use; you’ll go to the maps, select “make a new map,” and then activate the robot to map. Once the map is made, you’ll switch to that map from the little map icon on the right side, which will label them with numbers in the order you created them.
‘She’s Never Going to Age’: Porn Stars Are Embracing AI Clones to Stay Forever Young
Lisa Ann technically quit the porn business in 2019, but for $30 a month you can now dream up any X-rated scenario of her on your computer.
Ann, 53, was an adult performer for three decades starting in the mid-1990s and retired because she had reached her savings goal.
But last year she had a change of heart. Ann, who considers herself an AI fanatic, signed a contract with OhChat, a London-based AI companion company, to license her likeness on its platform, essentially creating an AI version of her in every way that can be used to make sex scenes for paying customers: same voice, same physique, and same pillowy brown hair.
As issues around deepfakes intensify and questions about the future of the adult industry become more dire with the passing of age-verification laws, several AI companion platforms want to create a new standard for consent-driven AI porn. More than sexting a faceless chatbot, digital twins—also called duplicates, doubles, clones, or replicas—draw on the exact likeness, including speech and mannerisms, of your favorite performers and creators.
Ann, now a self-help author and sports radio host, represents a growing faction in adult entertainment who not only believe AI is going to reshape the sex industry but who want a say in how that change materializes. She sees the decision to partner with OhChat as a way to tap into a fountain of youth—and stay at her peak forever.
“This keeps my name alive,” she says of her digital twin. “She’s never going to age.”
For Cherie Deville, a 47-year-old performer known for shooting MILF content, digital twins are just a smart business strategy to earn passive income while the opportunity is hot. “We can either let the makers of AI take the lion’s share of the money in the sex-work space, or creators and businesses can get on board and start creating their own revenue sources through AI.”
OhChat creators, who must submit 30 images and undergo voice training with a bot, sign an agreement stating the level of sexual content allowed for their digital twin. Ann is considered a “Level 4”—the highest on the platform—which means paying members can create scenarios and chats of her that include full nudity and sex. Per the company’s guidelines, clones can be deleted at any time.
“For guys that like to say good morning or good night, they now have that access. The fact that I’m not shooting scenes anymore also allows new scenes to be created,” Ann says.
Once described by CEO Nic Young as the “love child between OnlyFans and OpenAI,” OhChat launched in 2024 and has since scaled to over 400,000 users. According to data shared with WIRED, OhChat has 250 creators, 90 percent of whom are female, and has contracts with celebrities Carmen Electra and Joe Exotic. The platform runs on a tiered subscription model—$5 a month for on-demand texts or up to $30 for unlimited adult content—and the company, like OnlyFans, takes a 20 percent cut.
Other competitors in the space include My.Club, Joi AI and SinfulX AI, the platform that adult film actress Georgia Koneva partnered with this month, saying, in a press statement, that her avatar gave her a “new way to share my voice and personality with the people who follow me.” According to SinfulX AI, it also develops “original” synthetic characters using licensed source imagery from adult performers whose content it has the rights to use. In the same statement, the company said that those AI-generated “characters” are “designed not to replicate any single individual while still maintaining the realism for which its content is known.”
AI system learns to keep warehouse robot traffic running smoothly
Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.
To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.
The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.
In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different quantities of robots or varied warehouse layouts.
“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.
Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.
Rerouting robots
Simultaneously coordinating hundreds of robots in an e-commerce warehouse is no easy task.
The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.
Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.
But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.
“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.
The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model to take observations of the warehouse environment and decide how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.
Over time, the neural network learns to coordinate many robots efficiently.
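The prioritization step described above can be sketched in miniature. In the sketch below, a hand-written congestion heuristic stands in for the trained neural network; the scoring rule, field names, and example robots are illustrative assumptions, not the paper’s actual model or data:

```python
# Toy stand-in for the learned prioritizer: score each robot by how
# congested the next few cells of its planned path are, then let the
# most-blocked robots move first. In the real system a neural network
# trained with deep reinforcement learning produces these scores.

def congestion_score(robot, occupied):
    """Fraction of the robot's next few path cells already occupied by others."""
    lookahead = robot["path"][:4]
    if not lookahead:
        return 0.0
    blocked = sum(1 for cell in lookahead if cell in occupied)
    return blocked / len(lookahead)

def prioritize(robots):
    """Order robots so those closest to getting stuck are handled first."""
    occupied = {r["pos"] for r in robots}
    return sorted(robots, key=lambda r: congestion_score(r, occupied), reverse=True)

# Robot A's upcoming path runs through B's current cell, so A is about
# to get stuck and should receive priority.
robots = [
    {"id": "A", "pos": (0, 0), "path": [(0, 1), (0, 2), (0, 3)]},
    {"id": "B", "pos": (0, 2), "path": [(1, 2), (2, 2)]},
]
order = [r["id"] for r in prioritize(robots)]
```

The point of learning this step, rather than hand-writing it as above, is that the trained policy can discover which features of the warehouse state actually predict gridlock, instead of relying on a fixed lookahead rule.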
“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.
It is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.
By predicting current and future robot interactions, the model plans to avoid congestion before it happens.
After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
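This division of labour, where a learned model sets priorities and a classical planner computes the actual paths, resembles prioritized planning from the multi-agent path-finding literature. The sketch below is a simplification under stated assumptions (a small grid world, vertex conflicts only, no edge-swap checks): it plans one robot at a time in priority order, reserving each planned cell at each timestep so lower-priority robots must route around or wait:

```python
from collections import deque

def plan(start, goal, grid, reserved, max_t=50):
    """Breadth-first search in space-time: each state is (cell, timestep).

    Avoids any (cell, timestep) pair already reserved by a higher-priority
    robot. Edge-swap conflicts are ignored to keep the sketch short.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        pos, t, path = frontier.popleft()
        if pos == goal:
            return path
        if t >= max_t:
            continue
        r, c = pos
        # A robot may wait in place or move to a free adjacent cell.
        for nxt in [(r, c), (r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and (nxt, t + 1) not in reserved and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None  # no conflict-free path within the horizon

def plan_all(robots, grid):
    """Plan paths in priority order; earlier robots' paths become reservations."""
    reserved, paths = set(), {}
    for rid, start, goal in robots:  # robots assumed sorted by priority
        path = plan(start, goal, grid, reserved)
        paths[rid] = path
        if path:
            for t, cell in enumerate(path):
                reserved.add((cell, t))
    return paths

# Two robots whose routes cross at (0, 1); lower-priority B waits one
# timestep so higher-priority A can pass through first.
grid = [[0, 0, 0], [0, 0, 0]]
paths = plan_all([("A", (0, 0), (0, 2)), ("B", (1, 1), (0, 1))], grid)
```

Because the planner itself stays simple and deterministic, all of the adaptivity lives in the priority ordering, which is exactly the part the neural network is trained to supply.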
This combination of methods is key.
“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.
Overcoming complexity
Once the researchers trained the neural network, they tested the system in simulated warehouses that were different from those it had seen during training. Since industrial simulations were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.
On average, their hybrid learning-based approach achieved 25 percent greater throughput than traditional algorithms as well as a random search method, in terms of number of packages delivered per robot. Their approach could also generate feasible robot path plans that overcame congestion caused by traditional methods.
“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.
While their system is still far away from real-world deployment, these demonstrations highlight the feasibility and benefits of using a machine learning-guided approach in warehouse automation.
In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.
This research was funded by Symbotic.