Other Good Lubes
Over the years, we’ve tested dozens of different lubes, and some of them are pretty good if not exactly the best in any particular category. For those, we have this section.
LubeLife Water-Based Lubricant for $8: Not only does LubeLife make a stellar silicone lube, but their water-based lubes are great too. At the moment, I’m really enjoying their most recent water-based lube, the latest in a long and impressive line, which is surprisingly long-lasting for a water-based formula. It’s also super smooth and feels completely natural, and it never develops that awful sticky or tacky texture some water-based lubes get over time. When I tasted it, I noticed a very slight sweetness. While I haven’t used this lube during oral sex, I can definitely see it being a major asset there.
Playground Free Love Lube for $18: If you’re susceptible to UTIs, bacterial vaginosis (BV), or similar infections, then this is the lube for you, as it’s been scientifically proven to both reduce and prevent such vaginal issues. Free Love is also free of glycerin and fragrance, both of which can lead to yeast infections and general irritation. Although Free Love is extremely smooth and does an excellent job of reducing friction, its biggest selling point is that it can protect you from infections that some other lubes just can’t.
Dame Arousal Serum for $30: I’m not a huge fan of warming or tingling lubes and have yet to try one that makes me a true believer. But Dame’s Arousal Serum comes close. This is a warming, tingling, water-based lube that uses peppermint oil, cinnamon leaf oil, and ginger oil to provide some extra sensation during sex. If you have sensitive skin, I’d leave these products alone, but if you don’t and want to try a stimulating lube, this is the one I’d recommend. Try it on a non-genital area first to ensure you know how your skin will react.
Maude Shine Water-Based Lube for $25: This used to be our top pick. It offers a silky-smooth texture, though it’s on the thicker side for a water-based lube, and thicker water-based lubes typically last longer between applications. In the thumb test, this lube forms a slick but smooth cushion between your fingertips, which is a good indicator that it’s going to keep things nice and slick.
Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.
The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.
After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.
Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.
While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.
“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.
She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.
Designing better building blocks
Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.
“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” Gershenfeld says, based on prior work his lab has done on voxel assembly with NASA, Airbus, and Boeing.
To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.
Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.
“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.
To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.
“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.
The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.
Potential environmental benefits
They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.
For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent of the embodied carbon needed, respectively.
“There is still a potential viable option for a plastics-based voxel approach, we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.
In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.
These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.
“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.
The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.
The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.
Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.
“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.
This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.
When containers started out, they were meant to be ephemeral – stateless, disposable and data-light. But that’s all changed. As Gartner notes, use cases for containers have evolved to include analytics and artificial intelligence (AI) processing, and by 2028, it predicts 15% of on-premise production workloads will run in containers. That’s a 300% increase since 2022.
Now, while containers themselves retain all the benefits of ephemerality – rapidly reproducing, then dying back just as quickly to account for workload spikes – the storage attached to them cannot live by the same rules.
As enterprises move from proofs of concept to running a big chunk of production workloads in containers, the storage layer has become a pivot point. While the early days were focused on simple web scaling, containers have now moved into the realm of mission-critical databases, massive data science pipelines, and the power-hungry world of generative AI (GenAI).
The challenge lies in navigating key choices such as file versus block versus object storage, CSI versus container-native storage, and whether to go for a dedicated container storage platform.
Containerisation is lightweight virtualisation
Containerisation is a lightweight form of virtualisation. Unlike traditional virtual machines (VMs) that require a hypervisor and a full guest operating system (OS), containers share the host server’s OS. This makes them lighter, faster to scale and more portable. They are built on microservices principles that break monolithic applications into discrete, application programming interface (API)-linked components in a way that aligns with DevOps methodologies.
While several orchestrators exist (for example, Docker Swarm and OpenShift), Kubernetes is the market leader. It manages a cluster of nodes, on which pods run the containers. Clusters are groups of nodes managed by a control plane, which is where we find the API server, a scheduler for pod placement, controllers that maintain the desired state, and etcd, which stores cluster configuration and state.
As originally conceived, container storage was ephemeral, and data vanished when a pod was deleted. So, to support enterprise applications, Kubernetes developed persistent volumes (PV), which are attached to a cluster and decouple storage from compute to allow applications to remain portable while maintaining access to data.
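To make the PV/PVC decoupling concrete, here is a minimal sketch of a persistent volume claim, expressed as a Python dict and serialised to JSON. The claim name, storage class, and size are illustrative values only; the point is that the application asks for storage abstractly, and the cluster binds the claim to a matching persistent volume, so the pod never names a specific disk.

```python
import json

# A minimal PersistentVolumeClaim manifest as a Python dict. The metadata
# name, storage class "standard", and 10Gi size are hypothetical examples,
# not recommendations. The pod references the claim, not the underlying
# volume, which is what decouples storage from compute.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

Because the claim only states requirements, the same manifest works whether the volume behind it is an on-premise array LUN or a cloud block device.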
CSI vs container-native storage
Container Storage Interface (CSI) is a standard that allows storage suppliers – more than 130 drivers are available – to expose their systems to Kubernetes. CSI allows Kubernetes to trigger advanced data services such as snapshots, cloning and automated provisioning across block, file and object storage in on-premise and cloud environments.
CSI is essentially a “broker”. It is an industry-standard API that acts as a middleman, allowing Kubernetes to talk to external storage arrays. For example, when a developer requests storage via a persistent volume claim (PVC), the CSI driver tells the external storage box to carve out a piece of capacity and plug it into the container. The advantage is that you get to use the expensive, reliable enterprise storage you already own, but the storage is still “outside” the cluster, and if you move containers to a different cloud or datacentre, that external hardware might not be there.
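The broker pattern described above can be sketched as a toy model. All class and method names here are invented for illustration; a real CSI driver implements a gRPC specification (calls such as CreateVolume), not this API. The sketch only shows the shape of the flow: a claim comes in, the driver asks the external box to carve out capacity, and the claim is bound to the result.

```python
# Toy model of the CSI "broker" flow. Every name here is hypothetical.

class ExternalArray:
    """Stands in for an enterprise storage array outside the cluster."""
    def __init__(self):
        self.volumes = {}   # volume id -> size in GiB
        self._next_id = 0

    def carve_out(self, size_gib):
        # The array allocates a piece of capacity and returns its id.
        self._next_id += 1
        vol_id = f"vol-{self._next_id}"
        self.volumes[vol_id] = size_gib
        return vol_id


class ToyCsiDriver:
    """The middleman: turns a PVC-style request into an array operation."""
    def __init__(self, array):
        self.array = array
        self.bound = {}     # claim name -> array volume id

    def handle_claim(self, claim_name, size_gib):
        vol_id = self.array.carve_out(size_gib)
        self.bound[claim_name] = vol_id
        return vol_id


array = ExternalArray()
driver = ToyCsiDriver(array)
vol = driver.handle_claim("db-data", 100)
print(vol, array.volumes[vol])
```

Note that the capacity lives on the external array, not in the cluster, which is exactly the portability trade-off the article describes.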
Meanwhile, container-native storage is storage that lives inside the Kubernetes cluster. It is usually deployed as a set of containers itself. It takes specified drives attached to Kubernetes nodes and pools them together into one big virtual resource.
Container-native storage potentially has the advantage of portability – on-premise, in the cloud, and so on, by virtue of the virtualisation inherent – while CSI is more likely to tie a deployment to deployed storage arrays.
Container-native storage is location independent, so you can run the same setup on-premise or in the cloud. But it can consume central processing unit (CPU) and random access memory (RAM) from your Kubernetes nodes to manage the data, which may be a concern.
Do we need containers to be that portable?
CSI offers connection to big-iron fully featured storage, and container-native storage holds the promise of flexible deployment, portability, and so on. But is portability that important? Eric Phenix, who leads the engineering practice at analyst firm GigaOm, says not.
“Containers offer a compute abstraction layer that allows the application to be infrastructure agnostic, rather than a solution that is designed to make applications more portable,” he says.
Phenix argues that while containers make the code agnostic, deployment is another matter. “Unless a company is specifically a customer-facing instanced PaaS [platform as a service] where they need to run on every cloud, I don’t see the need to run the same workload on multiple clouds. Once things are deployed, they’re always messy to migrate,” he says.
And this “messiness” is almost always a data problem, according to Phenix. While the container image can move in seconds, the multi-terabyte persistent volume attached to it cannot.
James Brown, an analyst at GigaOm, points out that container-native storage is essentially software-defined storage and brings its own lock-ins. “Heavily integrated, container-native supplier platforms risk replacing hardware lock-in with software lock-in. Tying your architecture to proprietary in-cluster storage features creates massive migration hurdles, effectively breaking the core portability promise of Kubernetes,” he says.
So, the choice here comes down to just how portable you need things to be. Enterprises often use a hybrid approach: CSI to connect to massive, high-performance arrays for their heaviest databases; container-native storage for modern, distributed apps that need to be able to move without a “messy” data migration.
In 2026, choosing the correct storage protocol for container storage is all about playing in a “mixed economy”, with a Kubernetes cluster able to pull from all three formats simultaneously.
Block for high performance
Block storage presents data as a raw, unformatted volume – like a physical hard drive – that is attached to a single node at a time. In Kubernetes, this is typically handled via persistent volumes using the ReadWriteOnce (RWO) access mode.
Block storage can be in on-premise arrays or in the cloud, such as in Amazon Elastic Block Store (EBS), Google Persistent Disk, or Microsoft Azure Disk.
Block storage offers the lowest latency and highest input/output operations per second (IOPS) because there is no filesystem overhead between the application and the storage. That makes it ideal for databases where small, frequent updates happen at specific locations within files.
When it comes to the cons, most block storage cannot be mounted to multiple pods across different nodes simultaneously, and scaling usually requires resizing the volume and expanding the filesystem. Block storage is generally the most expensive, too.
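The single-node constraint of ReadWriteOnce block volumes can be illustrated with a small sketch: the volume behaves like a physical disk that can be attached to only one node at a time. This is a simplified model for illustration, not Kubernetes’ actual attach logic.

```python
# Sketch of why RWO block volumes constrain scheduling: one node at a time.

class BlockVolume:
    def __init__(self, name):
        self.name = name
        self.attached_to = None

    def attach(self, node):
        # Re-attaching to the same node is fine; a second node is rejected.
        if self.attached_to is not None and self.attached_to != node:
            raise RuntimeError(
                f"{self.name} is RWO and already attached to {self.attached_to}"
            )
        self.attached_to = node


vol = BlockVolume("pg-data")
vol.attach("node-a")        # first pod's node: fine
try:
    vol.attach("node-b")    # a pod on a second node: rejected
except RuntimeError as e:
    print(e)
```

This is why scaling a block-backed workload usually means resizing one volume rather than fanning reads and writes out across nodes.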
File for directory access
File storage provides a shared hierarchical namespace (folders and files) accessible over a network. In Kubernetes, it is the primary way to achieve ReadWriteMany, allowing multiple pods on different nodes to read and write to the same data.
It is also available in on-premise storage or cloud services such as Amazon Elastic File System (EFS), Microsoft Azure Files and Google Filestore.
File access is perfectly suited for horizontal scaling of web servers where all pods need access to the same assets, and most legacy applications are built to read/write to a standard directory structure. Compared to block access, network protocols like NFS or SMB introduce more latency, and at large scales (millions of files), traversing deep directory trees can become extremely slow. Meanwhile, handling concurrent writes across many pods can lead to file locking conflicts if not managed carefully.
Object for sizeable datastores
Object storage manages data as discrete objects in a flat namespace and is accessed via APIs (for example, S3 or Swift) rather than being “mounted” like a disk. It’s the cloud-native storage protocol, though it can run on-site, too. Examples include Amazon Simple Storage Service (S3), MinIO, Google Cloud Storage and Ceph RGW. Object storage can store petabytes of data without worrying about partition limits or disk sizes, and is usually the cheapest option for large-scale unstructured data (logs, images, backups).
Object storage is ideal for modern “cloud-native” apps that talk directly to storage via HTTP/HTTPS, bypassing the operating system’s filesystem layer entirely.
On the negative side, object storage offers high throughput but higher latency than block or file, making it generally the slowest option for transactional work. And objects are immutable: you can’t “edit” a single line in a file; you must re-upload the entire object to change it.
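The immutability trade-off can be shown with a minimal sketch of object-store semantics: a flat key namespace with whole-object put/get and no partial updates. (A real S3-compatible store adds metadata, versioning, multipart uploads and so on; none of that is modelled here.)

```python
# Minimal sketch of object-store semantics: flat namespace, whole-object I/O.

class ToyObjectStore:
    def __init__(self):
        self.objects = {}  # flat namespace: key -> bytes

    def put(self, key, data):
        self.objects[key] = data  # always replaces the whole object

    def get(self, key):
        return self.objects[key]


store = ToyObjectStore()
store.put("logs/app.log", b"line1\nline2\n")

# To "edit" line 2 we must download, modify, and re-upload everything:
body = store.get("logs/app.log").decode()
body = body.replace("line2", "line2-fixed")
store.put("logs/app.log", body.encode())
print(store.get("logs/app.log").decode())
```

For a multi-gigabyte object, that download-modify-reupload cycle is exactly why object storage lags block and file for small, frequent transactional updates.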
Storage protocol decision-making
In summary, block storage is expensive but the best performing, file storage is less costly but with scale restrictions, and object storage is great for huge capacity but also lags in performance terms. So, which one to choose? It’s a case of horses for courses, according to Tony Lock, director of engagement and distinguished analyst at Freeform Dynamics.
“In an ideal world, the choice of underlying storage – block, file or object – will likely depend on what the app is, where the organisation wishes to run it, and what its characteristics are in terms of size, number of containers, latency requirements, security, location, cost, etc,” he says.
Meanwhile, Whit Walters, field chief technology officer at GigaOm, believes S3 is winning the battle, but block has its place. He says: “The real story is protocol bifurcation inside AI pipelines. Object storage dominates the ingestion and data lake tier, offering exabyte-scale horizontal scaling with rich, customisable metadata that enables semantic discovery natively at the storage layer.
“Block storage still owns the inference hot path where vector databases demand 500,000+ IOPS, however.
“The emerging trend to watch is COSI, the Container Object Storage Interface, which aims to make object storage buckets first-class Kubernetes resources with standardised, declarative lifecycle management.”
CSI vs container-native in storage supplier platforms
All the big storage suppliers provide some form of platform or wrapper for container storage. These include Dell’s Container Storage Modules, HPE’s Ezmeral Runtime Enterprise, the Hitachi Kubernetes Service (HKS), NetApp’s Astra and Pure Storage’s Portworx.
What they all have in common is a means of managing container storage – and in some cases, data protection and more. Where they differ under the hood is that most are based around CSI, so they provide a layer from which to manage CSI drivers to their storage.
Some differ in that they provide their management functionality from within Kubernetes. Pure Storage’s Portworx, for example, lives entirely within Kubernetes but uses CSI as a “handshake” with external storage.
Meanwhile, HPE Ezmeral also runs in Kubernetes but accesses data via the CSI driver. NetApp’s Astra Datastore was container-native in a similar way to Portworx, but was discontinued in 2023.
While all the key storage suppliers offer products that can manage storage for containers, be sure to check the extent to which these are container-native or dependent on CSI. CSI connectivity may well be better suited to larger, more static environments, while container-native solutions can be best for more dynamic sets of workloads.
GigaOm’s Walters puts a finer point on it: “The Kubernetes tax is real, but it’s a trade-off. Container-native platforms run replication, dedupe and encryption on worker nodes. Ceph alone carries a 2-10% baseline CPU penalty per node just for cluster quorum, and that spikes hard during replica rebuilds.
“In GPU [graphics processing unit]-dense AI environments, where every cycle counts, offloading that work to dedicated array ASICs [application-specific integrated circuits] via an advanced CSI model keeps compute nodes clean. But in multicloud or edge scenarios without dedicated arrays, that CPU tax buys you topology-aware placement and self-healing automation that’s genuinely hard to replicate otherwise.”
There may also be performance considerations in terms of contention for resources, as well as questions about how they are administered.
Towards autonomous, agentic storage
As we look towards 2027, the focus is shifting from manual provisioning to policy-driven storage.
The ultimate goal is a system where the storage “senses” workload requirements. For example, if an AI training container spins up, the system automatically provisions high-throughput file storage, or if a database scales up, it gets low-latency block storage.
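The policy-driven idea above can be sketched as a simple function that chooses a storage class from declared workload characteristics instead of a human picking one by hand. The class names, keys and thresholds here are invented for illustration; a real system would drive this from observed telemetry rather than static labels.

```python
# Sketch of policy-driven storage selection. All class names and thresholds
# are hypothetical examples, not product features.

def pick_storage_class(workload):
    """Map a declared workload profile to a storage class name."""
    if workload.get("type") == "ai-training":
        return "high-throughput-file"   # e.g. parallel file storage
    if workload.get("latency_sensitive"):
        return "low-latency-block"      # e.g. NVMe-backed block
    if workload.get("size_tib", 0) > 10:
        return "capacity-object"        # e.g. S3-compatible object tier
    return "standard"


print(pick_storage_class({"type": "ai-training"}))      # high-throughput-file
print(pick_storage_class({"latency_sensitive": True}))  # low-latency-block
```

In a Kubernetes setting, the returned name would become the claim’s storage class, so the policy layer slots in front of the same PV/PVC machinery described earlier in the article.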
A jury was selected on Monday during the first day of trial for Musk v. Altman in a federal court in Oakland, California. Some of the jurors who were ultimately selected voiced concerns about Musk himself, as well as the AI technology at the core of the case, but assured the court they would set those concerns aside for the trial. The trial’s kickoff also set off a flurry of shenanigans outside the courtroom.
OpenAI CEO Sam Altman and president Greg Brockman were spotted in the security line inside the courthouse this morning, but Elon Musk was nowhere to be found. A few dozen journalists crammed into an overflow room to listen to an audio stream of the proceedings.
The goal today was to select nine jurors who could be fair and impartial in this case—an especially difficult challenge considering the main characters are some of the most high-profile tech executives in the world. Several potential jurors said they had negative opinions about Musk when questioned by Judge Yvonne Gonzalez Rogers and attorneys. But that didn’t necessarily disqualify them; only one juror was ultimately excused on the basis of their strong negative opinions regarding Musk.
“The reality is that many people don’t like him,” Gonzalez Rogers told the courtroom. She added that she believed Americans with negative feelings about Musk could still respect the integrity of the judicial process and decide the case fairly. The jury will help establish the core facts regarding whether Sam Altman and other defendants improperly steered OpenAI’s nonprofit venture away from its original mission, potentially violating the law in the process. But their verdict will be advisory—Gonzalez Rogers will have the final call.
The nine jurors who were ultimately selected represent quite a diverse group, including a painter, a former Lockheed Martin employee, and a psychiatrist. Some of them said they had negative opinions about artificial intelligence technology more broadly. In the end, however, everyone selected assured the court that their outside opinions about Musk and AI wouldn’t interfere with their ability to determine the facts of the case.
OpenAI’s attorney William Savitt said at a press briefing afterward that he was satisfied with the jury the court settled on.
“Mr. Altman, Mr. Brockman, and OpenAI are looking forward to presenting their case to that jury. They’re confident in their position and are looking forward to the facts being known,” Savitt told reporters. “The hurdle we think we need to get over is just to present the truth here. We’ve got a story about what happened that is consistent with the facts, it’s consistent with the documents, and we just want the jury to see that.”
Musk is already trying to win his case in the court of public opinion. On Monday morning, the billionaire used his social media platform X to boost a recent New Yorker investigation into Altman’s alleged deceptive business conduct. The story is weeks old, and the fact that Musk promoted it on the first day of the trial is no coincidence. Earlier this morning, OpenAI’s official newsroom account published a post on X calling Musk’s lawsuit an “attempt to undermine our work to ensure that artificial general intelligence benefits all of humanity.” Meanwhile, demonstrators were outside the court protesting the AI race altogether and calling for a pause on further development.
On Tuesday, lawyers for OpenAI and Elon Musk will deliver opening statements, and the first witness in the case will be called to the stand.