Gemini in Google Home Keeps Mistaking My Dog for a Cat

A cat jumped up on my couch. Wait a minute. I don’t have a cat.

The alert about the leaping feline is something my Google Home app sent me when I was out at a party. Turns out it was my dog. This notification came through a day after I turned on Google’s Gemini for Home capability in the Google Home app. It brings the power of large language models to the smart home ecosystem, and one of the most useful features is more descriptive alerts from my Nest security cameras. So, instead of “Person seen,” it can tell me FedEx came by and dropped off two packages.

In the two weeks since I allowed Gemini to power my Google Home, I’ve enjoyed its ability to detect delivery drivers the most. At the end of the day, I can ask in the Google Home app, “How many packages came today?” and get an accurate answer. It’s nice to know that it’s FedEx at the door, per my Nest Doorbell, and not a salesperson offering to replace my windows. Yet for all its smarts, Gemini refuses to understand that I do not have a cat in my house.

Person Seen

Screenshot: Google Home via Julian Chokkattu

Google isn’t the only company souping up its smart-home ecosystem with AI. Amazon recently announced a feature on its Ring cameras called Search Party that will use a neighborhood’s worth of outdoor Ring cameras to help someone find their lost dog. (I don’t need to stretch to imagine something like this being used for nefarious purposes.)

In early October, Google updated the voice assistant on its smart-home devices—some of which have been around for a decade—by replacing Google Assistant with Gemini. For the most part, the assistant is better. It can understand multiple commands in a spoken sentence or two, and you can very easily ask it to automate something in your home without fussing with the Routines tab in the Google Home app. And when I ask it a simple question, it generally gives me some kind of a reliable answer without punting me to a Google Search page.

Smarter camera alerts are indeed more helpful at a glance. Most of the time, I dismissed Person Seen notifications because they’re often just people walking by my house. Now the alerts actually say “Person walks by,” which gives me greater confidence to dismiss those. Some alerts accurately say “Two people opened the gate,” though sometimes it will hallucinate: “Person walks up stairs,” when no one actually did. (They just walked on the sidewalk.) It has fairly accurately noted when UPS, FedEx, or USPS are at the door, which is nice to know when I’m busy or out and about, so I can make sure to check for a package when I get home—no need to hunt through alerts.

But with my indoor security cameras, Gemini routinely says I have a cat wandering the house. It’s my dog. Even in my Home Brief—recaps at the end of the day from Gemini about what happened around the home—Gemini says, “In the early morning, a white cat was active, walking into the living room and sitting on the couch.” It’s amusing, especially considering my dog hates cats.

CatDog

Screenshot: Google Home via Julian Chokkattu

You would think then that I would be able to just tell this smarter assistant, “Hey, I don’t have a cat. I have a dog,” and it would adjust its models and fix the error. Well, I did exactly that. In the Ask Home feature, you can talk to Gemini and ask it anything about the home. This is where you can ask it to set up automations, for example. I asked it to turn on the living room lights when the cameras detect my wife or me arriving home, and it understood the action. It even guessed that I wanted the lights to come on only when arriving at night, despite me forgetting to mention that.




Do Lightsaber Blades Have Mass?



When you think of Star Wars, you think of lightsabers. Right? What could be better, from a movie-making standpoint, than a futuristic sword that lets you create awesome fencing duels like in old-time Errol Flynn swashbucklers? (So much better than watching Stormtroopers fire their blasters into walls and ceilings and anything else except their targets.)

Lightsabers come in a cosmic rainbow of hues (color-coded blue or green for good guys, red for bad) and a variety of shapes. There’s even a double-bladed version in Phantom Menace. (I don’t want to start a nerd fight—yet—but the best lightsaber battle in the canon has to be the “Duel of the Fates” in that movie, thanks to the skills and scariness of Darth Maul actor Ray Park.)

So … exactly what are lightsabers? Of course, they aren’t real, so nobody really knows how they work. Even the characters in the movies seem a little confused about it. In Phantom Menace, Anakin calls it a “laser sword.” Yeah, he was a kid, but both Din Djarin (the Mandalorian) and Luke Skywalker also refer to it as a laser sword—though I suspect Luke was being sarcastic.

Anyway, that’s just wrong: It can’t be a laser. For starters, laser beams are invisible from the side, so you wouldn’t see a thing unless you staged the duels in a disco with fog machines to scatter the beams. Second, the beams go on forever; they don’t have an end. Third, laser beams can’t clank together like swords—they’d just pass through each other when you try to parry.

But what is it then? We can greatly narrow the possibilities by asking if the blade has mass. If it’s some kind of light (as you’d think from the name “lightsaber”), then the answer is no—light, or electromagnetic radiation, has no mass. If we can determine that it has mass, then it’s not light.

This is a question we can answer, by analyzing how lightsabers move when you wave them around. In other words, it’s time for some physics!

Mass and Motion

Don’t confuse mass and weight. Mass is a measure of how much “stuff” (protons, neutrons, and electrons) is in an object, and weight is the amount of gravitational force acting on an object. Here we want to see what impact the mass of a lightsaber would have on its motion. But let’s start with something simpler.

Instead of a lightsaber, say we have a “lightball” made of the same buzzy substance. Since it’s symmetrical, we can describe its motion without worrying about rotation. If we want to move this ball back and forth, we call on Newton’s second law of motion. This says the acceleration (a) of an object depends on its mass (m) and the amount of force (F) applied to it.
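As a toy illustration of that law (the masses and force here are invented for the example, not measurements from the films), rearranging F = ma gives the acceleration our "lightball" would get from a given push:

```python
def acceleration(force_newtons: float, mass_kg: float) -> float:
    """Newton's second law, rearranged: a = F / m."""
    if mass_kg <= 0:
        # F = ma gives no answer for a massless object -- massless things
        # (like light) are not described by this equation at all.
        raise ValueError("mass must be positive for F = ma to apply")
    return force_newtons / mass_kg

# A hypothetical 10-newton push on balls of different masses:
print(acceleration(10.0, 1.0))  # a 1 kg ball accelerates at 10 m/s^2
print(acceleration(10.0, 0.5))  # a 0.5 kg ball accelerates at 20 m/s^2
```

The point of the sketch: for the same push, a lighter object accelerates more, so how quickly a blade whips around under a given force tells you something about its mass.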




How a cloud-native architecture handles persistent storage | Computer Weekly



Cloud-native, or containerised, applications are now mainstream. As many as 82% of enterprises now have Kubernetes in production, according to the Cloud Native Computing Foundation (CNCF). That is up from 66% in 2023. And a full 98% of organisations have at least some cloud-native applications, the industry body says.

But moving applications to cloud-native environments does not just mean creating new code. It also means adapting infrastructure. Compute, networking and data storage all need to work with container environments. By no means can all systems do this out of the box, especially when it comes to on-premises hardware.

At the same time, enterprise IT architects need to consider the requirements of legacy applications and virtual machines (VMs) that are not being updated.  And enterprises will want to make the most efficient use of their storage hardware, regardless of their application environments.

Moving to containers means adapting a technology that was not designed for persistent storage to handle business-critical data. 

Stateless states

Containerised applications started out as stateless, or ephemeral. The designers never intended containers to hold persistent data. They expected that microservices or containerised applications would use no non-volatile storage and discard the contents of memory, and even their settings, once they had completed their tasks. 

Instead, containerised applications rely on an external data store, usually a database or cache. 

There are advantages to this approach. These include simpler deployment, easier scaling, fault tolerance and recovery, and application portability. But many business applications, if not most, need persistent data. 

“Most business applications require storage. In reality, unless you’re converting Fahrenheit to Celsius and back, you’re storing something somewhere,” says Dan Ciruli, vice-president and general manager for cloud native at Nutanix. 

And the need to work with persistent data is all the more important, as enterprises look to containers as an alternative to conventional virtual machines.

But this means rethinking the way applications work. And it requires IT architects to update their storage systems to support modernised, cloud-native applications. This can be directly, where array manufacturers support containers, or through a control plane such as Nutanix or Pure Storage’s Portworx.

Almost inevitably, changes are being driven by AI, as enterprises look to support its data-heavy workloads in modern, cloud-native environments. But there are other drivers, too, including a trend to move virtualised applications to containers and the need for cost controls.

“Kubernetes might be over a decade old, but it’s continuing to evolve as AI transforms the way we handle data. Already, Kubernetes has moved beyond the days when it was built only for ephemeral, stateless applications,” says Michael Cade, global field chief technology officer at Veeam Software.

“Today, stateful applications such as databases, machine learning pipelines and streaming systems are now being treated as first-class citizens [in containerised environments] and have been given the specialised tools they need to thrive.” 

Storage connections

Connecting storage to Kubernetes, though, relies on support from both application developers and hardware suppliers. 

The main way to connect storage to container environments is through the container storage interface (CSI). CSI needs to be supported directly by the storage provider, be that the hardware manufacturer, a cloud service, or a software-defined storage (SDS) supplier. 

As the CNCF’s Kubernetes page notes: “CSI was developed as a standard for exposing arbitrary block and file storage systems to containerised workloads on container orchestration systems like Kubernetes.” CSI allows third-party storage providers to write, and deploy, plug-ins for storage without changing the core Kubernetes code. 

SDS technologies, for their part, also use CSI drivers, but run on commodity hardware or hyper-converged infrastructure rather than dedicated storage arrays. The category also includes open source options, such as OpenEBS, Longhorn and Ceph. 

“Every environment needs a storage back end, with a CSI driver that connects it to Kubernetes. It’s up to the storage provider to provide the CSI driver,” says Nigel Poulton, an author and independent expert in Kubernetes and containers. 

“Most CSI drivers create at least one StorageClass that maps to a tier of storage and its capabilities. For example, a CSI driver might create a StorageClass called ‘fast-replicated’ that maps to high-speed flash storage automatically replicated to a remote location. Any application using this class automatically gets that tier and set of capabilities,” he adds. 
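A hypothetical ‘fast-replicated’ class of the kind Poulton describes might be sketched as follows. The provisioner name and parameter keys here are illustrative placeholders, not a real CSI driver; real parameters are defined by each vendor’s driver.

```yaml
# Illustrative StorageClass; "csi.example-vendor.com" is a made-up driver name,
# and the parameters are hypothetical vendor-specific settings.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-replicated
provisioner: csi.example-vendor.com
parameters:
  tier: flash
  replication: remote
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# An application then claims storage from that class; the CSI driver
# handles provisioning, so the app never names the underlying hardware.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-replicated
  resources:
    requests:
      storage: 100Gi
```

The claim references the class by name only, which is the abstraction the article describes: developers ask for a tier of capability, and the driver maps it to physical storage.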

This level of abstraction is highly useful for application developers, as they no longer have to worry about the physical capabilities of the storage system. That is handled by the CSI drivers. 

“The CSI drivers enable us to give access to storage from the containerised application, but [for firms to] still administer the storage the way they do the storage that’s running under their VMs,” says Nutanix’s Ciruli. “And that’s a big advantage.” He also sees customers installing Kubernetes on bare metal clusters. 

This also maintains separation between the Kubernetes workloads and the underlying storage hardware. On paper at least, enterprises can move their containerised applications to a different platform or supplier, or new storage hardware, without rewriting code and with minimal disruption. 

In practice, large-scale moves of Kubernetes applications between platforms are still relatively rare. Enterprises tend to develop applications to run on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, or local hardware, depending on their business requirements.  

Application portability, supported by CSI, is a useful insurance, even if there are enough differences between platforms to suggest caution.

“We really don’t need to become an expert in how EBS [Elastic Block Store] works versus Azure disk, or local SSD [solid-state drives] and how that works,” says Greg Muscarella, general manager for Portworx at Pure Storage. “If you have to manage those things, it becomes somewhat complex. Companies tend to focus on a single cloud environment.”

Few organisations, he suggests, have code where they could “push a button and move it to a different cloud”, not least because of differences between storage architectures from both hardware suppliers and cloud providers. However, enterprises are moving more applications to cloud-native environments. And this increasingly includes databases and applications that previously ran in conventional virtual machines.

New platforms 

One of the most significant trends in application modernisation is to move both virtual machines and database-driven applications to containers. Cost, avoiding supplier lock-in and the need to consolidate on fewer platforms are all drivers. 

“The line between ‘containerised’ and ‘virtualised’ is blurring,” suggests Veeam’s Cade. “For a long time, containers and VMs were seen as two separate siloes. But as stateful applications have developed, and since VMs are essentially a typical stateful workload, we’re seeing a significant rise in businesses running them directly within Kubernetes using platforms such as Red Hat OpenShift Virtualization.” 

Poulton agrees. He sees more organisations moving virtualised workloads to containers, via tools such as KubeVirt. But, although organisations are porting over virtualised applications, and databases, IT architects need to be sure that all the application’s requirements are met by the storage layer. 

“Databases have much more demanding requirements, including ordered startup, replication, automated failover and backup,” he cautions. “The two biggest changes are ensuring a CSI driver exists for the storage system and potentially deploying an operator.” 

A Kubernetes operator provides details about a database’s specific requirements, and sometimes storage, too. Operator support is essential to allow databases to deliver enterprise workloads over Kubernetes. Again, the operator supports the modern application goal of separating the code from the storage array or cloud storage service. 

Percona, for example, provides operators for MySQL, PostgreSQL and MongoDB, as well as Everest. “The operators are basically the game changers,” says Kate Obiidykhata, the company’s general manager for cloud native. “They encode the human DBA knowledge into the software, and you have all those most important resilience components, backup, failover, replication and upgrades automated.”

Operators, she adds, help enterprises to adopt hybrid architectures or multicloud strategies, allowing data portability without the need to rewrite applications. But workloads that operate on VMs will not automatically run on containers, she says. Firms will need to plan, and test, their deployments with care. 

“There are specific playbooks that you should apply and methodologies that are obviously different from the classic database setup on VMs,” says Obiidykhata. “But it’s all doable, and many companies are now running those databases on Kubernetes. They just have a different playbook to mitigate those issues.”

Firms also need to factor in how they run their ported applications in production. Development, understandably, attracts much of the attention. But how systems run from “day two” onwards is critical. This includes storage provisioning and tiering, as well as backup, recovery and security.

The CSI drivers take care of much of the hard work, but enterprises are likely to look to invest in new hardware, or even storage from suppliers focused on cloud-native environments, to ease the migration to containers.

“This is usually by deploying new storage architectures, either via new storage products from existing vendors, but increasingly by engaging with new vendors,” says Poulton. Enterprises, he adds, might still be running older hardware systems, but they are unlikely to use them for Kubernetes. 




The Asus Zenbook 16 Delivers Great Performance in an Otherwise Mediocre Laptop



So, what’s not to like? Well, early compatibility problems slowed the initial uptake of Snapdragon X, and the CPU’s integrated graphics performance turned out to be pretty terrible. And to date, powerful onboard AI features just haven’t proven important, as most AI workloads are still being done in the cloud. With the second-generation X2, Qualcomm set out to deliver on the original promise of faster performance.

But what exactly does “faster” mean? As with most claims in the PC computing space, it’s all about the benchmarks. On the Zenbook A16, the tests I ran indeed showcased exemplary performance from the X2 Elite Extreme, in some of the most widely used benchmarking tools, namely Geekbench 6 and Cinebench 2024. (I don’t have enough competitive Cinebench 2026 results to make wide comparisons yet on that benchmark.)

The performance boost on Geekbench is particularly striking, with the A16 scoring 50 to 100 percent faster than competing systems from AMD and Intel. It’s even faster than Apple’s M4 MacBook Pro, the last Mac for which I have comparable benchmark scores. That Mac did beat the Asus on the Cinebench benchmark, though not by much, and the Asus now stands solidly in second place in my testing archive.

Graphics performance is much better than in previous generations of Snapdragon X chips, with frame rates quadrupling on average, depending on the test. That’s a dramatic and much-needed improvement for the CPU, and while no one will accuse the A16 of being a gaming rig, it does at least make for a workable experience with less taxing games and graphics-heavy workloads.

Beige Belies Performance

Photograph: Chris Null

I’m happy enough with how the Snapdragon X2 Elite Extreme performs to sign off on its performance claims, but there’s a lot more to the Zenbook A16 than its CPU.

Under the hood, the Snapdragon X2 Elite Extreme X2E94100 CPU is complemented by 48 GB of RAM and a 1-TB SSD. The 16-inch touchscreen offers a solid resolution of 2880 x 1800 pixels, and it’s incredibly bright. A weight of 2.9 pounds is impressive (if not unheard of) for the 16-inch category, and at 0.65 inches at its thickest, the machine is svelte and quite portable. Asus’s Ceraluminum technology (now with added magnesium) is used in the machine’s lid, base, and keyboard frame. That helps keep it thin and light, though the screen shimmied more than I expected when adjusted or touched.


