Tech
From OLED to Budget LCDs, These Are Our Favorite Computer Monitors
Once you’ve decided on a size, there are a number of other important aspects of your next monitor to consider. Some of these factors may matter more for certain uses—for example, gamers generally care more about higher frame rates than office workers do—but they’re all handy to know going in.
Resolution: The bigger the monitor, the more it will benefit from a higher resolution, which lets app windows take up less space while staying legible. Most monitors today are 1080p (1920 x 1080), 1440p (2560 x 1440), 4K (3840 x 2160), or even 5K (5120 x 2880).
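A quick way to see why bigger panels want more pixels is pixel density (pixels per inch); the back-of-the-envelope calculation below shows the same 4K resolution spread over two common screen sizes:

```python
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch for a given resolution and diagonal screen size."""
    return hypot(width_px, height_px) / diagonal_in

# The same 4K resolution looks noticeably sharper at 27 inches than at 32.
print(round(ppi(3840, 2160, 27)))  # ~163 PPI
print(round(ppi(3840, 2160, 32)))  # ~138 PPI
```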
Refresh rate: This refers to how many times the display can refresh the picture per second, measured in hertz (or Hz). A higher refresh rate makes all movement and animation look smoother because you’re seeing more information every second. For productivity, 60 Hz is probably enough, but gamers will generally want a panel that can hit at least 120 or 144 Hz. 240 Hz has become the new standard for high-end gaming monitors, but there are now extreme models that go up to 500 Hz and beyond. You’ll need a computer powerful enough to maintain a high frame rate to take advantage of these refresh rates, and you usually have to enable the higher rate in your operating system’s display settings.
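To put those numbers in context, each step up in refresh rate shrinks how long a single frame stays on screen; the quick calculation below shows the frame time for the rates mentioned above:

```python
# Milliseconds each frame is displayed at common refresh rates.
for hz in (60, 120, 144, 240, 500):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")
```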
Panel type: Monitors usually have some type of LCD (liquid-crystal display) panel. Three of the most popular options—twisted nematic (TN), vertical alignment (VA), and in-plane switching (IPS)—are all types of LCD panel, and all use TFT (thin-film-transistor) technology. Each is popular for different reasons: IPS for color, VA for contrast, and TN for speed, with higher refresh rates and faster response times. IPS has become especially popular as its refresh rates have improved. Mini-LED uses a more advanced backlight made up of many local dimming zones, which lets the display control brightness more accurately and efficiently. These tend to be the brightest monitors you can buy. OLED (organic light-emitting diode) panels take that even further, letting the monitor control individual pixels, including turning them off entirely to create extreme contrast. They’ve become especially popular in gaming monitors. Think about what’s most important to you (great color? thin form factor? maximum brightness?) to choose the best panel type for your needs.
Nvidia G-Sync/AMD FreeSync support: A gamer-specific feature, these technologies let the monitor adjust its refresh rate to match the frame rate of the game you’re playing, which reduces screen tearing without affecting performance. G-Sync is made by Nvidia and FreeSync comes from AMD, and while FreeSync monitors can usually work with most modern Nvidia graphics cards, G-Sync doesn’t work with AMD cards, so make sure everything you have is compatible when buying.
HDR support: This isn’t crucial for productivity, but if you watch a lot of media or play games, it’s nice to have. Just like on TVs, HDR dramatically expands the range of brightness and color a screen can reproduce, leading to more vivid pictures. Content still has to support HDR, but many sources do these days, so it’s often worth springing for. You’ll find lots of monitors that claim HDR support (such as DisplayHDR 400 certification), but in almost all cases you’ll need a Mini-LED or OLED screen to get proper HDR.
Port availability: A crucial but easy-to-overlook factor is what kind of ports the monitor has for connecting your devices. Most typically come with one or two HDMI inputs, and a DisplayPort input, which will cover most needs, but it’s always a good idea to check what your setup needs. More expensive monitors can function as USB hubs, letting you connect all your peripherals and accessories directly to your monitor. Conversely, check out our Best USB Hubs guide if you need to expand your computer’s port options without paying for a more expensive monitor.
Built-in KVM switch: A KVM (Keyboard, Video, Mouse) switch lets you share your monitor, keyboard, and mouse between two different computers or source inputs (like a gaming console). If you use one setup for both a work and a personal computer, or for a computer and a gaming console, a KVM switch built into the monitor means you can flip everything between the two devices without needing an external KVM switch.
Tech
Researchers launch smoke-sensing drones that one day could fight wildfires
Plumes of smoke drifted up from a fire steadily taking over a 30-acre prairie at Cedar Creek Ecosystem Science Reserve, north of the Twin Cities. Amid the haze, five black drones zipped around.
More than 150 feet below the flying robots, research student Nikil Krishnakumar raised the controller in the air.
“It’s all autonomous now,” he said. “I’m not doing anything.”
The aerial robotic team’s mission: examine the smoke from the prescribed burn and send the data to a computer on the ground. The computer then analyzes the smoke data to understand the fire’s flow patterns, Krishnakumar said.
The University of Minnesota project is the latest research into using artificial intelligence to detect and track wildfires. The work has become more urgent as climate change is expected to make wildfires, like those that devastated Manitoba this summer, larger and more frequent.
NOAA’s Next-Generation Fire System consists of two satellites 22,000 miles above the equator that detect new sources of heat and report them to local National Weather Service stations and its online dashboard. Earlier this year, the satellites were credited with spotting 19 fires in Oklahoma and preventing $850 million in structure and property damage, according to the agency.
In Minnesota, Xcel has installed tower-mounted, AI-equipped high-definition cameras near power lines in Mankato and Clear Lake. Thirty-six more are planned. When a fire is detected, local fire departments are notified.
Krishnakumar and other members of the U’s research team performed their 11th trial at the U’s field station in East Bethel on Friday, with notable improvements over their previous attempts.
The first-generation drones crashed several times during previous field tests, Krishnakumar said. The team upgraded sensors for better data collection and autonomous steering, and improved the drones’ propulsion by making them bigger and fitting them with better propellers.
“The big picture is one day these drones can be used to understand where the wildfires go, how they behave and to perform large-scale surveillance of wildfires,” Krishnakumar said. “The major challenge we’re trying to understand is how far these smoke particles can be transported and the altitude at which they can go.”
Understanding the behavior of particles like embers can help firefighters prevent wildfires from spreading, said Yue Weng, another researcher on the team.
Though the project has a way to go before it can be used for large-scale wildfires, the research represents a significant step toward using fully autonomous drone systems for emergency response and scientific research missions, said Jiarong Hong, a professor in the University of Minnesota’s Department of Mechanical Engineering. The work has been published on the arXiv preprint server.
This year, 1,200 wildfires have been recorded in Minnesota so far, according to the state Department of Natural Resources. On a smaller scale, the technology could also be used to better manage prescribed burns, Hong said. Between 2012 and 2021, prescribed burns that went out of control caused 43 wildfires nationwide, according to the Associated Press.
“To characterize and measure particle transport in the real field is very challenging. Traditionally, people do small-scale lab experiments and study this at a fundamental level,” Hong said. “Such an experiment doesn’t capture the complexity involved in the real field environment.”
Smoke changes direction with the wind. Deploying multiple drones—with one at the center managing the four around it—enables them to navigate in the air without human intervention, Hong said.
The 11-pound drones were custom-built by the students to autonomously collect particle data. Future improvements to the project include collecting more data and extending the battery life of the drones. The drones are currently able to operate in the air for about 25 minutes, less in colder temperatures, Hong said.
“We have drones flying out at different heights, so we can actually measure the particle composition at different elevations at the same time,” Hong said.
“Particles are in a very irregular shape and some of them are porous and have varying levels of density. But we have been able to characterize their morphology and shape for the very first time.”
More information:
Nikil Krishnakumar et al, 3D Characterization of Smoke Plume Dispersion Using Multi-View Drone Swarm, arXiv (2025). DOI: 10.48550/arxiv.2505.06638
2025 The Minnesota Star Tribune. Distributed by Tribune Content Agency, LLC
Tech
An Anarchist’s Conviction Offers a Grim Foreshadowing of Trump’s War on the ‘Left’
By the standards of the San Francisco Bay Area’s hard left, Casey Goonan’s crimes were unremarkable. A police SUV partially burned by an incendiary device on UC Berkeley’s campus. A planter of shrubs lit on fire after Goonan unsuccessfully tried to smash a glass office window and throw a firebomb into the federal building in downtown Oakland.
But thanks to a series of communiques in which Goonan claimed to have carried out the summer 2024 attacks in solidarity with Hamas, and to the East Bay native’s anarchist beliefs, federal prosecutors claimed Goonan “intended to promote” terrorism on top of a felony count for using an incendiary device. Goonan’s original charges notably did not contain terrorism counts. In late September, US District Court Judge Jeffrey White sentenced Goonan, whom the judge called “a domestic terrorist” during the hearing, to 19 and a half years in prison plus 15 years of probation. Prosecutors also asked that Goonan be sent to a Bureau of Prisons facility that houses a Communications Management Unit, a highly restrictive assignment reserved for what the government claims are “extremist” inmates with terrorism-related offenses or affiliations.
Although Goonan’s case began under the Biden Administration, it offers a glimpse of the approach the Department of Justice may take in President Donald Trump’s forthcoming offensive against the “left,” formalized in late September in National Security Presidential Memorandum 7 (NSPM-7), an executive order targeting anti-fascist beliefs, opposition to Immigration and Customs Enforcement raids, and criticism of capitalism and Christianity as potential “indicators of terrorism.”
In addition to Goonan’s purported admiration for Hamas—a designated terrorist organization since 1997—and cofounding of True Leap, a tiny anarchist publisher, the 35-year-old, who holds a doctorate in African-American Studies, has another trait being targeted by the Trump administration and its allies: Goonan identifies as a transgender person. While NSPM-7 cites “extremism on migration, race, and gender” as an indicator of “this pattern of violent and terroristic tendencies,” the Heritage Foundation has attempted to link gender-fluid identity to mass shootings and is urging the FBI to create a new, specious domestic terrorism classification of “Transgender Ideology-Inspired Violent Extremism,” or TIVE.
The executive order, meanwhile, directs the American security state’s sprawling post-9/11 counterterrorism apparatus to be reoriented away from neo-Nazis, Proud Boys, white nationalists, Christian nationalists, and other extreme right-wing actors, who have been responsible for the overwhelming majority of political violence in recent decades, and toward opponents of ICE, anti-fascists, and critics of the administration writ large. Along with potentially violent actors, NSPM-7 instructs federal law enforcement to scrutinize nonprofit groups and philanthropic foundations that fund organizations espousing amorphously defined ideologies, from “support for the overthrow of the United States Government” to “hostility towards those who hold traditional American views on family, religion, and morality.”
“NSPM-7 is the natural culmination of ‘radicalization theory’ as the basis for the American approach to counterterrorism,” says Mike German, a retired FBI agent who spent years infiltrating violent white supremacist groups and quit the Bureau in response to its post-9/11 shift in terrorism strategy. German explored radicalization theory’s trajectory in his 2019 book, Disrupt, Discredit and Divide: How the New FBI Damages Democracy.
Tech
What are the storage requirements for AI training and inference? | Computer Weekly
Despite ongoing speculation around an investment bubble that may be set to burst, artificial intelligence (AI) technology is here to stay. And while an over-inflated market may exist at the level of the suppliers, AI is well-developed and has a firm foothold among organisations of all sizes.
But AI workloads place specific demands on IT infrastructure and on storage in particular. Data volumes can start big and then balloon, in particular during training phases as data is vectorised and checkpoints are created. Meanwhile, data must be curated, gathered and managed throughout its lifecycle.
In this article, we look at the key characteristics of AI workloads, the particular demands of training and inference on storage I/O, throughput and capacity, whether to choose object or file storage, and the storage requirements of agentic AI.
What are the key characteristics of AI workloads?
AI workloads can be broadly categorised into two key stages – training and inference.
During training, processing focuses on what is effectively pattern recognition. Large volumes of data are examined by an algorithm – likely part of a deep learning framework like TensorFlow or PyTorch – that aims to recognise features within the data.
This could be visual elements in an image or particular words or patterns of words within documents. These features, which might fall under the broad categories of “a cat” or “litigation”, for example, are given values and stored in a vector database.
The assigned values provide further detail. So, for example, “a tortoiseshell cat” would comprise discrete values for “cat” and “tortoiseshell” that make up the whole concept and allow comparison and calculation between images.
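As a rough sketch of the kind of comparison and calculation those stored vectors allow, the snippet below measures cosine similarity between made-up embedding vectors (the values and tiny dimensionality are purely illustrative; real embeddings are produced by the trained model and run to hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors: closer to 1.0 means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings for three items.
tortoiseshell_cat = np.array([0.9, 0.8, 0.1, 0.0])
black_cat         = np.array([0.9, 0.1, 0.2, 0.0])
litigation_doc    = np.array([0.0, 0.1, 0.9, 0.8])

print(cosine_similarity(tortoiseshell_cat, black_cat))      # high: both cats
print(cosine_similarity(tortoiseshell_cat, litigation_doc)) # low: unrelated
```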
Once the AI system is trained on its data, it can then be used for inference – literally, to infer a result from production data that can be put to use for the organisation.
So, for example, we may have an animal tracking camera and we want it to alert us when a tortoiseshell cat crosses our garden. To do that it would infer the presence or not of a cat and whether it is tortoiseshell by reference to the dataset built during the training described above.
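A minimal sketch of what that inference step could look like in PyTorch is shown below; the model file, label set and camera-frame handling are hypothetical, since no specific implementation is described here:

```python
import torch

# Hypothetical: a classifier exported with torch.jit.save() after training.
model = torch.jit.load("cat_classifier.pt")
model.eval()

LABELS = ["no cat", "cat", "tortoiseshell cat"]  # assumed label set

def classify(frame: torch.Tensor) -> str:
    """Run one camera frame (shape [3, H, W], values 0-1) through the model."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))  # add a batch dimension
        return LABELS[int(logits.argmax(dim=1))]

# e.g. raise an alert only when classify(frame) == "tortoiseshell cat"
```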
But, while AI processing falls into these two broad categories, it is not necessarily so clear-cut in real life. Training will always be done on an initial dataset, but after that, while inference runs as an ongoing process, training is likely to become perpetual too, as new data is ingested and new inferences result from it.
So, to labour the example, our cat-garden-camera system may record new cats of unknown types and begin to categorise their features and add them to the model.
What are the key impacts on data storage of AI processing?
At the heart of AI hardware are specialised chips called graphics processing units (GPUs). These do the grunt processing work of training and are incredibly powerful, costly and often difficult to procure. For these reasons their utilisation rates are a major operational IT consideration – storage must be able to handle their I/O demands so they are optimally used.
Therefore, data storage that feeds GPUs during training must be fast, so it’s almost certainly going to be built with flash storage arrays.
Another key consideration is capacity. That’s because AI datasets can start big and get much bigger. As datasets undergo training, the conversion of raw information into vector data can see data volumes expand by up to 10 times.
Also, during training, checkpointing is carried out at regular intervals, often after every “epoch” or pass through the training data, or after changes are made to parameters.
Checkpoints are similar to snapshots, and allow training to be rolled back to a point in time if something goes wrong so that existing processing does not go to waste. Checkpointing can add significant data volume to storage requirements.
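As an illustrative (hypothetical) example of what a checkpoint holds, a simple PyTorch training loop might save and restore state like this at each epoch; at scale, this data is written for every shard of a parallelised model, which is where the extra storage volume comes from:

```python
import torch

def save_checkpoint(model, optimizer, epoch, path):
    """Persist everything needed to resume training from this point."""
    torch.save(
        {
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
        },
        path,
    )

def load_checkpoint(model, optimizer, path):
    """Roll training back to the saved state if something goes wrong."""
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    return ckpt["epoch"]
```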
So, sufficient storage capacity must be available, and will often need to scale rapidly.
What are the key impacts of AI processing on I/O and capacity in data storage?
The I/O demands of AI processing on storage are huge. It is often the case that the model data in use simply will not fit into a single GPU’s memory, so it is parallelised across many of them.
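A minimal sketch of that parallelisation, assuming a recent PyTorch, a multi-GPU node and a launch via torchrun (the model here is a small placeholder, not a realistic workload):

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")        # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Placeholder model; a real foundation model would be far larger.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).cuda()

# FSDP shards parameters, gradients and optimiser state across the GPUs,
# so a model too large for one device's memory can still be trained.
model = FSDP(model)
```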
Also, AI workloads and I/O differ significantly between training and inference. As we’ve seen, the massive parallel processing involved in training requires low latency and high throughput.
While low latency is a universal requirement during training, throughput demands may differ depending on the deep learning framework used. PyTorch, for example, stores model data as a large number of small files while TensorFlow uses a smaller number of large model files.
The model used can also impact capacity requirements. TensorFlow checkpointing tends towards larger file sizes, plus dependent data states and metadata, while PyTorch checkpointing can be more lightweight. TensorFlow deployments tend to have a larger storage footprint generally.
If the model is parallelised across numerous GPUs, this affects checkpoint writes and restores, which means storage I/O must be up to the job.
Does AI processing prefer file or object storage?
While AI infrastructure isn’t necessarily tied to one or other storage access method, object storage has a lot going for it.
Most enterprise data is unstructured and exists at scale, and it is often what AI has to work with. Object storage is supremely well suited to unstructured data because of its ability to scale. It also comes with rich metadata capabilities that can help with data discovery and classification before AI processing begins in earnest.
File storage stores data in a tree-like hierarchy of files and folders. That can become unwieldy to access at scale. Object storage, by contrast, stores data in a “flat” structure, by unique identifier, with rich metadata. It can mimic file and folder-like structures by addition of metadata labels, which many will be familiar with in cloud-based systems such as Google Drive, Microsoft OneDrive and so on.
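For illustration, writing an object and its descriptive metadata to an S3-compatible store might look like the following with the widely used boto3 client; the bucket name, key, local file and metadata fields are all hypothetical:

```python
import boto3

# Assumes credentials and endpoint are configured in the environment;
# works against AWS S3 or any S3-compatible object store.
s3 = boto3.client("s3")

with open("frame_0001.jpg", "rb") as f:          # hypothetical local file
    s3.put_object(
        Bucket="training-data",                  # hypothetical bucket
        Key="cats/frame_0001.jpg",               # "folders" are just key prefixes
        Body=f,
        Metadata={                               # rich descriptive metadata
            "label": "tortoiseshell cat",
            "source": "garden-camera-03",
        },
    )
```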
Object storage can, however, be slow to access and lacks file-locking capability, though this is likely to be of less concern for AI workloads.
What impact will agentic AI have on storage infrastructure?
Agentic AI uses autonomous AI agents that can carry out specific tasks without human oversight. They are tasked with autonomous decision-making within specific, predetermined boundaries.
Examples would include the use of agents in IT security to scan for threats and take action without human involvement, to spot and initiate actions in a supply chain, or in a call centre to analyse customer sentiment, review order history and respond to customer needs.
Agentic AI is largely an inference-phase phenomenon, so the compute infrastructure does not need to match training-type workloads. Having said that, AI agents will potentially access multiple data sources across on-premises systems and the cloud, which means they will touch the full range of storage types and performance tiers.
But, to work at its best, agentic AI will need high-performance, enterprise-class storage that can handle a wide variety of data types with low latency and with the ability to scale rapidly. That’s not to say datasets in less performant storage cannot form part of the agentic infrastructure. But if you want your agents to work at their best you’ll need to provide the best storage you can.