Tips and Advice for Buying Used or Refurbished Electronics



You can save money and help save the planet by buying used or refurbished electronics instead of new devices. Since most of the environmental impact of devices comes from the manufacturing phase, buying secondhand gear can reduce your carbon footprint. Do it right, and buying refurbished can feel much like buying new. This guide delves into what you need to know about refurbished terminology, offers tips on what to look for to snag yourself the best deals, and lists some of the best places to buy refurbished gadgets and used electronics.

You may also be interested in How to Buy Ethical and Eco-Friendly Electronics, The Best Used Tech to Buy and Sell, What to Think About Before Buying a Used Smartphone, and How to Responsibly Dispose of Your Electronics.

Updated March 2026: I’ve added some tips for buying, new links to refurbished sellers, and advice on what to do after you buy.

What Does Refurbished Mean?

There is no legal definition of refurbished. Some sellers prefer used, pre-loved, secondhand, reconditioned—the list continues. Refurbishment implies that the seller has tested the device and may have repaired and cleaned it, but the only way to be sure is to read the fine print and understand what the seller means by whatever term is used.

If you’re lucky, you may get an open-box device, one a buyer opened but never actually used. Sellers are not legally allowed to resell returned devices as new, so returns in every condition often end up sold through the same refurbished listings. At the other end of the scale, you may end up with a device that looks like it has survived the apocalypse and doesn’t work.

Tips for Buying Refurbished

I’ll recommend a few good places to buy refurbished electronics below, but first, let’s explore what you should look for in a seller and what you need to do to protect yourself when you buy.

While buying older electronics is often a great way to save money, there are a few things to keep in mind. It may make more sense to buy a discounted flagship phone from a couple of years ago than a brand-new budget phone, for example, but there are also some potential cons. Always consider software updates and ask:

  • How many more years of software updates will the product receive?
  • How long will it continue to get security updates?
  • What version of the software does it come with?
  • How easy is it to update the software?

Aside from working out what the seller means by refurbished, you should read the listing for any potential purchase very carefully and try to answer questions such as these:

  • Has it been tested, and does everything work?
  • Does it have a new battery or a guarantee about battery health? (This is crucial for old phones and laptops.)
  • Has it been wiped if a previous user set it up?
  • Is there any cosmetic damage like scratches or cracks? (Look for a transparent grading system.)
  • What is included? (Does it come with chargers, cables, manuals, and original packaging?)
  • Is there any warranty offered? (The longer the better.)
  • If there is a problem, how do returns work? Do you have to pay, and what is the return window?

If you’re uncertain about anything, it’s worth asking before you buy to avoid disappointment.


There are protections for credit card purchases, such as Section 170 of the Fair Credit Billing Act in the US or Section 75 of the Consumer Credit Act in the UK. To get the best chargeback protection, pay with a credit card directly and avoid going through a third-party payment service, such as PayPal, since these protections may not apply when an intermediary processes the payment. Some banks and credit card companies are better than others, so it’s worth researching their reputations and the protections they offer.

If you can inspect and test devices before you buy, do it. Otherwise, you should closely examine and thoroughly test any device you buy immediately when you receive it. Remember that there is a limited window to report any faults or issues with the condition and return an item. Always keep the box and packaging it arrived in at least until you are satisfied that you won’t need to return it.

You’ve done your initial tests and decided that you are keeping the refurbished device you bought, but there are still a couple of things you might consider doing before you start using it.

Best Places to Buy Refurbished Electronics


You have an enormous choice when buying refurbished electronics, so let’s break down your options.

We have had some good experiences buying refurbished devices from their original manufacturers, which makes sense since they know precisely how to test and repair their own devices. Manufacturers typically certify the refurbished devices they sell, and most offer at least a one-year warranty, but the savings vary; for example, Apple offers up to 15 percent off, while Dell offers up to 50 percent off.




Any List of the Best Gifts for Hikers Always Includes a Knife



After suggesting a wood-burning stove and a mini bellows, you should have seen this coming. What you need to complete the full-fire package is Cooking on Fire, a gorgeous book of recipes and techniques for cooking over an open flame. Cooking on Fire has a good mix of recipes, ranging from simple and delicious veggies to slow-cooked meats that require hours. There’s also plenty of background on different types of fires and cooking techniques, as well as all the equipment you might want to cook various things (for example: spits, forked sticks, cast-iron pans, and so on). It’s everything you—er, sorry, your outdoorsy friend—need to get started cooking on fire.

What I really want to try is the fire-inside-a-log technique pictured on the cover, but I haven’t gotten around to that yet. So far I’ve only had a chance to make the grilled pork belly, with grilled carrots and “Krabbelurer” griddle cakes for dessert. All of them were excellent, though perhaps that universal rule applies more here than with any other form of cooking: Your results may vary. In the end, though, this isn’t really a gift about cooking. It’s a gift to remind us all to slow down and take our time, with food and everything else.




Inside FDP – part 1: Understanding the problems facing NHS data | Computer Weekly



Inside FDP is an exclusive series of articles written by the former deputy director of data engineering at NHS England, Tom Bartlett, who led the 150-person team that built the Federated Data Platform (FDP), the controversial Palantir-supplied system linking data across the health and care service. His insights into the challenges facing NHS data, and the solutions available to resolve them, make essential reading for anyone who wishes to understand what’s really happening with FDP in the NHS.

Since I left NHS England in March I have been speaking publicly about the NHS Federated Data Platform (FDP). The response has been striking. Senior analysts, clinical leaders, healthtech founders and journalists keep asking variations of the same questions.

Why is the software platform from Palantir uniquely suited to this? What does FDP do that existing platforms cannot? Why can’t the NHS – or a UK-based software company – just build one itself? Why aren’t we using our existing investments? Is it really just an expensive data warehouse?

And underneath all of them, the question that matters most – what problem is FDP actually trying to solve?

The more I have these conversations, the more I realise that the answer has never been clearly stated in public.

The programme’s own communications have described FDP in terms of connecting vital health information across the NHS, helping staff deliver better care for patients and work more efficiently.

Critics have focused on the supplier and its controversial reputation. Commentators have discussed the procurement.

Almost nobody has named the underlying problem that the platform was designed to address, or the architectural vision that some of the most senior data leaders in NHS England have been working toward but have rarely articulated publicly.

This series of articles is an attempt to fill that gap.

The argument rests on a concept I call a “frontline-first” approach to data. The idea is not new. Elements of it exist in pockets across the NHS and in the thinking of people who have been working on this for years. But as a named concept with a clear definition, it has not been part of the public discourse. I think it should be.


The series has five parts. This first post defines the problem. Part 2 defines the Frontline-First concept and what it looks like in practice, including how FDP delivers it. Part 3 describes the architectural choice that makes FDP structurally different – the ontology, object types, and actions. Part 4 explains why the Canonical Data Model is the most important asset in the programme. Part 5 addresses the objections I hear most often, including whether the NHS needs a single platform at all.

How we got here

The current NHS data architecture was not designed. It accumulated.

When I started my first job in the NHS I worked at the Royal Cornwall Hospital in Treliske, in a massive warehouse office called the megashed. Elsewhere in the warehouse were thousands of paper patient notes, and if I looked out of the window at any time of day I would see porters carrying red waterproof satchels containing those notes between departments. Accessing a record was extremely slow and resource intensive. You literally had to go and get the paper from the warehouse.

Electronic patient records (EPR) improved on this by making notes available at the click of a mouse. That was the primary purpose – replace paper. The analytical use case crept in slowly afterwards, driven by NHS initiatives like Referral to Treatment targets, Payment by Results, and the national targets originally linked to achievement of Foundation Trust status. Each new national requirement added another reason to extract data from the EPR, but the EPR was never designed to support this. Analytics was retrofitted onto a system built for a different purpose.

Shared care records were a further retrofit. They allowed individual records held in one EPR to surface in view of a clinician working in a different organisation. This was the digital equivalent of the red waterproof satchel – one record, carried from one place to another. Useful, but still a point-to-point solution rather than an integrated system.

At no point did anyone design an NHS-wide integration of all NHS data across all care settings, all organisations, and all use cases. The ambition to do so stunned me when I heard it for the first time, and I knew I had to be a part of it.

That ambition is what FDP represents. It is not another retrofit. It is the first attempt to build the integrated foundation that the NHS has been accumulating workarounds in the absence of, for 30 years.

Understanding this history matters because it explains how the following problems came to exist, and why they have persisted despite decades of investment in NHS data infrastructure.

The problems that Frontline-First is designed to solve

The NHS has several interconnected data problems that have persisted for decades. They are well known individually but rarely discussed as a connected picture. Before explaining what Frontline-First means, it is worth naming them together, because the case for FDP only makes sense once you can see how they reinforce each other. FDP was designed to address all of these problems. But the argument for how it does so, which begins in Part 2 of this series, only lands if the problems are understood first.

The feedback gap

Every patient interaction generates structured records that are used directly in the clinical process and also flow upward through NHS Trust data warehouses, through national submissions, and into the analytical infrastructure the centre uses to monitor performance.

A large proportion of what clinicians are asked to record, particularly items captured for national returns, performance metrics, coding for Payment by Results and secondary uses, gives them little in return that is locally useful.

The data leaves the point of care and the person who recorded it never sees what happened to it. Often they are asked by a performance manager to correct a record for reasons that seem low priority to the clinician. The consequence is that when workloads are pressured, clinicians will not prioritise low-value recording. Where they see local value in recording well, they do – medication prescribing, for instance, where accuracy has immediate clinical consequences.

But for items recorded primarily for downstream consumption, where the system gives no useful feedback, recording quality varies. The incentive to get it right is weak when the recording feels like an administrative overhead rather than a clinically useful act. This creates gaps and inconsistencies in the data that compound through every downstream use.

The shadow IT problem

Where formal systems fall short of the operational workflow a team actually follows, staff build something that does. Spreadsheets tracking waiting lists. Whiteboards in nurse stations. Word documents containing discharge proposals. Emails coordinating theatre schedules. Printed patient lists updated with biro on ward rounds. Daily phone calls from a ward coordination administrator to wards establishing bed state, recorded on a spreadsheet.

This is not laziness or poor governance. It is staff putting in place a workable, efficient solution to a gap the formal system left. The work has to happen, the EPR does not support it, so the team builds a tool that does.

Some years ago I did an audit at one Trust with the Caldicott Guardian – the person responsible for protecting patient confidentiality in health and care organisations – and we found over a thousand non-approved data sources of exactly this kind.

No information governance official could eliminate shadow IT without bringing the clinical service that depends on it to a halt. And few individual items of shadow IT are ever prioritised for the investment needed to promote them to formal systems.

On the other side of the same gap, the clinical transformation team in IT who could change the EPR configuration to capture what the frontline actually needs are largely bypassed. Clinical teams would rather build a spreadsheet that fits their process now than wait months for a configuration change that may not match what they need. This is one reason shadow IT persists even in Trusts that have invested heavily in EPR.

The consequence is that the real operational data – the data that reflects what is actually happening on the ward – stays locked in these local tools and never enters the formal data estate. It is not linkable to the data warehouse, to national submissions, to the research environment, or to any other Trust. Data becomes more valuable as it connects to other data. Shadow IT severs that connection at the source.

The inaccessible record

Some of the most clinically meaningful data in the NHS is recorded diligently inside the formal system but is functionally lost to everyone, including the team that recorded it.

In one clinical team I observed, outcome scores in mental health from DIALOG (a set of questions where patients are asked to rate their satisfaction) were recorded as free text in generic progress note fields, buried in a mountain of clinical notes, never accessible to the Trust’s data warehouse, difficult for the clinical team to resurface at the next multi-disciplinary team (MDT) meeting, and invisible to national returns like the Mental Health Services Data Set (MHSDS).


Discharge letters from a mental health consultant to a GP contain clinical reasoning, risk assessments, medication rationale and follow-up intentions that are more clinically useful than anything in the structured record. But they sit as free text or PDF attachments, inaccessible to any downstream analytical process.

The data exists. A clinician thought it mattered enough to write down. But because it was entered as narrative rather than structured data, it is invisible to every downstream process. This is not shadow IT. It is data that is technically inside the formal system but recorded in a form that no other part of the system can use.
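The gap between the two recording forms can be sketched in a few lines. The field names, note text, and score format below are invented for illustration (real EPR schemas and DIALOG capture differ), but the mechanism is the same: a downstream extract only sees named structured fields, so a score buried in narrative is invisible to it.

```python
import re

# Two records of the same kind of DIALOG satisfaction rating.
# Field names and note text are hypothetical.
structured_entry = {"patient_id": 101, "dialog_satisfaction": 5}
free_text_entry = {
    "patient_id": 102,
    "progress_note": (
        "Seen at home with CPN. Sleeping better. DIALOG satisfaction "
        "rated 5/7. Plan: review meds at next MDT."
    ),
}

def warehouse_query(records):
    """A downstream extract pulls only named structured fields."""
    return [r["dialog_satisfaction"] for r in records if "dialog_satisfaction" in r]

# The warehouse sees patient 101's score; patient 102's vanishes.
print(warehouse_query([structured_entry, free_text_entry]))

# Recovering the buried score needs bespoke text parsing that no
# national return or Trust warehouse routinely performs.
match = re.search(r"rated (\d)/7", free_text_entry["progress_note"])
print(int(match.group(1)) if match else None)
```

The second patient’s score is recoverable only by someone who knows it is there and writes a parser for that one note format, which is precisely the work that never happens at scale.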

The timeliness problem

Clinicians often do not record their data on formal systems in real time. I have seen queues in a care team’s office for the only operational PC on a Friday afternoon. Occasionally, clinicians leave the queue when their shift ends, before they’ve had the chance to input their week’s records.

When data is sent up the line, national data lands months after the clinical event. By the time a metric is published, the Trust has already lived through the period and moved on. Changes to the scope of national collections take months or sometimes years to implement, so if a new clinical pathway emerges or a coding practice changes, or if a new question comes up, the national data model is still measuring the old world long after the frontline has moved on.

Worse, national returns generally do not allow retrospective revision. Data quality issues discovered after submission (corrections, late entries, updated coding) are rarely reflected in the published datasets. When the clinician who went home on the Friday manages to get their data into the system the following week, it is too late to be included in the national figures, because the data has already been sent. The month’s submission with the coding error becomes the permanent version used for planning, funding allocation and research. The error is baked in.

The integration gap

Frontline users suffer detriment from problems that would be addressable if information was better integrated. Data recorded at the point of care is not enriched by data from elsewhere in the system before decisions are made.

The clinician makes the next decision based on what they personally know and what is in front of them, not on what the system knows. The A&E clinician does not see the mental health history. The consultant does not see how their outcomes compare to peers. The discharge coordinator does not see what community services have arranged. In every case, the problem is the same – data exists somewhere in the system that would improve the decision being made, but it does not reach the person making the decision at the time they need it.

Insights without context

When national or regional analysis does reach the frontline, it often arrives without the operational context that would make it accurate.

NHS England’s productivity tools send Trusts headline figures identifying financial opportunities based on national benchmarks. One Trust I am aware of received a figure of £89m. When the financial turnaround team started working through it, they found that £7.8m of an apparent £8m opportunity in women and children’s health was clinical negligence insurance premiums, a cost the Trust has no ability to influence. The headline looked actionable. The reality required hours of decomposition by people with operational knowledge before anyone could distinguish genuine opportunity from noise.

The analysis was produced centrally, without the context that would have filtered out the irrelevant before it reached the Trust. The frontline becomes a validation function for centrally produced insight, rather than a recipient of useful intelligence.

The technology barrier

Where clinical leadership teams have had embedded analysts – people who sit with the clinical team and understand the context – the work is far superior. These analysts contribute directly in the meeting rather than the service manager having to note the question, go back to the data team, wait for a response, and return two weeks later with a spreadsheet nobody has time to interpret.

But even embedded analysts are tethered to the back office. They still have to return to the data warehouse and business intelligence (BI) stack to get their answers, because the technology sits behind them rather than in front of the clinical team.

For this reason many Trusts centralise their analyst teams. The staffing model follows the technology architecture even if the outcomes are better with embedded analysts.

The invisible error

The data does not announce that it is wrong. The numbers look plausible. The dashboard is green. Nothing in the Integrated Care Board’s (ICB’s) dataset or the national submission flags the coding quirk that double-counted three urology cases, or the rota model that was never updated after two consultants left.

These problems do not show up as errors. They show up as slightly different numbers within the range of normal variation. An analyst at ICB or national level, querying data extracted weeks ago from a system they have never used, has no context for what the values mean operationally and no way to distinguish a genuine outlier from a local recording practice. The data is passing validation while being wrong in ways that only someone at the point of care would recognise.

This is what makes the other problems so hard to fix – the people with the authority to invest in solutions cannot see the problems from where they sit.

How these problems connect

These are not eight separate problems. They reinforce each other in ways that make each one harder to fix in isolation.

Two things happen in parallel. Clinicians record inconsistently because the data they are asked to capture gives them little back. And staff build shadow IT because the formal systems do not support their workflows. Both have the same effect – the analytical layer works from an incomplete picture.

Because the picture is incomplete and late, national and ICB-level decisions are based on data that does not reflect reality. Because nobody at those levels knows the data is wrong, no corrective signal flows back to the source.

The damage to the reliability of data used for decisions does not stop at Trust level. At ICB level, commissioning decisions are based on data that is months old and semantically inconsistent across Trusts, because each Trust codes and submits differently.

Population health management – the work of identifying at-risk patients before they become expensive acute admissions – is built on linked datasets assembled from extracts that arrived at different times with different definitions. The frail elderly patient known to community services, mental health and the GP may not appear as a single coherent person in the ICB’s linked data because the linking is probabilistic and the extracts were taken on different days. The intervention that would have prevented the A&E attendance never happens.
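The failure mode is easy to reproduce in miniature. This is a deliberately crude sketch, not how real linkage engines work (they use far more sophisticated probabilistic models and more identifiers): the same patient, captured in two extracts taken on different days under different local conventions, falls below the link threshold and becomes two people.

```python
from difflib import SequenceMatcher

def match_score(a, b):
    """Toy probabilistic match: average similarity of name and DOB fields."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    dob_sim = 1.0 if a["dob"] == b["dob"] else 0.0
    return (name_sim + dob_sim) / 2

# Hypothetical extracts of the same frail elderly patient, recorded
# differently by two services (surname-first convention; DOB keyed
# with day and month transposed).
community_extract = {"name": "John Smith", "dob": "1941-02-03"}
mental_health_extract = {"name": "Smith, J.", "dob": "1941-03-02"}

THRESHOLD = 0.8  # an assumed cut-off for declaring a link

score = match_score(community_extract, mental_health_extract)
print(f"score={score:.2f}, linked={score >= THRESHOLD}")
```

The score lands well under the threshold, so the linked dataset carries two partial patients instead of one coherent person, and the intervention that depended on seeing the whole picture is never triggered.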

At national level, policy is made on data that does not reflect reality. Cohorts of patients to be shielded are incomplete. Elective recovery targets are set on Referral to Treatment data that is months old. Funding formulae that allocate resources to ICBs depend on activity data with enough coding variation across regions that some areas are systematically overfunded and others underfunded. National programmes launch without accurate baselines, so progress gets claimed or denied on numbers that do not reliably reflect what patients are experiencing. Research is slower than it should be because researchers spend months cleaning and validating data before they can begin analysis.

All of this is downstream of the same root cause. If the data were right at source, because the clinician had the means and a reason to record it carefully, every downstream use would improve as a side effect.

The ICB’s linked dataset would be more reliable. The national submission would be more timely. The funding formula would be less distorted. The research would be faster.

You do not fix commissioning data by building a better ICB warehouse. You fix it by giving the clinician a reason to record well at the point of care. Everything downstream follows.

These problems are addressable. Not with better dashboards, not with another warehouse, and not by asking clinicians to try harder. The next article describes what a Frontline-First approach to data looks like, and why FDP is the first platform designed to deliver one.



Source link

Continue Reading

Tech

Microsoft explains value of E7 usage-based pricing | Computer Weekly

Published

on

Microsoft explains value of E7 usage-based pricing | Computer Weekly


The Microsoft 365 E7 licensing model was among the big focus areas during the earnings call for the company’s latest quarterly results.

Microsoft reported revenue of $82.9bn, an increase of 18% over last year’s third-quarter results.

Microsoft 365 Commercial cloud revenue increased 19%, and its Productivity and Business Processes business posted revenue of $35bn, an increase of 17% over the same period last year.

While the headline figure is its growth in cloud revenue, the company is attempting to shift to a value-based software licensing model, tied to a user-based licence base, with usage-based pricing to cover additional usage.

This additional usage is positioned by Microsoft executives as a way to show that the greater use of the software is generating additional business value for the customer.

The Microsoft 365 E7 licence becomes generally available on 1 May. The licensing model has been introduced to help fund the investments Microsoft is making to support artificial intelligence (AI) and the broader use of agentic AI across its product portfolio.

The E7 plan bundles base usage rights into seat-based pricing. According to Microsoft, it offers customers a convenient way to purchase consumption packs tied to seats or agents. Beyond the base usage covered by the user licence, customers are charged on pure consumption-based pricing, tied to token usage.

Over the next three to five years, Microsoft’s mix of consumption versus traditional seat-based models will evolve. It anticipates that customers will increasingly adopt hybrid models like E7, balancing predictability with the flexibility of consumption-based pricing. It expects IT budgets to adapt to this new model, driven by business outcomes and the value derived from token usage.
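The hybrid structure described above can be sketched as a simple bill calculation. Every figure here (seat price, included token allowance, overage rate) is invented for illustration; Microsoft has not published E7 pricing in this form.

```python
def e7_monthly_bill(seats, seat_price, included_tokens_per_seat,
                    tokens_used, price_per_1k_tokens):
    """Hybrid bill: a fixed per-seat fee plus a metered overage charge.

    All parameters are hypothetical; this only models the shape of
    seat-plus-consumption pricing, not actual E7 terms.
    """
    base = seats * seat_price
    included = seats * included_tokens_per_seat  # pooled allowance
    overage_tokens = max(0, tokens_used - included)
    overage = (overage_tokens / 1000) * price_per_1k_tokens
    return base + overage

# 500 seats at an assumed $60/seat with 100k tokens included per seat;
# heavy agent usage pushes the organisation past its pooled allowance,
# so the meter starts running on top of the licence fee.
bill = e7_monthly_bill(seats=500, seat_price=60.0,
                       included_tokens_per_seat=100_000,
                       tokens_used=80_000_000,
                       price_per_1k_tokens=0.01)
print(f"${bill:,.2f}")
```

The design point for IT budgets is that the second term is unbounded: the seat fee is predictable, but the overage scales with however much agent activity users generate.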

When asked about the shift in licensing, chief financial officer Amy Hood said: “As we go through using a model that’s been historically thought of as a per-seat business, suddenly, if you think about getting work done and being more productive, it’s thinking about being a seat or a worker plus an agent.”

She described the shift as a “licence business plus a consumption business”. “It’ll still have that per-seat licence logic, but it’ll also have a meter, just like you see in Azure.”

What this means for IT departments is that they will procure E7 licences, but will also need to account for usage costs on top.

CEO Satya Nadella said this model will be rolled out to all software that is licensed on a per-user basis. “Any per-user business of ours, whether it’s productivity, coding, security, will become a per-user and usage business,” he said.

Given the intensity of usage the company has experienced, his response to the question on licensing changes implies that Microsoft needs to somehow fund its investment in infrastructure. “Where are these dollars going to come from?” said Nadella.

He argued that Microsoft business customers, who see their costs decrease or revenue increase as they roll out AI agents, will drive greater usage. “It may not be, by the way, pure seat coverage-type of motions, like in the past,” said Nadella. “This is more about getting intense users and intense usage, and that’s what we’re focused on.”


