Coffee is the original biohack and the nation’s most popular productivity tool. As we adjust to the changeover to daylight saving time, the caffeine-addicted WIRED Reviews team is writing about our favorite coffee brewing routines and devices. Today, contributor Brad Bourque pays homage to his manual espresso maker. Look out for more stories in this series about other WIRED writers’ favorite brewing methods.
For me, coffee is as much a nerdy obsession as it is a practical necessity. I dislike maintenance, and I prefer simplicity, but I also need my coffee to be bold and interesting. For years, I used a kettle and Aeropress, which were easy to keep clean and tucked away in a crowded cabinet. My roommates at the time really appreciated that. But when I got a place of my own, I wanted something more substantial, if also still dead simple. The Flair Signature, a manual espresso maker, seemed like an obvious choice. It still sits proudly on my counter in all its stainless steel glory, occupying a permanent spot by my sink.
Where larger, electric espresso machines generate the pressure and heat needed for espresso inside their massive housings, the Flair takes a different approach. A large lever sits atop a small stack of brewing equipment, and you use that lever to create the bars of pressure necessary to get espresso. There’s a chamber for your grounds and another atop it for hot water. Fill them up in the correct order, pull down on the handle, guided by the handy pressure gauge, and watch in delight as thick, crema-topped espresso drips out the bottom.
There are other crucial pieces to this puzzle, and I’ve fully committed to the bit by opting for a gooseneck kettle and a hand burr grinder, chosen for their simplicity and consistency. Coffee enthusiasts should instantly recognize the Stagg EKG kettle from Fellow, and yes, mine is draped in green and yellow reminiscent of my favorite soccer team, thank you for noticing. The 1ZPresso JX-Pro S isn’t particularly fancy, but it’s easy to clean and consistent, and it came highly recommended by Reddit, though I’ll admit I’ve been tempted by the Comandante C40, a hand grinder that costs more than the rest of my setup combined.
The entire workflow is thankfully almost silent, a blessing on quiet and/or hungover Sunday mornings. I can throw some Steely Dan on the record player, fire up the kettle, and start turning the hand grinder as I take care of my other morning chores. While it seems straightforward, it’s a process that has a surprising number of variables to tweak, and I feel them firsthand every time I pull a shot. Each minor adjustment to the grind or water temperature creates a cascading set of changes to both the process and the end result. It’s a daily chase for unattainable perfection that I’m well familiar with after using the Aeropress for so long, and I find it deeply satisfying when I feel like I’ve nailed it. Knowing I was fully responsible for that great first sip gives me a bigger boost in the morning than any amount of caffeine could.
After suggesting a wood-burning stove and a mini bellows, you should have seen this coming. What you need to complete the full-fire package is Cooking on Fire, a gorgeous book of recipes and techniques for cooking over an open flame. Cooking on Fire has a good mix of recipes, ranging from simple and delicious veggies to slow-cooked meats that require hours. There’s also plenty of background on different types of fires and cooking techniques, as well as all the equipment you might want to cook various things (for example: spits, forked sticks, cast iron pans, and so on). It’s everything you—er, sorry, your outdoorsy friend—need to get started cooking on fire.
What I really want to try is the fire-inside-a-log technique pictured on the cover, but I haven’t gotten around to that yet. So far I’ve only had a chance to make the grilled pork belly, with grilled carrots and “Krabbelurer” griddle cakes for dessert. All of them were excellent, though of course, perhaps that universal rule applies more so here than with any other form of cooking: Your results may vary. In the end, though, this isn’t really a gift about cooking. It’s a gift to remind us all to slow down and take our time, with food and everything else.
Meta is continuing to invest aggressively to meet its technology infrastructure requirements, with datacentre expansion and supply chain deals to secure components for future capacity. The company’s latest quarterly earnings filing shows it has embarked on a strategy of signing multi-year cloud contracts, driving $107bn in contractual commitments for Q1 2026.
For the quarter that ended in March 2026, Meta posted revenue of $56.3bn, a 33% increase from the same quarter in 2025.
The company’s forecast for capital expenditure, including principal payments on finance leases, has increased by $10bn due to component price increases and additional datacentre costs, putting CapEx in the range of $125bn to $145bn.
Chief financial officer Susan Li said: “Our investments will support our training needs for future models, and most importantly, provide us with the inference capacity necessary to deliver personal and business agents to billions of people around the world, along with several other AI product experiences we’re developing.”
Responding to a question during the earnings call about balancing model training versus product launches and the potential impact on Meta’s 2027 capital expenditure, CEO Mark Zuckerberg said the company is moving towards greater capabilities and scaling of AI models. “We have the research team, which is focused on scaling increasingly intelligent models with capabilities for the specific things that we’re focused on, which are business and personal agents,” he said.
Beyond model development, Zuckerberg said: “We have our next set of more advanced models in training now. And that work will continue. I don’t think we’re going to be done with that anytime soon.”
He emphasised the significance of Meta AI models in product development. “The product team is really unlocked to be able to build things on top of our models because we now have a very strong model,” said Zuckerberg.
When Li was asked how the company uses large language models in its ad business to direct adverts to users, she said serving ads directly from them was not viable: “The size and complexity would make them too cost-prohibitive.”
Instead, Li said the way Meta uses large language models is to transfer knowledge to smaller, more lightweight models. “The inference models are bound by strict latency requirements since they need to find the right ad within milliseconds, and that has, again, historically prevented us from meaningfully sizing up – to scale up their size and complexity,” she added.
Li said Meta plans to tackle this scaling issue with the introduction of an adaptive ranking model later this year, scaling model complexity to a trillion parameters. “We made advances in the model architecture and co-design the system with the underlying silicon, so it maintains the sub-second speed that is required to serve ads at scale,” she added.
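To make the approach Li describes a little more concrete, here is a minimal sketch of teacher-to-student knowledge distillation for a latency-bound ranker, assuming a PyTorch setup. The model shape, loss weighting and feature sizes are illustrative assumptions, not Meta’s production stack; the point is only that the large model’s scores supervise a much smaller model that can respond within a millisecond-scale budget.

```python
# Illustrative sketch only: distilling a large "teacher" model's knowledge into a
# small, latency-bound "student" ranker. Sizes, names and loss weights are
# assumptions for the example, not Meta's actual systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRanker(nn.Module):
    """Small student model that must score a (user, ad) candidate within a strict latency budget."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single relevance logit per candidate
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend the hard-label loss with a soft logit-matching loss against the frozen teacher."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    soft = F.mse_loss(student_logits / temperature, teacher_logits.detach() / temperature)
    return alpha * hard + (1 - alpha) * soft

# Toy training step with random data standing in for real candidate features.
features = torch.randn(256, 32)
labels = torch.randint(0, 2, (256,)).float()
teacher_logits = torch.randn(256)            # pretend these came from the large model
student = TinyRanker(n_features=32)
optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)

optimiser.zero_grad()
loss = distillation_loss(student(features), teacher_logits, labels)
loss.backward()
optimiser.step()
```

The student stays small enough to meet the millisecond serving constraint Li mentions, while the teacher’s behaviour is folded in offline during training rather than at serving time.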
Commenting on Meta’s strategy, Forrester vice-president research director Mike Proulx said: “Meta’s future‑facing AI ambitions are being underwritten almost entirely by the company’s legacy business: advertising inside social media apps. There’s no material AI revenue yet.
“The question is whether Meta’s core can continue to act as a cash cow while the company reduces headcount and diverts focus toward AI,” he said. “If Meta’s ad engine slows, the market’s margin for patience shrinks fast. Meta’s slight dip in daily active users is already beginning to raise eyebrows. Q2 will tell us if it’s really just a blip or the start of a trend.”
Inside FDP is an exclusive series of articles written by the former deputy director of data engineering at NHS England, Tom Bartlett, who led the 150-person team that built the Federated Data Platform (FDP), the controversial Palantir-supplied system linking data across the health and care service. His insights into the challenges facing NHS data, and the solutions available to resolve them, make essential reading for anyone who wishes to understand what’s really happening with FDP in the NHS.
Since I left NHS England in March I have been speaking publicly about the NHS Federated Data Platform (FDP). The response has been striking. Senior analysts, clinical leaders, healthtech founders and journalists keep asking variations of the same questions.
Why is the software platform from Palantir uniquely suited to this? What does FDP do that existing platforms cannot? Why can’t the NHS – or a UK-based software company – just build one itself? Why aren’t we using our existing investments? Is it really just an expensive data warehouse?
And underneath all of them, the question that matters most – what problem is FDP actually trying to solve?
The more I have these conversations, the more I realise that the answer has never been clearly stated in public.
The programme’s own communications have described FDP in terms of connecting vital health information across the NHS, helping staff deliver better care for patients and work more efficiently.
Critics have focused on the supplier and its controversial reputation. Commentators have discussed the procurement.
Almost nobody has named the underlying problem that the platform was designed to address, or the architectural vision that some of the most senior data leaders in NHS England have been working toward but have rarely articulated publicly.
This series of articles is an attempt to fill that gap.
The argument rests on a concept I call a “frontline-first” approach to data. The idea is not new. Elements of it exist in pockets across the NHS and in the thinking of people who have been working on this for years. But as a named concept with a clear definition, it has not been part of the public discourse. I think it should be.
The series has five parts. This first post defines the problem. Part 2 defines the Frontline-First concept and what it looks like in practice, including how FDP delivers it. Part 3 describes the architectural choice that makes FDP structurally different – the ontology, object types, and actions. Part 4 explains why the Canonical Data Model is the most important asset in the programme. Part 5 addresses the objections I hear most often, including whether the NHS needs a single platform at all.
How we got here
The current NHS data architecture was not designed. It accumulated.
When I started my first job in the NHS I worked at the Royal Cornwall Hospital in Treliske, in a massive warehouse office called the megashed. Elsewhere in the warehouse were thousands of paper patient notes, and if I looked out of the window at any time of day I would see porters carrying red waterproof satchels containing those notes between departments. Accessing a record was extremely slow and resource intensive. You literally had to go and get the paper from the warehouse.
Electronic patient records (EPR) improved on this by making notes available at the click of a mouse. That was the primary purpose – replace paper. The analytical use case crept in slowly afterwards, driven by NHS initiatives like Referral to Treatment targets, Payment by Results, and the national targets originally linked to achievement of Foundation Trust status. Each new national requirement added another reason to extract data from the EPR, but the EPR was never designed to support this. Analytics was retrofitted onto a system built for a different purpose.
Shared care records were a further retrofit. They allowed individual records held in one EPR to surface in the view of a clinician working in a different organisation. This was the digital equivalent of the red waterproof satchel – one record, carried from one place to another. Useful, but still a point-to-point solution rather than an integrated system.
At no point did anyone design an NHS-wide integration of all NHS data across all care settings, all organisations, and all use cases. The ambition to do so stunned me when I heard it for the first time, and I knew I had to be a part of it.
That ambition is what FDP represents. It is not another retrofit. It is the first attempt to build the integrated foundation whose absence the NHS has spent 30 years working around.
Understanding this history matters because it explains how the following problems came to exist, and why they have persisted despite decades of investment in NHS data infrastructure.
The problems that Frontline-First is designed to solve
The NHS has several interconnected data problems that have persisted for decades. They are well known individually but rarely discussed as a connected picture. Before explaining what Frontline-First means, it is worth naming them together, because the case for FDP only makes sense once you can see how they reinforce each other. FDP was designed to address all of these problems. But the argument for how it does so, which begins in Part 2 of this series, only lands if the problems are understood first.
The feedback gap
Every patient interaction generates structured records that are used directly in the clinical process and also flow upward through NHS Trust data warehouses, through national submissions, and into the analytical infrastructure the centre uses to monitor performance.
A large proportion of what clinicians are asked to record, particularly items captured for national returns, performance metrics, coding for Payment by Results and secondary uses, gives them little in return that is locally useful.
The data leaves the point of care and the person who recorded it never sees what happened to it. Often they are asked by a performance manager to correct a record for reasons that seem low priority to the clinician. The consequence is that when workloads are pressured, clinicians will not prioritise low-value recording. Where they see local value in recording well, they do – medication prescribing, for instance, where accuracy has immediate clinical consequences.
But for items recorded primarily for downstream consumption, where the system gives no useful feedback, recording quality varies. The incentive to get it right is weak when the recording feels like an administrative overhead rather than a clinically useful act. This creates gaps and inconsistencies in the data that compound through every downstream use.
The shadow IT problem
Where formal systems fall short of the operational workflow a team actually follows, staff build something that does. Spreadsheets tracking waiting lists. Whiteboards in nurse stations. Word documents containing discharge proposals. Emails coordinating theatre schedules. Printed patient lists updated with biro on ward rounds. Daily phone calls from a ward coordination administrator to wards establishing bed state, recorded on a spreadsheet.
This is not laziness or poor governance. It is staff putting in place a workable, efficient solution to a gap the formal system left. The work has to happen, the EPR does not support it, so the team builds a tool that does.
Some years ago I did an audit at one Trust with the Caldicott Guardian – the person responsible for protecting patient confidentiality in health and care organisations – and we found over a thousand non-approved data sources of exactly this kind.
No information governance official could eliminate shadow IT without bringing the clinical service that depends on it to a halt. And few individual items of shadow IT are ever prioritised for the investment needed to promote them to a formal system.
On the other side of the same gap, the clinical transformation team in IT who could change the EPR configuration to capture what the frontline actually needs are largely bypassed. Clinical teams would rather build a spreadsheet that fits their process now than wait months for a configuration change that may not match what they need. This is one reason shadow IT persists even in Trusts that have invested heavily in EPR.
The consequence is that the real operational data – the data that reflects what is actually happening on the ward – stays locked in these local tools and never enters the formal data estate. It is not linkable to the data warehouse, to national submissions, to the research environment, or to any other Trust. Data becomes more valuable as it connects to other data. Shadow IT severs that connection at the source.
The inaccessible record
Some of the most clinically meaningful data in the NHS is recorded diligently inside the formal system but is functionally lost to everyone, including the team that recorded it.
In one clinical team I observed, mental health outcome scores from DIALOG (a set of questions where patients are asked to rate their satisfaction) were recorded as free text in generic progress note fields, buried in a mountain of clinical notes, never accessible to the Trust’s data warehouse, difficult for the clinical team to resurface at the next multi-disciplinary team (MDT) meeting, and invisible to national returns like the Mental Health Services Data Set (MHSDS).
Discharge letters from a mental health consultant to a GP contain clinical reasoning, risk assessments, medication rationale and follow-up intentions that are more clinically useful than anything in the structured record. But they sit as free text or PDF attachments, inaccessible to any downstream analytical process.
The data exists. A clinician thought it mattered enough to write down. But because it was entered as narrative rather than structured data, it is invisible to every downstream process. This is not shadow IT. It is data that is technically inside the formal system but recorded in a form that no other part of the system can use.
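A minimal illustration of the gap, using made-up field names and records: the same outcome score recorded as structured data can be filtered and aggregated by any downstream process, while the free-text version yields nothing unless someone builds and maintains a parser for it.

```python
# Illustrative only: the same DIALOG-style score recorded two ways.
# Field names, identifiers and note text are invented for the example.

structured_entries = [
    {"patient_id": "A123", "measure": "DIALOG_Q1", "score": 4, "recorded": "2024-05-02"},
    {"patient_id": "A123", "measure": "DIALOG_Q1", "score": 6, "recorded": "2024-08-14"},
]

progress_notes = [
    {"patient_id": "A123", "recorded": "2024-05-02",
     "text": "Reviewed in clinic. Mood improved. DIALOG item 1 rated around 4 by patient. Plan: continue."},
]

# A Trust warehouse, MDT dashboard or national return can aggregate the structured
# form directly, for example taking the latest score per patient...
latest_scores = {}
for entry in sorted(structured_entries, key=lambda e: e["recorded"]):
    latest_scores[entry["patient_id"]] = entry["score"]
print(latest_scores)  # {'A123': 6}

# ...but the narrative form only gives up the score if someone writes and maintains
# a text parser, which downstream systems generally do not do. The information is
# inside the formal record yet invisible to every process that consumes the record.
```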
The timeliness problem
Clinicians often do not record their data on formal systems in real time. I have seen queues in a care team’s office for the only operational PC on a Friday afternoon. Occasionally, clinicians leave the queue when their shift ends before they have had the chance to input their week’s records.
When data is sent up the line, national data lands months after the clinical event. By the time a metric is published, the Trust has already lived through the period and moved on. Changes to the scope of national collections take months or sometimes years to implement, so if a new clinical pathway emerges or a coding practice changes, or if a new question comes up, the national data model is still measuring the old world long after the frontline has moved on.
Worse, national returns generally do not allow retrospective revision. Data quality issues discovered after submission, along with corrections, late entries and updated coding, rarely make it into the published datasets. When the clinician who went home on the Friday manages to get their data into the system the following week, it is too late to be included in the national figures, because the data has already been sent. The month’s submission with the coding error becomes the permanent version used for planning, funding allocation and research. The error is baked in.
The integration gap
Frontline users suffer detriment from problems that would be addressable if information was better integrated. Data recorded at the point of care is not enriched by data from elsewhere in the system before decisions are made.
The clinician makes the next decision based on what they personally know and what is in front of them, not on what the system knows. The A&E clinician does not see the mental health history. The consultant does not see how their outcomes compare to peers. The discharge coordinator does not see what community services have arranged. In every case, the problem is the same – data exists somewhere in the system that would improve the decision being made, but it does not reach the person making the decision at the time they need it.
Insights without context
When national or regional analysis does reach the frontline, it often arrives without the operational context that would make it accurate.
NHS England’s productivity tools send Trusts headline figures identifying financial opportunities based on national benchmarks. One Trust I am aware of received a figure of £89m. When the financial turnaround team started working through it, they found that £7.8m of an apparent £8m opportunity in women and children’s health was clinical negligence insurance premiums, a cost the Trust has no ability to influence. The headline looked actionable. The reality required hours of decomposition by people with operational knowledge before anyone could distinguish genuine opportunity from noise.
The analysis was produced centrally, without the context that would have filtered out the irrelevant before it reached the Trust. The frontline becomes a validation function for centrally produced insight, rather than a recipient of useful intelligence.
The technology barrier
Where clinical leadership teams have had embedded analysts – people who sit with the clinical team and understand the context – the work is far superior. These analysts contribute directly in the meeting rather than the service manager having to note the question, go back to the data team, wait for a response, and return two weeks later with a spreadsheet nobody has time to interpret.
But even embedded analysts are tethered to the back office. They still have to return to the data warehouse and business intelligence (BI) stack to get their answers, because the technology sits behind them rather than in front of the clinical team.
For this reason, many Trusts centralise their analyst teams. The staffing model follows the technology architecture, even though the outcomes are better with embedded analysts.
The invisible error
The data does not announce that it is wrong. The numbers look plausible. The dashboard is green. Nothing in the Integrated Care Board’s (ICB’s) dataset or the national submission flags the coding quirk that double-counted three urology cases, or the rota model that was never updated after two consultants left.
These problems do not show up as errors. They show up as slightly different numbers within the range of normal variation. An analyst at ICB or national level, querying data extracted weeks ago from a system they have never used, has no context for what the values mean operationally and no way to distinguish a genuine outlier from a local recording practice. The data is passing validation while being wrong in ways that only someone at the point of care would recognise.
This is what makes the other problems so hard to fix – the people with the authority to invest in solutions cannot see the problems from where they sit.
How these problems connect
These are not eight separate problems. They reinforce each other in ways that make each one harder to fix in isolation.
Two things happen in parallel. Clinicians record inconsistently because the data they are asked to capture gives them little back. And staff build shadow IT because the formal systems do not support their workflows. Both have the same effect – the analytical layer works from an incomplete picture.
Because the picture is incomplete and late, national and ICB-level decisions are based on data that does not reflect reality. Because nobody at those levels knows the data is wrong, no corrective signal flows back to the source.
The damage to the reliability of data used for decisions does not stop at Trust level. At ICB level, commissioning decisions are based on data that is months old and semantically inconsistent across Trusts, because each Trust codes and submits differently.
Population health management – the work of identifying at-risk patients before they become expensive acute admissions – is built on linked datasets assembled from extracts that arrived at different times with different definitions. The frail elderly patient known to community services, mental health and the GP may not appear as a single coherent person in the ICB’s linked data because the linking is probabilistic and the extracts were taken on different days. The intervention that would have prevented the A&E attendance never happens.
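A small sketch of how probabilistic linkage can fracture one patient across services; the fields, weights and threshold below are assumptions for illustration rather than any ICB’s real matching rules. Minor differences between extracts, a typo in a surname, a postcode that changed between extract dates, are enough to push pairs below the match threshold, so the three services never see a single coherent person.

```python
# Illustrative only: why probabilistic linkage can split one patient into several.
# Names, postcodes, weights and the threshold are invented for the example.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity across identifying fields."""
    return (0.5 * similarity(rec_a["name"], rec_b["name"])
            + 0.3 * similarity(rec_a["postcode"], rec_b["postcode"])
            + 0.2 * (1.0 if rec_a["dob"] == rec_b["dob"] else 0.0))

# Extracts of the same patient taken on different days by different services.
community     = {"name": "Margaret Hughes", "postcode": "TR1 3LJ", "dob": "1941-02-11"}
mental_health = {"name": "M. Hughes",       "postcode": "TR1 3LJ", "dob": "1941-02-11"}
gp            = {"name": "Margaret Huges",  "postcode": "TR4 8QP", "dob": "1941-02-11"}  # typo, new address

THRESHOLD = 0.85
for label, record in [("mental health", mental_health), ("GP", gp)]:
    score = match_score(community, record)
    print(f"community vs {label}: score={score:.2f} linked={score >= THRESHOLD}")
# Both pairs score below the threshold, so the linked dataset treats them as
# different people and the preventative intervention never gets triggered.
```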
At national level, policy is made on data that does not reflect reality. Cohorts of patients to be shielded are incomplete. Elective recovery targets are set on Referral to Treatment data that is months old. Funding formulae that allocate resources to ICBs depend on activity data with enough coding variation across regions that some areas are systematically overfunded and others underfunded. National programmes launch without accurate baselines, so progress gets claimed or denied on numbers that do not reliably reflect what patients are experiencing. Research is slower than it should be because researchers spend months cleaning and validating data before they can begin analysis.
All of this is downstream of the same root cause. If the data were right at source, because the clinician had the means and a reason to record it carefully, every downstream use would improve as a side effect.
The ICB’s linked dataset would be more reliable. The national submission would be more timely. The funding formula would be less distorted. The research would be faster.
You do not fix commissioning data by building a better ICB warehouse. You fix it by giving the clinician a reason to record well at the point of care. Everything downstream follows.
These problems are addressable. Not with better dashboards, not with another warehouse, and not by asking clinicians to try harder. The next article describes what a Frontline-First approach to data looks like, and why FDP is the first platform designed to deliver one.