
What Is a Preamp, and Do I Really Need One?



Every audio system requires amplification. In a traditional hi-fi setup, the loudspeakers are always “passive”—which is to say, they don’t produce their own power. Instead, they must receive an amplified audio signal from an external source, aptly called an amplifier, in order to do their thing. Even in a more modern, self-contained audio system (like the Sonos Era 100, for example), the drivers that produce the sound must be amplified in order to function—it all just happens in a single box rather than across hi-fi separates.

But if you’ve heard about amplifiers, you may have also heard about preamplifiers (often referred to as “preamps”) and wondered where they fit into an audio system, and whether you need one. Let’s answer those questions, shall we?

What Does a Preamp Do?

An audio signal needs plenty of attention before it’s ready to be amplified—so ultimately the question “what does a preamp do?” broadly contains its own answer. A preamplifier takes care of everything that needs to be done before the audio signal (sent from the music source) is amplified and sent onwards to the system’s speakers.

In a self-contained audio system like the Sonos speaker, the preamplifier and the amplifier are in the same enclosure, along with the speaker drivers that actually deliver the sound. Even in a more sophisticated hi-fi separates setup, the preamplifier part of proceedings is still often handled out of sight, within the amplifier. These types of amps are known as “integrated amplifiers” and contain both preamp and amplifier functionality.

However, some people prefer to separate out this functionality, which is when you may come across a preamplifier as its own piece of equipment, paired with a power amplifier. In these cases, the preamplifier allows you to select the source of music you’d like to hear (the majority have a selection of input options in order to support a system with multiple sources), and also set and adjust the volume.

The preamp also ensures the audio signal is at “line level”—that is, the standard voltage strength of an audio signal transmitted between components—and sends it on to be amplified, ready to be moved onwards, finally, to the speakers.
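Line level is ultimately just a voltage convention, so the gain a preamp stage must apply works out to simple arithmetic. A minimal sketch in Python, using the common consumer line-level figure of -10 dBV (about 0.316 V RMS) and illustrative, not measured, source voltages:

```python
import math

def gain_db(v_in: float, v_out: float) -> float:
    """Voltage gain in decibels needed to go from v_in to v_out."""
    return 20 * math.log10(v_out / v_in)

# Consumer line level: -10 dBV, roughly 0.316 V RMS.
LINE_LEVEL_V = 0.316

# Rough, illustrative output levels (RMS volts) for a few common sources.
sources = {
    "moving-magnet phono cartridge": 0.005,  # ~5 mV; why phono stages exist
    "dynamic microphone": 0.002,             # ~2 mV
    "CD player / DAC output": 2.0,           # already above line level
}

for name, v in sources.items():
    print(f"{name}: {gain_db(v, LINE_LEVEL_V):+.1f} dB to reach line level")
```

The phono and microphone cases need tens of decibels of boost, while a DAC actually needs attenuation—which is exactly the levelling job the preamp handles before the power amplifier takes over.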

Does an External Preamp Improve Sound Quality?

Hi-fi orthodoxy says that individual functions in any system should be kept as separate as possible if the best results are to be achieved. The thinking goes that, by keeping electrical activity as shielded and self-contained as possible, the audio signal has the best shot at remaining as pure and uncolored as possible.

By dividing the preamplifier and the amplifier functions into separate boxes, there should be a reduction in electrical noise and interference around the signal compared to having it all crammed into a single box.




The Commodore 64 Ultimate Is an Authentic Re-Creation for Die-Hard Fans



Photograph: Matt Kamen

Boot up the C64U, and you’re greeted by a re-creation of the C64’s menu. Here, you can type in operation commands just as you would back in the day, using the BASIC programming language. Problem: I don’t have the first clue about BASIC. However, in what is possibly the greatest throwback of all, the C64U comes with a spiral-bound, 273-page user guide. It is an absolute tome. Somewhat surprisingly, it’s not a reprint of anything that came with the original, but rather a tailored guide to what the C64U does, where it differs from the C64, and how to get to grips with the computer’s capabilities. Equal parts history book and instruction manual, it starts out teaching you some simple commands and builds up to teaching you how to code. I’m still very much working my way through it, but that tactile approach—referring to the book, trying something out on the computer, back and forth—is a great touch.

Hidden Upgrades

If you don’t fancy having to do homework, the C64U’s own default menu, accessed at any time with a flick of the multifunction power button on the right-hand side of the unit, is a simple list of options and settings. Hit RETURN to go into any section—say, “Video Setup” to adjust whether the C64U outputs in original resolution, in PAL or NTSC modes (surprisingly important, given some games will only work with one display standard or the other), or a crystal clear 1080p with scanlines removed—and back out to save any changes to the system’s flash memory. It’s still a minimalist approach, but feels fairly intuitive.

This is also where you can start playing around with some of the other modern touches of the C64U, like how to leverage its far greater power. Well, “greater” in comparison to 1982. Spec-wise, this isn’t going to threaten any modern machine, but running on an AMD Xilinx Artix-7 FPGA chip and packing 128 MB of DDR2 RAM—compared to the 64 KB of the C64—it blows its inspiration out of the water. While at baseline it replicates the performance of the 1982 hardware, meaning it operates as if only the original 64 KB were present, you can menu-dive to activate a virtualized RAM Expansion Unit, or activate a “Turbo Boost” to accelerate the clock speed to a lightning-fast (in this particular context) 64 MHz.
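The gulf between the two machines is easy to put into numbers. A quick sketch, assuming the original C64’s roughly 1 MHz clock (a well-known spec, not stated above; PAL and NTSC units differ slightly):

```python
# Scale of the C64U upgrade versus the original C64.
C64_RAM_KB = 64
C64U_RAM_KB = 128 * 1024   # 128 MB of DDR2, expressed in KB
C64_CLOCK_MHZ = 1          # approximate original clock speed (assumption)
C64U_TURBO_MHZ = 64        # "Turbo Boost" mode

ram_ratio = C64U_RAM_KB // C64_RAM_KB
clock_ratio = C64U_TURBO_MHZ // C64_CLOCK_MHZ

print(f"RAM: {ram_ratio}x more ({C64U_RAM_KB // 1024} MB vs {C64_RAM_KB} KB)")
print(f"Turbo clock: roughly {clock_ratio}x faster")
```

That’s a 2,048-fold jump in memory alone, which is why the baseline mode deliberately pretends most of it isn’t there.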




Tips for Keeping a Digital Diary and Why You Should



Keeping a daily diary doesn’t come easily to most people, but it takes less effort than you might imagine. It could also become a meaningful way to reflect and grow as a person.

For more than 10 years, I’ve written a few words every morning, and what I’ve learned from this practice has changed my life. My only regret is not starting sooner.

If you’re interested in adding a daily journaling practice to your life, these tips and tools can help you not only get started, but also stick with it.

Why Keep a Journal or Diary?

My diary is a tool for clearing out my thoughts, recording details of my life that are sometimes useful to know later, and reflecting. The value in reflecting, however, only became apparent after I’d been writing for several years and could look back on my life to see it from a different perspective.

I’ve always been very hard on myself. I don’t make excuses, and I look upon my failures with consternation. Whenever I’ve gone back and read a series of diary entries from low points in my life, I’ve been able to view them with an outsider’s perspective. I can see more clearly just how tough things were, or how many things went wrong at once, or the gravity of a single event that I might have downplayed in the moment. This reflection has led me to be more compassionate toward myself—and toward others. I have learned to cut myself some slack.

You might discover something else, whether a pattern of behavior or something you want to change. Or maybe with hindsight you realize the things you thought you wanted to change don’t need changing at all. Journaling sheds light on all these things.

Memory is fickle. The personal self-reflection that we do entirely in our heads differs wildly from what we can do with notes. In short, that’s why I’ve kept up my daily writing for more than 10 years.

What Should You Write in Your Journal?

Start every diary entry with the date and your location. Why bother if your computer or phone can add them automatically? A few reasons. First, you will never stare at a blank page, and you will always know how to start. Second, metadata can get bungled over time or during file transfers, so it’s more reliable to add them manually. Third, typing the date and location into the diary entry itself ensures that those very important pieces of information are searchable.
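For anyone keeping their diary in plain text files, half of this habit is easy to automate: generate the date, but type the location yourself. A small Python sketch—the `diary_header` helper and the example location are hypothetical, not from the article:

```python
from datetime import datetime

def diary_header(location: str) -> str:
    """Return a searchable first line for a diary entry: date, weekday, location.

    The location is passed in by hand on purpose: embedding it in the entry
    text keeps it searchable even if file metadata is lost in a transfer.
    """
    return f"{datetime.now():%Y-%m-%d, %A}, {location}"

# Start today's entry with the header, then free-write below it.
entry = diary_header("Lisbon") + "\n\nBrain dump goes here...\n"
print(entry.splitlines()[0])
```

Because the date sits in the entry itself rather than in file metadata, a plain-text search for a year, month, or place name will find it years later.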

What else should you write? A diary entry can be a simple brain dump. That’s what I do. Other things worth mentioning are major events, strong emotions, and hopes and dreams.

If following a method helps, you could try gratitude journaling. Some parents I know ask their kids at the end of the day to reflect on their “rose, thorn, bud”—one highlight from the day, one difficulty, and something they’re looking forward to—which is an equally good diary formula.

How to Make It a Habit

The best trick I have for forming a new habit is to tie it to an existing one. Find a habit that you already have and combine it with a few minutes of daily writing.

I journal every morning as soon as I have coffee in front of me. My coffee-making routine is non-negotiable, immovable, set in stone, seven days a week. Even when I stay in a hotel, I bring a travel coffee maker with me, and I write in my diary while drinking the coffee.




Top 10 technology ethics stories of 2025 | Computer Weekly



Throughout 2025, Computer Weekly’s technology and ethics coverage highlighted the human and socio-technical impacts of data-driven systems, particularly artificial intelligence (AI).

This included a number of reports on how the Home Office’s electronic visa (eVisa) system, which has been plagued by data quality and integrity issues from the outset, is affecting migrants in the UK; the progress of both domestic and international efforts to regulate AI; and debates around the ethics of autonomous weaponry.

A number of stories also covered the role major technology companies have played in Israel’s genocide against Palestinians, which includes providing key digital infrastructure and tools that have enabled mass killings.

In June 2025, Computer Weekly reported on ongoing technical difficulties with the Home Office’s electronic visa (eVisa) system, which have left scores of people living in the UK with no means to reliably prove their immigration status or “right” to be in the country.

Those affected by the eVisa system’s technical failings told Computer Weekly, on condition of anonymity, that the entire experience had been “anxiety-inducing” and described how their lives had been thrust into “uncertainty” by the transition to a digital, online-only immigration system.

Each also described how the “inordinate amount of stress” associated with not being able to reliably prove their immigration status had been made worse by a lack of responsiveness and help from the Home Office, which they accused of essentially leaving them in the lurch.

In one case that was reported to the Information Commissioner’s Office, the technical errors with data held by the Home Office were so severe that it found a breach of UK data protection law.

Following the initial AI Safety Summit at Bletchley Park in November 2023 and the follow-up AI Seoul Summit in May 2024, the third AI Action Summit in Paris saw dozens of governments and companies outline their commitments to making the technology open, sustainable and work for the “public interest”.

However, speaking with Computer Weekly, AI experts and summit attendees said there was a clear tension in the direction of travel, with the technology caught between competing rhetorical and developmental imperatives.

They noted, for example, that while the emphasis on AI as an open, public asset was promising, there was worryingly little in place to prevent further centralisations of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.

They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – were making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.

Despite the tensions present, there was consensus that the summit opened more room for competing visions of AI, even if there was no guarantee these would win out in the long run.

In February 2025, Google parent Alphabet dropped its pledge not to use AI in weapons systems or surveillance tools, citing a need to support the national security of “democracies”.

Despite previous commitments that made it explicit the company would “not pursue” the building of AI-powered weapons, Google – whose company motto ‘Don’t be Evil’ was replaced in 2015 with ‘Do the right thing’ – said it believed “democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights”.

For military technology experts, however, the move represented a worrying change. They noted that while companies such as Google had already been supplying military technology to a range of actors, including the US and Israel, “it indicates a worrying acceptance of building out a war economy” and “signals that there is a significant market position in making AI for military purposes”.

Google’s decision was also roundly condemned by human rights organisations across the globe, which called it “shameful” and said it would set a “dangerous” precedent going forward.  

Speaking during an event hosted by the Alan Turing Institute, military planners and industry figures claimed that using AI in military contexts could unlock a range of benefits for defence organisations, and even went as far as claiming there was an ethical imperative to deploy AI in the military.

Despite being the lone voice not representing industry or military interests, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies, warned there was a clear tension between speed and control baked into the technology.

She especially argued this “intractable problem” with AI risks taking humans further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.

Highlighting the reality that many of today’s AI systems are simply not very good yet, she also warned against making “wildly optimistic” claims about the revolutionary impacts of the technology in every aspect of life, including warfare.

Workers in Kenya employed to train and maintain the AI systems of major technology companies formed the Data Labelers Association (DLA) this year to challenge the “systemic injustices” they face in the workplace, with 339 members joining the organisation in its first week.

While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions.

Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers were tremendously underpaid, often earning just cents for tasks that take a number of hours to complete, and yet still face frequent pay disputes over withheld wages that are never resolved.

During the launch, DLA secretary Michael Geoffrey Abuyabo Asia said weak labour laws in Kenya were being deliberately exploited by tech companies looking to cheaply outsource their data annotation work.

The Home Office is operating at least eight AI-powered surveillance towers along the south-east coast of England, which critics have said are contributing to migrant deaths in the English Channel, representing a physical marker of increasing border militarisation that is pushing people into taking ever more dangerous routes.

As part of a project to map the state of England’s coastal surveillance, the Migrants Rights Network (MRN) and researcher Samuel Story identified eight operational autonomous surveillance towers between Hastings and Margate where people seeking asylum via the Channel often land, as well as two more that had either been dismantled or relocated.

Responding to their freedom of information (FoI) requests, the Home Office itself also tacitly acknowledged that increased border surveillance would place migrants crossing the Channel in “even greater jeopardy”.

Created by US defence company Anduril – the Elvish name for Aragorn’s sword in The Lord of the Rings, which translates to “flame of the west” – the 5.5m-tall maritime sentry towers are fitted with radar, as well as thermal and electro-optical imaging sensors, enabling the detection of “small boats” and other water-borne objects in a nine-mile radius.

Underpinned by Lattice OS, an AI-powered operating system marketed primarily to defence organisations, the towers are capable of autonomously piecing together data collected from thousands of different sources, such as sensors or drones operated by Anduril, to create a “real-time understanding of the environment”.

The European Commission has been ignoring calls to reassess Israel’s data adequacy status for over a year, despite “urgent concerns” about the country’s data protection framework and “repressive” conduct in Gaza.

In April 2024, a coalition of 17 civil society groups coordinated by European Digital Rights signed an open letter voicing concerns about the commission’s January 2024 decision to uphold Israel’s adequacy status, which permits the continued free flow of data between the country and the European Union on the basis that each has “essentially equivalent” data protection standards.

Despite their calls for clarification from the commission on “six pivotal matters” – including the rule of law in Israel, the scope of its data protection frameworks, the role of intelligence agencies, and the onward transfer of data beyond Israel’s internationally recognised borders – the groups received no response, prompting them to author a second open letter in June 2025.

They said it was clear the commission is unwilling to uphold its own standards when politically inconvenient.

Given that Israel’s tech sector accounts for 20% of its overall economic output and 53% of total exports, according to a mid-2024 report published by the Israel Innovation Authority, losing adequacy could have a profound effect on the country’s overall economy.

The European Commission told Computer Weekly it was aware of the open letters, but did not answer questions about why it had not responded.

Francesca Albanese, the special rapporteur for the human rights situation in Palestine, said in July 2025 that technology firms globally were actively “aiding and abetting” Israel’s “crimes of apartheid and genocide” against Palestinians, and issued an urgent call for companies to cease their business activities in the region.

In particular, she highlighted how the “repression of Palestinians has become progressively automated” by the increasing supply of powerful military and surveillance technologies to Israel, including drones, AI-powered targeting systems, cloud computing infrastructure, data analytics tools, biometric databases and high-tech weaponry.

She said that if the companies supplying these technologies had conducted the proper human rights due diligence – including IBM, Microsoft, Alphabet, Amazon and Palantir – they would have divested “long ago” from involvement in Israel’s illegal occupation of Gaza and the West Bank.

“After October 2023, long-standing systems of control, exploitation and dispossession metamorphosed into economic, technological and political infrastructures mobilised to inflict mass violence and immense destruction,” she said. “Entities that previously enabled and profited from Palestinian elimination and erasure within the economy of occupation, instead of disengaging, are now involved in the economy of genocide.”

Under international law, however, Albanese pointed out that the mere fact that due diligence had been conducted did not absolve companies from legal liability over their role in abuses. Instead, the liability of companies is determined by both their actions and the ultimate human rights impact.

Later, in October 2025, human rights organisations jointly called for Microsoft to immediately end any involvement with the “Israeli authorities’ systemic repression of Palestinians” and work to prevent its products or services being used to commit further “atrocity crimes”.

This followed credible allegations that Microsoft Azure was being used to facilitate mass surveillance and lethal force against Palestinians, which prompted the company to suspend services to the Israeli military unit responsible.

As part of a joint Parliamentary inquiry set up to examine how human rights can be protected in “the age of artificial intelligence”, expert witnesses told MPs and Lords that the UK government’s “uncritical and deregulatory” approach to AI would ultimately fail to deal with the technology’s highly scalable harms, and could lead to further public disenfranchisement.

“AI is regulated in the UK, but only incidentally and not well … we’re looking at a system that has big gaps in [regulatory] coverage,” said Michael Birtwistle, the Ada Lovelace Institute’s associate director of law and policy, adding that while the AI opportunities action plan published by the government in January 2025 outlined “significant ambitions to grow AI adoption”, it contained little on what actions could be taken to mitigate AI risks, and made “no mention of human rights”.

Experts also warned that the government’s current approach, which they said favours economic growth and the commercial interests of industry above all else, could further deepen public disenfranchisement if it failed to protect ordinary people’s rights and made them feel like technology was being imposed on them from above.

Witnesses also spoke about the risk of AI exacerbating many existing issues, particularly around discrimination in society, by automating processes in ways that project historical inequalities or injustices into the future.

In January 2025, Computer Weekly reported on how Black mothers from Birmingham had organised a community-led data initiative that aims to ensure their perinatal healthcare concerns are taken seriously by medical professionals.

Drawn from Maternity Engagement Action (MEA) – an organisation that provides safe spaces and leadership for Black women throughout pregnancy, birth and early motherhood – the women came together over their shared concern about the significant challenges faced by Black women when seeking reproductive healthcare.

Through a process of qualitative data gathering – entailing discussions, surveys, workshops, trainings and meetings – the women developed a participatory, community-focused approach to Black perinatal healthcare, culminating in the launch of MEA’s See Me, Hear Me campaign.

Speaking with Computer Weekly, Tamanda Walker – a sociologist and founder of community-focused research organisation Roots & Rigour – explained how the initiative ultimately aims to shift from the current top-down approach that defines Black perinatal healthcare, to one where community data and input drive systemic change in ways that better meet the needs of local women instead.


