You might call it Virtual Funk. It’s the process whereby “feel” and “groove” in modern pop music have been progressively diverted from actual human performers and inserted into microcircuits. Contemporary dance music of all kinds–rap and hip hop, soul, pop, and hard rock–all now benefit from the triumph of the machine. If it seems paradoxical to you that music software programs now carry a “humanize” function, you are living in the past.

First it was synthesizers, then drum machines. After that came sequencers, then sampling computers. Synths made artificial electronic sounds. Drum machines did the same thing with percussion (white noise for a snare drum) but gradually began to sound more and more like “real” drums. Sequencers enabled musicians to program automated riffs and melodies. If you could think it up, the machine could play it. Then samplers finished off real human performance once and for all by enabling musicians to make a digital “sample” of a sound, riff, or rhythm and store it in a computer, to be manipulated and reused at will. Suddenly James Brown’s “Funky Drummer” was the pulse propelling every other song on the radio.

Our old friend Bill Wyman once identified the moment of technological triumph with some precision: in the ninth bar of Joy Division’s 1980 classic “Love Will Tear Us Apart” you can hear the 1970s guitar-based punk band begin its transformation into New Order–the quintessential 1980s machine-based pop group. Guitars were swept away by synthesizers. Soon the rigid drumming was dropped in favor of more flexible drum programs. (NO’s drum machine was always much funkier than its drummer, because its programmers had more skill than he had sticking ability.) And Peter Hook’s bass would become purely melodic–its rhythmic function appropriated by sequenced keyboards.

The digital technologies of the 1980s have progressively eroded distinctions between human and automated performance and between natural and synthetic sounds. Drum machines and synthesizers no longer synthetically approximate particular sounds; now they exactly, digitally, reproduce the original sound. These days, there is often no way of distinguishing what a real keyboard player does from what a well-programmed machine can do. Short of being there in the studio to collect evidence, you’d be hard pressed in most cases to know whether the playing on your favorite album is flesh and blood or samples and chips.

Reactions to these developments are usually, well, reactionary. One instinct is to make a case for “real” sounds as “warmer” than artificial ones. I once bought a music software program and enjoyed a bewildering demonstration of how relative such values can be: the salesperson was pitching an additional package of tricks that converted the digital information (displayed via numbers) into an analog display (visualized as waveforms) that was, to him, unquestionably more “natural” and easy to understand. In fact, both systems were incomprehensible to me.

Like most arguments about what is “natural,” it depends where you start. Waveforms no doubt appear positively natural if you happen to have grown up reading Physics Monthly and playing an old-fashioned synthesizer that somewhat resembled a telephone exchange. But then I can remember when old-fashioned synths sounded “cold.” They still do, to some people, many of whom think that electric guitars, amplifiers, and microphones are “natural” instruments. And you can keep going back in stages, but you’ll never find music without technology unless you think the lute, say, sprang one day out of the soil.

So let’s face it: Music is made through technology. And music is now being made by a generation of musicians who think that machines are funkier than people. In most cases they are right.

As sequencers have become more sophisticated, they have gained the ability to talk to one another via something called MIDI. MIDI (Musical Instrument Digital Interface) is actually the most important technological shift of all, for it allows machines to work together, in sync. MIDI, it has often been said, is a democratizing punk technology born a decade late. Once the machines can speak to each other, anyone with a few hundred dollars can pile up tracks at home on a personal computer, without going near a professional recording studio. The gap between professional and do-it-yourself recording is narrowing, literally by the month.
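What the machines are actually saying to one another is startlingly simple. A minimal sketch, in Python: every note a MIDI machine “speaks” is just three bytes, a status byte plus a pitch and a velocity. The helper functions below are illustrative inventions, but the byte layout itself is the published MIDI standard.

```python
def note_on(pitch, velocity, channel=0):
    """Build the three-byte MIDI Note On message.

    Status byte 0x90 marks "note on"; its low four bits carry the
    channel (0-15). Pitch and velocity each fit in 0-127.
    """
    assert 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, pitch, velocity])

def note_off(pitch, channel=0):
    """Note Off: status byte 0x80, release velocity conventionally 0."""
    return bytes([0x80 | channel, pitch, 0])

# Middle C (MIDI pitch 60) struck at a moderate velocity:
msg = note_on(60, 100)   # three bytes: 0x90, 60, 100
```

Any two machines that agree on those three bytes can play together in sync, which is the whole trick.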

Musically, these developments have created a unique opportunity: machines with a rock sensibility. The new sequencers and samplers allow for the elastic placement of the beat, out of line with its “correct” mathematical position–just a little ahead or behind the beat, to create “feel.” Indeed, the equation of “feel” or “groove” with mistakes is now consolidated into the circuitry and software. A new generation of drum machines (such as Roland’s R8) now has a Feel button that activates random error in the timekeeping, to make it seem more “real” (i.e., human). Software packages that “humanize” the pulse of the music introduce programmable degrees of error. As the manual to the frequently used MasterTracks Pro sequencer puts it: “A humanize feature can compensate for the computer’s predilection to error correct ‘too much’ and thus create sterile, mechanical sounding performances.” Now, you might not like the sound of this. But can you tell the difference between a real person making real mistakes and a computer simulating them? I doubt it.
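Under the hood, a humanize feature amounts to very little code. A sketch, assuming a sequencer that stores note positions as integer tick counts (the function name and tick values here are hypothetical, not any particular product’s): each perfectly quantized event is nudged a few ticks early or late at random.

```python
import random

def humanize(event_ticks, max_jitter=5, seed=None):
    """Shift each quantized event a few ticks early or late at random,
    simulating a "Feel" button: programmable error in the timekeeping.
    """
    rng = random.Random(seed)
    return [max(0, t + rng.randint(-max_jitter, max_jitter))
            for t in event_ticks]

# A perfectly quantized hi-hat pattern, 24 ticks per eighth note:
grid = [0, 24, 48, 72, 96, 120]
loose = humanize(grid, max_jitter=4, seed=1)
```

That is the whole of it: the machine’s precision, deliberately roughed up by a controlled dose of randomness.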

Some people believe that actual human performances are more spontaneous than machine-made music. (To maintain this belief, it does help if you haven’t been to a rock show for about ten years.) That would be true if machines didn’t generate their own happy accidents. In fact, you can jam with your software. Today’s musicians mess about with sequences, toy with samples, and sit waiting patiently for the software to do something strange they’ve never heard before. In any case, it has become almost impossible to tell automated and human performances apart. (Best drum lick of the year so far? The tom-tom fill in Electronic’s “Get the Message.” Is it human, a machine, or a sample? Who cares?) More remarkable still is the transformation of our perception of automated and electronic sounds: what once seemed “cold” now feels funky.

Racist ideologies of nonwhite naturalness (“black music” as more real, more true, more natural) obscure the fact that the most important form of black popular culture–rap–is made using extremely advanced machinery. In that respect Public Enemy can reasonably stake a claim as the leading artists working at the cutting edge of today’s technology. Yes, it’s funky, angry, and moving–but that music comes out of machines and computers without which it couldn’t exist in its current form. (Early hip hop was produced without computers, but the sonic “thickness” of PE is a direct result of MIDI and samplers.)

The most remarkable result of these changes is the trend toward using old analog synthesizers, on the grounds that they sound “warmer” than the newer digital machines. By the middle of the 1980s some bands were beginning to reject MIDI and digital sound and suggest a return to analog “roots.” The Human League, for example, pioneered a movement that saw 1970s technology (like the Moog synthesizer) as more “natural” than digital synthesis. The “old” machines have even bequeathed classic sounds (the Moog’s fat bass, the Roland TR-808’s Handclap–so effective that it was sampled on later, digital models) that are now as familiar as the warm ring of a Les Paul guitar. The return of the analog synth can be heard in music by Scritti Politti, 808 State, the Pet Shop Boys, and Depeche Mode, among others. And what is important here is the paradigm shift in how these once alien-sounding, “harsh,” “cold” sounds now make sense: now they tell us to party.

Today’s musicians are the first generation to have grown up thinking of drums as something that comes out of a plastic box. They are the first musicians who taught themselves computer programming rather than guitar riffs. (I recently heard a musician talking of improving his “sampling chops.”) They don’t go to a band rehearsal, they gather around “workstations.” What they will do when they’ve developed sampling and MIDI for as long as today’s rock guitarists have practiced their blues licks is a fascinating prospect.

In the music of Public Enemy, N.W.A., Bomb the Bass, Big Audio Dynamite, New Order, Soho, and Ministry, we can hear the results of machine-made music programmed by musicians who’ve learned their sequencing and sampling chops with some care. The excitement, the dissonance, the syncopations, and–yes–the spontaneity of this music could not have happened without the new music technologies. Programming funk is not an oxymoron.

So next time you hear someone whining on about drum machines and computers and their negative effects on today’s music, consider this question: whoever said that there was anything natural about recording electric guitars in a 48-track studio?

Art accompanying story in printed newspaper (not available in this archive): illustration/Heather McAdams.