Geosonics by Soniccouture…it was on sale recently, half price through Native Instruments. I bought it without remembering much of what I knew about the instrument/library. It had been about a year since researching it. I just knew that it looked really interesting and that the presets were magnificent.
So, after buying it and playing around with a few presets to put together a droning new composition, I’ve started re-acquainting myself with what I now have. Maybe I should have done that before rushing in to make something (as though that would have radically changed the end product).
A brief comment on my initial experience…It took hours to download at 6 GB. Then, when I tried to play the instrument, a message came up saying it was created on a newer version and that I needed to download the latest version. I thought that referred to Geosonics. Wrong: Geosonics is a Kontakt instrument and it was Kontakt that was not up to date. Then I had some problem with the password when trying to activate the software, thinking I was logging in to Soniccouture when, in fact, I was activating it through Native Instruments. The final problem was that when I closed Kontakt and reopened it, the library was gone. I had to reinstall the library, which would again disappear. It was late on a Friday night, I was very tired, and I sent an email request for help to Soniccouture before searching for solutions myself. The next morning I explored the FAQs at Native Instruments and found that I needed to dig through a chain of folders until I found an xml file that needed to be deleted. Then the computer needed to be rebooted, Kontakt re-opened, and the Geosonics library added one last time. It’s been working since, and the smiles and deep sighs of pleasure have been accumulating. (When I did get to my email, after fixing the problem, I found a reply from Soniccouture with the same solution. They’ve already made a good impression on me by not only responding but doing so in a timely manner. If you’ve had to deal with online support you might have an idea of how rare that is.)
Geosonics is based on the field recordings of Chris Watson (once upon a time a member of Cabaret Voltaire). These recordings, unadorned, are beautiful and inspiring and are part of the package (the fifth folder). There are four folders of presets of the recordings as manipulated by a selection of sound designers (Ian Boddy, Biomechanoid, Martin Walker, and Andy Wheddon, among them…Soniccouture’s links for these names come back to the page you’re on; frustrating because it’s hard to find much info on any of them).
I really recommend watching the videos on the site. There are several of Chris Watson, which are a pleasure in themselves, detailing the stories behind the capture of some of the sounds (a residue of my childhood in the swamps of northern Minnesota, I can attest to the difficulty of doing anything when swarmed by mosquitos and other biting insects). There are also several instructional videos walking you through the sections of the Kontakt instrument ($179 seems a high price but after watching the videos you start to understand how much went into Geosonics and how rich and flexible it is as an instrument).
The field recordings are of wind, wires (primarily, again, wind), water and ice, and swamps. And, so, there are four folders of presets. With the presets you can still access the original sound. Whether from an original sound or a preset you can still tweak until you’ve driven yourself crazy. There’s a fairly rich set of effects (delay, chorus, phaser, compressor, lo-fi, a button called “reverse”, and a host of reverbs ranging from conventional to IRs taken from Watson’s recordings). There are two effects busses. (Not immediately obvious to me: you click on a word, such as “reverb”, to get a drop-down menu. For instance, to toggle between the natural sound and the processed, click on the word “focus” in the middle of the header; to change files, click on “off” or the current name, just under “pitched 1” and “pitched 2”. I saw this done in the videos but had to play around a little before I found the correct places to click.)
I would love to have the field recordings as WAV files so I could do my own kind of manipulation—mainly stretching, reversing, and pitch shifting—which leads to less musical results. Still, that’s such a small complaint and, if I really wanted, something that could probably be satisfied by other sample libraries.
The field recordings lend themselves to usage as pads and drones and most of the presets reflect this. You can make rhythmic and melodic instruments but, as stated, they work best for long, evolving sounds. Gorgeous sounds. I’m happy.
I mean that both ways: hand me the polish and throw it away. I can go either way but most often try to go both at once.
What the hell am I talking about? Finishing off your recordings by either adding finesse or by working them to death.
A common phrase amongst recording engineers: polishing a turd. Most often the full expression is, in reference to mediocre musicians playing bad songs trying to compensate with technological processing to make crap sound good, you can’t polish a turd.
For me the prime example is the professional or semi-pro musician (say, a keyboardist) who is just a hired hand in someone else’s band who has long dreamed of doing their own music. Really, they have no direction or quality material but they feel they have something to say that will change the world. The songs are mediocre and meaningless; maybe just pointless solos. In an attempt at perfection the performances are lifeless. Then they spend several years tweaking the mix, trying to get the EQ and reverb just right. If they ever finish it you’ll wonder why they bothered.
Nowadays, with the luxury of cheap home recording and unlimited editing on the computer, the process can be even worse. Take an amateur musician who can’t really play in time or stay in key yet is convinced they’re the new Lennon-McCartney. They have every intention of making great music, but the resulting recording, though obviously made with musical instruments in a musical structure, in no way feels like music.
A friend who once worked in Nashville as a recording engineer almost walked away from music, as both fan and creator. I quote him: “What was so demoralizing about it all was that the music was polished, perfected and agonized over until it was devoid of any excitement at all. Vocals and instrumental solos were picked apart and re-recorded until they were note-perfect, completely inoffensive and utterly lifeless.”
My experiences do not quite fit the stereotype because, when I began recording my stuff in 1996, I was neither musician nor engineer. Nor did I know how to construct something that might be considered musical. Everything I worked on—absolutely everything—was a struggle and the results were usually oddly mechanical. Kind of Rube Goldberg audio (like having something in ten measures of 21/8 time because that’s how long it takes for all the vowels to fall into place if the “a” is on one, “e” is on two, “i” is on three, “o” on five, and “u” on seven—that is, vowels and primes combined, as they were in “Music, the Beginning”).
As I learned to work on this stuff I began to structure things to a MIDI grid. Of course it’ll sound mechanical. With no musical ability there’s no chance for it to have feel (which could have been the case if a musician played without quantization). I think a more destructive process for me was my pursuit of clean sound. Part of the frustration with cassette 4-track was the inevitability of tape hiss. Instead of focusing on any performances of words or somewhat random banging I obsessed about having a clean signal chain (this is always a potential problem for anyone recording themself). You’d think that going digital would have been a reprieve but, in fact, I actually went further in that direction (the irony is that I then buried all those cleanly recorded tracks in a stew of reverb).
Initially working with prerecorded commercial loops, as I was doing a few years later on computer, made me extremely uncomfortable because the loops were too clean, too professional, and failed to mesh with my home recording (and lack of skill).
On the whole I’ve never quite fallen into the trap of overprocessing just because I have never had the time to indulge my perfectionist tendencies. In so many ways deprivation can be your friend. The key is to have enough time and gear without having so much you lose focus.
I put the most time into processing raw sounds, such as recordings I’ve made of household sounds, and breaking them up into discrete samples and loops. After that, when I start playing with them in a DAW, everything transpires quickly. Most of my compositions are put together in a single day. Or, more often, a few loops and maybe some synth parts are combined in a few hours, then the thing sits on my computer for months. When I come back to it, if it gels, the rest happens quickly, usually within a few more hours. I’ve got my studio (that is, my room) set up so that I can get to work without moving much of anything. The audio interface and MIDI controller are always set up. It takes a few minutes to plug in the mic cables and get the settings close to what they’ll need to be. I read through a poem several times to make sure I have the feel of it. Then I press “record”. I mix as I record so that as all the pieces fall into place on the DAW timeline the recording is almost complete.
If you work with software instruments you’ll either have to design your sound or select presets. Since I don’t know how to program a synth yet (after 20 years!…actually only about 10 years) I sometimes spend hours auditioning presets. If you know what you want to do, if you have the melody or chord progression in mind, and are just trying to find the right sound, chances are this hunt will kill your creativity. For me, browsing the presets is part of the creative process: whatever “music” I come up with is a reaction to the sound of the instrument. The potentially soul-destroying business of sound selection is something I usually turn into an inspiring detour.
Perfection in audio is not something I’m capable of, for lack of skill but also as an esthetic choice. I love to process the shit out of sounds (in “Music, the Beginning” I ran my voice through a simulation of a rotary speaker, also known as a Leslie cabinet). But I don’t like the final product to be so smooth and shiny. I suppose that would in part be my rock roots showing.
But if you can find the musical equivalent to a coprolite, polish away. Fossils take on a nice shine.
Have you played around with any convolution reverbs? I was unaware that I had them on my computer, or didn’t know what to do with those I knew about. Maybe the last version of Sound Forge (9.0) that I’d purchased did not have Acoustic Mirror. Probably I didn’t know what it was. Or maybe I couldn’t figure out how to use it (a couple months ago I was trying to figure it out and realized that the IR files that Acoustic Mirror relies on had not downloaded when I bought version 11 of Sound Forge—no wonder I couldn’t get it to work). I’ve gotten slightly better at researching and finding things online (in that regard I’m very old and slow to learn). In general Sound Forge’s workflow isn’t great and the effects are clunky to use compared to the compositional DAWs (such as ACID Pro or Sonar) so I tend not to use it for things like reverb.
Anyway, today I was playing around with Acoustic Mirror and downloaded a free library of somewhat exotic IR files (Echo Thief) that I found I could open and apply. Most of the files were not great sounding. In part, I think, because the guy might not be using the best gear, or because the reverberation of the location was so weak the input had to be cranked (adding a lot of hiss). But I am also finding that many sites really don’t sound that good. Like so much in life, the idea, the expectation, is often better than the reality.
Then I opened up Reflektor in Guitar Rig (still within Sound Forge). I knew it was a convolution reverb but was unaware of just how easy it is to use third party files. The older Native Instruments products seem overly difficult (try Reaktor 4, for example), and I think I’m still intimidated by my experiences with them from almost a decade ago, while the newer programs have become almost a no-brainer. It took no effort to find, load, and use the Echo Thief files within Reflektor (the online PDF manual had so little information I was half-convinced I wouldn’t get anywhere but the software really is that simple to use). In all ways Reflektor was quicker and more straightforward than Acoustic Mirror. The interface is simpler; everything seems to be right there in front of you. More importantly, the program loaded the files, once I’d selected the folder, so it’s merely a skip through presets rather than having to return to the browser for each IR file (in Acoustic Mirror you return again and again to the Echo Thief folder).
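Under all the different interfaces, a convolution reverb does one simple thing: it convolves the dry signal with the impulse response. Here is a toy sketch in Python (direct convolution for clarity; real plug-ins like Reflektor use FFT-based convolution, since the direct form is far too slow for multi-second IRs at audio sample rates):

```python
def convolve(dry, ir):
    """Direct convolution of a dry signal with an impulse response.
    Output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            # each input sample triggers a scaled copy of the whole IR
            out[i + j] += x * h
    return out

dry = [1.0, 0.5, 0.25]
ir = [1.0, 0.0, 0.3]   # direct sound plus a quieter echo two samples later
wet = convolve(dry, ir)
```

With a one-sample IR of [1.0] the signal passes through unchanged; every extra nonzero tap adds a scaled, delayed copy of the dry sound, which is all an echo, or a whole reverb tail, really is.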
Both Acoustic Mirror and Reflektor/Guitar Rig can be used in ACID Pro. I found both were also easy to use in Sonar X3, though Acoustic Mirror remains the clunkier of the two. I was playing around a little, just minutes ago, with Sonar. It’s a fairly easy-to-use program, not too far removed from ACID (I also have the advantage of having read The Power of Cakewalk Sonar (I think written for X2)). The explorer seemed incomplete in Sonar and I could find no way to access my samples, which are on the E-drive rather than the C-drive (that is, on a non-bootable storage drive that I’ve added rather than the basic hard drive that all the programs are on). But it was an easy drag and drop from the computer’s explorer and, ultimately, no problem to quickly set up an experimental track with a looped sample of my own devising. It seemed the simplest way to get a taste of any IR reverb, just keep the sample looping while switching reverbs. (Try a drum loop to get a sense of how quickly reverb turns your recording to sonic goo if it’s something with a really long tail like a cathedral or tunnel.)
The interesting thing about Sonar…so far I haven’t figured out how to use many of the instruments or effects that come with it. I suspect some of the effects should be straightforward (and they are, I just tried a few) but I couldn’t get anywhere with the supplied convolution reverb (Perfect Space). (The new non-numbered version of Sonar has a new convolution reverb (REmatrix Solo) which I have not tried but judging from the screen shots it looks like it might be better designed. (At the end of September 2015 I purchased the new Sonar. REmatrix is indeed there, though it took some poking and prodding to find it. It isn’t in the browser with other effects; there are two FX sections in Prochannel, one where you add effects from the browser and one where you right-click and choose from exclusive Cakewalk products. Anyway, it works fine for the supplied IRs but is very cumbersome if you want to add your own, such as the aforementioned Echo Thief responses.))
All this leads to the question: why would I waste my time on convolution reverbs since I hardly ever use reverb of any kind? Actually, I have been using Reflektor, very lightly, usually on vocal-only productions but also on something like “Final Words” to give a feeling that I’m in a room somewhere. Truth is, I am in a room when recording but with the microphone so close that almost no ambience is being picked up (I do not have a vocal booth to eliminate all echo but my voice still comes out quite dry without it). (There is a very good TED talk by David Byrne about how space affects music, what kind of music can be heard in what kind of space. Most of my arrangements are very cluttered and sound like shit with much—or any—reverb and the poems I read become indecipherable. Perhaps if I were a performer I would create less cluttered arrangements to suit the environment. Though it could be argued that that’s exactly what I’m already doing since you’re likely to hear me on headphones or nearfield speakers.)
Convolution reverbs tie in with my rather vague and half-hearted interest in field recording. And my general interest in place and how it affects how you feel. All of this is being pushed to the fore at the moment as I read Spaces Speak, Are You Listening?: Experiencing Aural Architecture by Barry Blesser and Linda-Ruth Salter. I think this is an area of experience too often neglected by both scholars and designers. After a couple of quick searches online I got the impression that it’s a newly unfolding field with little but academic papers available for the reading public. (Now that I’m about three-quarters through the book, as of October 4, 2015, I find that Blesser is dismissive of convolution reverbs. Or at least of proponents’ claims that they have faithfully captured the true reflective character of a space. He contends that reverberation is not static, that constant changes in temperature—heat ripples or shimmers, especially in a place full of people—will modulate the sound.)
I’ve been trying to trim my sonic canvas to something more recognizable as music, where you can effortlessly distinguish one sound source (or instrument) from another, which might make room for reverb. The idea of placing a composition in an odd location, such as a forest, is very appealing to me—the aural spaces most deeply ingrained in my psyche are small, cluttered domestic spaces and northern forests. I think I’ll keep searching for downloadable IR libraries (saw one site with beautiful photos of a Finnish forest, which I have to assume sounds a hell of a lot like a northern Minnesota forest, but couldn’t figure out how to download anything from them, paid or free).
(I want to point out that the tone of this post is a bit odd. It began as a letter that quickly became too impersonal to be sent as a letter.)
It would have to be at least two years, probably three, since Tape Op ran an interview with Bob Heil in their “Behind the Gear” column, an interview that made me want to get a Heil Sound PR 40 microphone. Actually, it was seven years ago (issue 67, September/October 2008). Yet it still felt like a wild impulse buy.
The package arrived yesterday, a Friday, but I was too tired to do more than open the big box to make sure all the little boxes were there and in good condition. Today I set it up and gave the microphone a test (as well as testing my Presonus BlueTube preamp, which is reputed not to have enough boost for this particular mic, with the PR 40).
The going price for the PR 40 is $327. I saw very few dealers offering it for less. My regular dealer, Sweetwater, does not carry it. My intention had been to buy it from Guitar Center, a company I often do business with, and have it shipped to the store for pickup. But why? BSW had a package deal for $369 that included a shock mount, pop filter fitted for this type of mic, and an XLR cable. It’s not obvious from the ad but it also comes with a desk-mountable boom (that’s at least $150 of extras for $42). And it would be delivered to my door. You can see these extra items in the photo (except for the cable—I won’t keep it wired up until I decide where the permanent placement will be…also, the supplied cable is a little long).
The instructions on the BSW canister suggest that the pop filter be at least an inch from the screen on the mic. I pulled it to almost three inches and am also trying to get farther back when I read, to minimize breath and mouth noises. Ordinarily I keep my mics in the case for safekeeping and cleanliness, it being a quick job to pull them out and set up. But this filter is a pain to get on and off, requiring the use of a screwdriver. Instead, I’m going to try leaving the whole thing set up on the boom (right now it’s visually intrusive, sitting in my left peripheral field, as any breeze coming in through the window ruffles the plastic bag I’ve enclosed it in).…The shock mount is wonderful, almost as good as suspension on a car. I’ve been recording for 19 years and it’s just now that I’ve gotten one? It took me almost ten years to get a mic preamp and I consider that something of an essential.
While testing the mic I had it on a floor-based mic stand with a boom, as I usually do. I couldn’t find a comfortable position. After the test, as in the above photo, I have it placed on the appropriate boom more or less in the position I would use it. This will allow me to read from the computer monitor, with the mic generally out of my field of vision, rather than having to read from paper (my eyesight is going to hell and I especially have trouble reading in fluorescent light). I do like the idea that it’s in position and always ready to go. Anything that cuts down on setup time is always good.
If you’re getting the impression that I just like posting pictures of it, you’re right. But this is the last one.
The diaphragm in the PR 40 is at the top or end of the mic rather than on the side, as it is in most microphones, so this is the position it will be in and my view of it when in use. (Heil Sound refer to this as “end fire”. I’ve also heard mics referred to as “end delivery” and “side delivery”. I’ve also seen “side address” and “end address”. I was going to say this only matters with a cardioid pattern, versus omni, but I’m not at all sure that’s a true statement.)
According to Heil Sound: “The PR 40, with its broad frequency response, is the ideal mic for bass drums and bass guitar. With its superb rear rejection the PR 40 is a must for broadcasters.” I sought it out because it’s their premiere broadcaster and podcaster mic. In reviews it really has been used on just about everything. It can take high SPL—that is, a very loud source—and has excellent off-axis rejection. I’m not that loud nor energetic enough to force myself to be loud, though I did get a scream of feedback because I’d left my computer speakers on when I said “boom!” in an attempt to get the preamp needle to move. But I definitely tested the off-axis rejection, intentionally, by leaving my windows open when recording and unintentionally by leaving the speakers on a couple of times. Without listening on headphones I did not hear the speakers doubling my voice (except for that one bit of feedback)—it was distracting to my ear while reading but seems not to have been picked up by the mic. From the window I had the constant accompaniment of a mourning dove and repeat visits from a blue jay. Possibly crickets. A lawn mower followed by a leaf blower, across the alley. The distant roar of jets at the airport (not the flyover problem you’d have in South Minneapolis but a more distant plane). Dogs barking. A neighbor taking out his trash. The window was behind the mic and I don’t think I captured any of these sounds. It did, though, pick up wind. There was only a light breeze but as it blew over the mesh of the mic it caused a rumble (I’m usually pretty careful about exhaling toward a mic, which is normally where the wind comes from). I don’t normally record with the window open but this was a test. I like the results.
The more interesting, challenging, and inconclusive part of the test was to get my input levels on a preamp and interface optimized. I had heard that this mic needs more boost than most home recordists’ preamps can deliver. Ordinarily when I record with a Røde NT-1 I have the gain on the Presonus Bluetube set at about 3 o’clock; much higher than that and I start getting very noticeable circuit noise (though that might all be because I’m using the tube, which I tend to do with my voice). I think I have the input gain on the M-Audio M-Track set about the same. Also, with both, if I run it hotter than that, on the rare occasion I do get loud, it overloads and distorts in ways I prefer it didn’t. I found I didn’t always get a consistent signal with the same settings. Was my reading position varying that much? The first time I tried reading the signal was weak, well under -6 dB, with very few pronounced peaks (the waveforms looked like something copied off an old cassette, if that’s a helpful image…kind of small and blocky shapes like charms on a bracelet). Then I cranked everything to maximum and it clipped and distorted. From there I kept turning it down, take after take, but it was still running hot. After going out for some errands I came back to the test and could not get it to run that hot again. I suppose it could be something about the circuitry failing, somewhere, but I’m inclined to attribute the difference to inconsistent positioning as I read.
What I was reading, aside from my chatter about recording conditions, was Edgar Allan Poe’s “To One in Paradise”. The screen shot is of maybe my tenth take and is trimmed down to just the poem. I had the preamp gain about two detents short of maximum, a little hotter than I normally do it, and the M-Audio gain at about 75%. Yet you can see in the screen shot that the signal was not all that strong. Hardly anything goes above -6 dB.
I will post four versions of that reading for your comparison. The original recording, in no way processed or edited except to cut just one take from a string of them. The second is with light compression from the maximizer module in iZotope’s Ozone 5 (threshold set at -6 dB and gain set to -0.5 dB). Not much changes but it makes the whole thing easier to hear and brings up the noise floor a little. In the third I cleaned up the circuit noise with Sony’s Noise Reduction software. (All of these programs should be used with care. Some add artifacts, as does the noise reduction. Compression can do all kinds of horrible things, both sucking the life out of your recording by eliminating dynamic range and pushing the noise of circuits, ambience, and mouth sounds to the fore. Reverb, of course, is a disease unto itself, and not just in the hands of beginners.) The next step was to add a little reverb, something I normally avoid. But in a bare bones reading it gives the speaker’s voice a sense of place. For this I used a convolution reverb (a great idea that I’ve barely explored). Initially I went to Sony’s Acoustic Mirror but didn’t find the right sound. Instead I fell back on Reflektor, which opens in Guitar Rig (once again, Native Instruments). Even so, I brought the room ambience to a pretty low level. In the final track I repeat the compression step, same settings. I don’t think I like it. Overall my voice is louder and easier to hear against any background sounds in your listening space. But the reverb and my mouth noises have also been boosted.
When processing my voice for a composition, to be heard over the other sounds, I would have done a few things differently. For instance, I would have boosted the frequencies around 1000 Hz (1 kHz) to make the words more audible. I probably would have used more compression, at least at that frequency range. And I would have cut the reverb way back or, more likely, left my voice dry.
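That 1 kHz boost can be sketched with the standard peaking filter from Robert Bristow-Johnson’s “Audio EQ Cookbook”. This is a generic illustration, not the settings of any plug-in I actually use; the 6 dB of gain and Q of 1 below are arbitrary example values:

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Coefficients for an RBJ-cookbook peaking EQ (a boost/cut bell around f0)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # normalize so the feedback side's leading coefficient is 1
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad_filter(samples, coeffs):
    """Direct-form-I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# a +6 dB bell at 1 kHz, 44.1 kHz sample rate
coeffs = peaking_biquad(44100, 1000, 6.0, 1.0)
```

A bell like this lifts the region where speech intelligibility lives while leaving the lows and the air at the top untouched, which is why it makes words cut through a mix.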
I suppose I should have purchased the mic seven years ago. But what would I have had to look forward to? This is almost certainly the last microphone I’ll ever buy.
(This test was delayed by several hours while I copied some cassettes of my partner’s father speaking in public. They were recorded at double speed. I don’t know what they were recorded on. Not clearly thinking it through, I dusted off my 4-track. That just doubled the speed again. But I found a time stretching feature in Sound Forge Pro 11 that had not been included in earlier versions (I’ve barely used any of my audio software in the past year and Sound Forge is one I’ve updated but barely touched)—double the time and bring down the pitch twelve semitones (an octave) and it sounds like Stan.…We got word late afternoon that he had died. It had been expected. We were already gathering images and recordings for the funeral.)
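(For the technically curious: the double-time, down-an-octave fix is equivalent to halving the playback sample rate, which doubles the duration and drops the pitch twelve semitones in one move, without touching the sample data. A minimal sketch using Python’s standard wave module; the file paths are placeholders:)

```python
import wave

def halve_speed(src_path, dst_path):
    """Undo a double-speed transfer by rewriting the WAV header with half
    the sample rate: playback takes twice as long and drops an octave."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    with wave.open(dst_path, "wb") as dst:
        # same audio data, half the declared frame rate
        dst.setparams(params._replace(framerate=params.framerate // 2))
        dst.writeframes(frames)
```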
To someone like me, with no training in audio engineering and no connections with people who do it for a living (well, okay, I know the guy who does live sound for The Prairie Home Companion but we seldom see each other…he’s the one who pointed me to 4-track portable studios back in 1996), a mastering engineer is something of a magician. The recording engineer, or tracking engineer, is the first step in the process. If they do their job well everyone else down the line has little to do except tweak. Then comes the mix engineer, who seems to have all the fun and glory (unless the goon squad from the record label is hovering over their shoulder). They make the pieces fit and add all the effects, generally making a song what it is when you hear it. The mastering engineer usually makes it sound even better. On a bad day all their client wants is for them to make the recording louder so the song will trounce the competition on the radio. Usually the mastering process requires the engineer to make all the songs fit together by matching the levels (for instance, having the vocal consistent from loud and quiet songs while keeping the songs loud and quiet…in a sense undoing the normalize function), finding some sonic common ground between songs, gluing them into a cohesive whole (that is, an album). And they might add a final sparkle, a last little something special, to make each instrument audible but not obtrusive. (All the links I’ve provided to distinguish between types of audio engineers are actually crap. I recommend subscribing to Tape Op. It’s free. Every issue has several interviews with recording engineers from all styles of music and all positions (some issues feature mastering engineers or film engineers or both historic and contemporary studio engineers—they’re really trying to explore all aspects of the field). They have had very little about poetry and music, except a sidebar with Eno, perhaps because there’s so little happening in this field. 
We need to change that.)
I’ll tell you right now I don’t know anything about mastering. I don’t have golden ears. But I have some pretty good software from iZotope and I know that my recordings can sound better after using Ozone. Or they can sound worse. Like you, I have some sense of what sounds good and what doesn’t (though we might disagree on the details).
I think I first encountered iZotope’s Ozone toward the end of 2005 or early 2006 in a demo version that came with Sony’s Sound Forge 7. It made no sense to me and I didn’t use it. Version 9 of Sound Forge came with a mastering bundle, a limited version of Ozone, and I think it was shortly after that that I bought Ozone 4. In 2010 I started to learn the software and what mastering meant. It was a slow and unsatisfying learning process.
It was around this time that I finally began to understand a little about the uses of compression. Compression is a tool I should have been using, sparingly, since the beginning (I’m not talking about file compression, or data compression, which is a whole different business). One of the key tools used in making tracks fit a song and making the song seem as loud as possible (hopefully without destroying all the dynamic range) is to use compression and/or limiting. The basic idea of both is that the loudest peaks of sound are squashed or limited, no longer overdriving the circuits, and the rest of the sound is then boosted because there’s more sonic space before the overload. I still use very little compression except on my voice, to make it more intelligible. (In the image below you’ll see the peaks. The idea is to bring those closer to the primary block of the sound wave so that you can boost the volume of the whole thing. Of course this will also boost the noise, which you can see clearly on the right just as the image exits the frame.)
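The squash-then-boost idea fits in a few lines. Here is a toy hard limiter in Python (real compressors use a threshold, ratio, and attack and release times rather than a hard clamp; this only shows the principle, and the 0.5 threshold is an arbitrary example value):

```python
def limit(samples, threshold=0.5):
    """Clamp peaks above the threshold, then apply make-up gain so the
    loudest remaining peak sits at full scale (samples in the -1..1 range)."""
    squashed = [max(-threshold, min(threshold, s)) for s in samples]
    peak = max(abs(s) for s in squashed) or 1.0
    gain = 1.0 / peak          # boost into the headroom the clamp freed up
    return [s * gain for s in squashed]

quiet_with_peaks = [0.1, 0.9, 0.1, -0.8, 0.2]
louder = limit(quiet_with_peaks)
```

Notice that the make-up gain multiplies everything, which is exactly why compression drags the noise floor up along with the quiet parts of the performance.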
In September 2012 I went through just about all my compositions to master them. In part this was a response to people on SoundCloud asking for the vocal to be clearer. I like what could be called a rock mix, like early Black Sabbath, where the vocal is mixed more evenly with the other instruments. Versus the pop mix in which all the other sounds are in the background while the main vocal is slapping you in the face, which was always the case with the music on the radio in the 1960s (and still seems to be true except that the drumming is often comparably loud if it’s dance music).
With my mastering I tried to do several things: make the vocal clearer and easier to understand, mainly with EQ and multiband compression, giving it a boost around 1000 Hz (1 kHz), which is where most of the verbal information is; make the bass sounds louder and centered, also with multiband dynamics; notch out some room for the voice by cutting back a little on the other sounds, but also by making them more audible on the left and right channels (the mid/side feature of Ozone’s units works wonders; your options are stereo or mid/side); give my recordings a little more of that competitive volume; and balance the levels so that if you played through all my recordings the volume of my voice wouldn’t seem to jump up on a quiet track (in effect making my whole oeuvre an album).
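The mid/side idea is simple arithmetic: the mid channel is what left and right share, the side channel is where they differ. A quick sketch of the standard encode/decode identities (this illustrates the concept, not Ozone’s actual processing):

```python
# Mid/side encode/decode: mid = (L+R)/2, side = (L-R)/2,
# and the inverse recovers the original left and right channels.

def to_mid_side(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def to_left_right(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

left, right = [0.5, 0.2], [0.1, 0.4]
mid, side = to_mid_side(left, right)
l2, r2 = to_left_right(mid, side)   # round-trips back to the original
```

Anything panned dead center, like a lead vocal, lives entirely in the mid channel, so you can EQ or compress the sides without touching it, which is why the feature is so useful for carving out room around a voice.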
All of this was done with Ozone 4. I’ve since gotten Ozone 5, but because the interface has changed I haven’t taken the time to relearn it (also, I haven’t had much need for it). iZotope is currently on version 6. The lower-priced version goes for $200-$250, which is reasonable for such excellent software. If you can’t afford a professional mastering engineer I’d certainly recommend Ozone.
When I first experimented with Ozone I tried using all the effects units: EQ, reverb, loudness (limiter), exciter, multiband dynamics (compression), and stereo imaging. The first thing to go was reverb; if I use a master reverb at all, it’s to make it seem all the sounds are in the same room (not a bad idea, since it’s really easy to overload the sonic space with dozens of different reverb effects, one for each track). The exciter was quick to go as a standard tool. Usually I use only a touch of EQ, most often to bring things down rather than up. I almost always use the limiter (loudness maximizer) to give my recordings at least a little boost; even so they still seem quiet compared to other people’s recordings (I used to hate putting Peter Gabriel songs on a mix tape because they would always drop in volume compared to the other records of the day). Stereo spread is another unit I use sparingly.
I probably should be using Ozone in Sound Forge; for some reason I find it easier to put it in the master bus of ACID Pro. So much of the software available these days is redundant. The primary DAWs especially, such as Pro Tools, Sonar, Studio One, and all the others where you do your primary recording and mixing, seem to be trying to become complete packages, from initial tracking to mastering. I’ll be checking out the possibilities within Sonar when I get back to recording.
Sonic Foundry’s ACID Pro—now owned by Sony—is a digital audio workstation (aka DAW). DAWs come in two primary flavors: audio editor and multitrack arranger. In the old days, as computer audio was just developing, there were three separate threads for composition programs: live tracking, MIDI arranging, and looping.
Pro Tools is the primary and most famous of the programs for live tracking, with a 4-track version released by Digidesign in 1991 after several years’ development as an audio editor. Back in the 1990s, when I first started working in the medium, most of the software for computer music was merely a MIDI sequencer (Logic, Cubase, Digital Performer, and Cakewalk are some of the better known), in which the sound modules were actually external (you could just as well have used a hardware sequencer such as an Akai MPC). Pro Tools was just about the only thing going for recording audio onto a computer, and it was insanely expensive, the software costing thousands of dollars plus the cost of hardware converters and an interface (analog tape was cheaper and better sounding). Hard drive space and RAM in those days were so limited and such precious commodities that it was ridiculously expensive and slow to make music on a computer. It wasn’t until the end of the decade that the more modestly priced MIDI programs began to effectively incorporate audio recording, but you still had to put out a lot of money to connect your recording gear to the computer.
Sonic Foundry made a big splash toward the end of 1998 with the release of ACID, which was specifically for looping audio files. This was made possible by stretching the file to fit the tempo of the composition without altering the pitch, although you could also alter the pitch in a more controlled way to fit the key. The process used far less computer memory because you were creating long compositions of multiple audio tracks made of very small files, usually just a couple of measures in length (you could take a file of a couple hundred kilobytes and repeat it to form a track of a five-minute song, adding only a few more KB, rather than creating a real-time recording of 50 megabytes or so, at a rough 10 MB per minute of CD-quality stereo). In theory the file’s sound quality was not altered or damaged as it was made to fit the key and tempo of your song. In practice, if you took a drum loop set for 250 BPM (beats per minute) and played it at 80 BPM it would be badly distorted, rather choppy, like the sound of scrubbed audio, because as the file stretched the samples would no longer blend so smoothly (just as when you slow down a film too much you start to see individual frames). The sound of taking it too far from the original key is harder to describe, but let it be known that it could make you queasy, whereas an overstretched tempo could be kind of artsy.
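The memory savings are easy to put in numbers. A back-of-the-envelope calculation, assuming CD-quality stereo (44,100 samples per second, 16 bits, two channels) and a made-up two-measure loop:

```python
# Back-of-the-envelope file-size math for looping (illustrative figures only).
# CD-quality stereo: 44,100 samples/sec x 2 bytes/sample x 2 channels.
BYTES_PER_SECOND = 44_100 * 2 * 2              # about 176 KB per second

def realtime_size_mb(minutes):
    """Size in megabytes of audio recorded in real time."""
    return BYTES_PER_SECOND * 60 * minutes / 1_000_000

# A two-measure loop in 4/4 at 120 BPM: 8 beats at 0.5 s each = 4 seconds.
loop_size_mb = BYTES_PER_SECOND * 4 / 1_000_000    # roughly 0.7 MB on disk

# Repeating that loop across a five-minute song adds almost nothing,
# since the host keeps referencing the same small file, versus:
five_minutes_mb = realtime_size_mb(5)              # about 53 MB in real time
```

On a late-1990s machine with a couple of gigabytes of disk and a fraction of that in RAM, that difference was the whole ballgame.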
(There are at least three definitions of “sample” commonly used in digital music, and I will be using two of them. One is the sample rate of any digital recording, which is what I was referring to above when describing a file stretched too far; it is analogous to a film’s frames per second. The other version I use here is a rather sloppy, nontechnical term for any small audio file, or a generic for almost any audio capture. I usually refer to a long recording of noodling with a real object, such as a stock pot, or a field capture, such as leaves rustling in the wind, as a recording, and then all the fragments that I chop that file into as samples. The third type of sample would be single notes at different degrees of performance (as someone would naturally play an instrument) assigned to a MIDI key in layers that change with the velocity of your playing (think of samples assigned in a Kontakt instrument, such as a piano, to be played back as close as possible to a performance on a real piano).)
As cool as ACID was you really couldn’t do much if you were just looping. Initially people were using fragments of other people’s songs (The Beastie Boys’ Paul’s Boutique is one of the most famous and notorious examples of the technique). Then the court cases came and it made sense to market royalty free loop collections. These are still popular, especially loop libraries of drumming, though about the time really interesting artists started making great collections the fad was beginning to die.
My introduction to ACID went nowhere. It would have been late 1999 or early 2000, either version 1.0 or 2.0, one of many illegally copied programs on a pair of CD-Rs a friend had given me. At that point in its development ACID really was nothing more than a looping program and I had no interest. By the time I seriously started to look into computer audio, late 2001 into 2002, Sonic Foundry had added real-time recording (I bought version 3.0). The joke here is that I did not use that feature for several years, preferring initially to continue tracking on my Roland VS-880, then transferring the recording to the computer via Sound Forge, and only then adding it to ACID as I would any other one shot. Even when I finally started tracking directly to the computer, it was to Sound Forge. This deprived me of the ability to speak in sync with the previously recorded tracks, as you would while overdubbing, though by the time I recorded my voice I usually had the rhythm of the piece internalized. In a sense it spared me having to compete with and, sort of, shout over the other sounds.
It’s hard to look back almost fifteen years and remember what your motives were and what you had in mind. The other DAWs were more expensive and seemed less appealing in what you could do with them. Good MIDI instruments were still external and good quality but inexpensive interfaces were not yet common. By 2002 almost all of them had merged audio and MIDI capabilities. But only ACID featured looping. I think I had a lot of ideas of creating my own loops of odd and ordinary (nonmusical) sounds—almost certainly the case. Because collaborations had not panned out and I could not afford to hire anyone to play on my recordings, I think I was also intrigued by the idea of commercial loop discs (I mean, how else would I get David Torn or Bill Laswell on one of my compositions).
To create a loop is pretty simple, especially with something like a drum sample. In an audio editor like Sound Forge (very well integrated with ACID) you start by trimming an audio file to the exact length you want, to create a full measure or whatever. Then you go into a special “acidize” menu to select how many beats there are and what key it’s in, if any (in recent versions of Sound Forge you select “options,” then “status format,” then “edit tempo,” or maybe it’s easier to go into “view,” then “metadata,” then “acid properties,” where if you select loop or beatmap you can set the root note as well as the tempo; this all seemed more straightforward before version 9 or 10). Once you’ve done that you just drop it into a track in ACID, dragging it across the timeline for as many repetitions as you need. It automatically repeats while fitting into the measures of the song. On playback it adjusts to the tempo of your composition, so that if you change the tempo of the piece it will still play in time. You could very easily create complex musical entities using nothing but track after track of loops and one shots. (An interesting aside: if you weren’t careful with the settings in Sound Forge you might accidentally create loops when you’d intended to use your file as a one shot, which plays only once without repeating. This could lead to some unexpected results when placing the audio file in ACID: adding a choppiness if the sample’s BPM was set really high and your creation had a slow tempo, or fitting it into the tempo when you thought it would be free of the beat (see that check box that says “play looped”?). Often I would keep these mistakes.)
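Under the hood, the fitting comes down to a ratio of tempos. A sketch of the arithmetic (my reconstruction of the idea, not Sony’s code): the host reads the loop’s beat count and native tempo from the acidize metadata, computes how long those beats should last at the song’s tempo, and stretches the audio by that ratio.

```python
def loop_fit(beats, native_bpm, song_bpm):
    """Return (native length, target length, stretch ratio), lengths in seconds."""
    native_len = beats * 60 / native_bpm   # duration as recorded
    target_len = beats * 60 / song_bpm     # duration at the song's tempo
    return native_len, target_len, target_len / native_len

# A one-bar (4-beat) drum loop recorded at 250 BPM, dropped into an 80 BPM song:
native, target, ratio = loop_fit(4, 250, 80)
# The ratio comes out to 3.125: the audio must be stretched to more than three
# times its length, which is where the choppy, scrubbed-audio sound comes from.
```

A ratio near 1.0 is inaudible; the farther it drifts from 1.0 in either direction, the more the stretching artifacts show.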
(In the ACID Pro screenshot you might notice that it is set to snap to grid. You might also notice that the first sample is set to loop at three beats; no matter what tempo the song is set at, it will always fall within three beats. You’ll also notice that it seamlessly repeats. The second sample is set to loop at two beats. The second and third samples are the same, but the third is the original one shot; notice that in real time it’s slightly longer than two beats.)
It wasn’t until the spring of 2005, with the release of version 5.0, that ACID started becoming a complete DAW that could compete with the others, when they added full implementation of MIDI. Version 4.0 had some rudimentary MIDI, but even version 5.0 was iffy, not necessarily working every time. But it also came with a bundle from Native Instruments (XPress Keyboards) that pointed me back to MIDI. I was quick to dabble but slow to embrace this return to MIDI. Version 6.0 again took it further (summer 2006), but it wasn’t until version 7.0 at the end of 2008 that the MIDI implementation started to work without a ton of glitches. (As an aside, if I had waited until the end of 2002 I could have gotten all of it—audio, MIDI, and loops—in a more complete and functional package from Cakewalk. It must have been with the release of the first version of Sonar that they first accommodated acidized loops.…It’s nearly impossible to track down much information on these old programs. Your best bet is product reviews, but even there, the further back in time you go the more you realize how many of the magazines were strictly print. I’ve had the best luck with Sound on Sound.)
I have never been comfortable working with commercial loops. It isn’t that I consider it too easy or cheating—I’m not a musician and don’t pretend to be one. One problem, for me, is that they’re too pretty, too polished, and sound too…musical. It’s too easy to make a bad Peter Gabriel track rather than something that sounds like me. Yet I’m still willing to use them.
Up until 2008, when my Windows XP computer died and I bought a Windows Vista 64-bit machine, ACID always worked well. As I mentioned, the MIDI was glitchy, but generally it was a stable program. ACID Pro 6.0 was not compatible with a 64-bit operating system. It was in July that I bought the new computer, and out of frustration I gave Sony until the end of the year to come out with an update. They came through with little time to spare that December. At first, when I had problems, even their tech support was claiming that version 7.0 was still not compatible with 64-bit (I got an apology from someone higher up about that one, I think in part because techboy was also very rude and snotty).
That was seven years ago. They’ve made small, tweakish updates since then (to accommodate Windows 8, for instance) but no new version has come out. This seemed ominous. So, last year (May? 2014) I finally made the switch to a more “professional” DAW, Cakewalk Sonar X3 Studio. I’ve had other things on my mind (like this memoir, now seven, almost eight months in the making) and have yet to really test out Sonar (which has kept me from updating to their new quasi-subscription plan). The drop in price on ACID Pro tells me I’ve done the right thing. While they keep coming out with new versions of Vegas and Sound Forge, which stay at their original prices, ACID Pro now costs $150 direct from Sony (I saw it for $100 at Walmart online). I think it was at least $300 when I first bought it in 2002, maybe running as high as $600. At $100-$150 it’s in the price range of a budget, beginner’s DAW (well, almost). I still say it’s an excellent program for any kind of music making, but I’d hesitate to buy it because it looks like it’s on its way out.
The DAW market has always been competitive. Now that almost all of them do the same thing, they’re either going back to doing one thing really well, dropping dead, lowering their price, adapting to new technology (such as tablets), or some combination of the above.
I’ve had a lot of great experiences on ACID. I don’t know that everyone can say that. But it’s time to move on.
Sound Forge is an audio editor. In its original and most rudimentary form, that meant cutting and pasting sections of an audio file. Basically it was what you’d do with a razor blade and some tape to magnetic audio tape in earlier years. It was the software you’d use to shorten a song or combine two songs. You’d use it for creating fades and crossfades, or adjusting the volume of a recording. From there the concept and possibilities have grown to include time stretching or condensing, adding pretty much all the effects available to the engineer or musician (EQ, delay, reverb, distortion, filters, et cetera), as well as recording, file conversion, mastering, and publication.
Without Sony’s Sound Forge I might not have started making audio on a computer. I’m not saying that to plug a product—there are plenty of alternatives, some of them free. For the type of work I do, where I’m often mangling ordinary sounds, an audio editor is essential. Both in terms of my process and the history of my development in the audio medium it is the first program I use.
In 1999, when I got my first computer, someone I knew, who didn’t believe in paying for much of anything, gave me a couple of CD-Rs loaded with various programs, both audio and graphic. Most of it was a waste. But it was here that I got a taste for working in Photoshop, QuarkXPress, and Sound Forge (I think it was version 4.5), all very expensive, heavy-duty programs. In those days Sound Forge was a product of Sonic Foundry in Madison, Wisconsin (I remember driving past their building once, though I have no recollection of exactly where it was, but it was cool that it was part of my physical and social world). Sony had purchased them a couple of years before I went legal, which I did toward the end of 2005 or early 2006 with version 7.
Initially I only used Sound Forge for digitizing my LPs. If my version came with Noise Reduction, I couldn’t get it installed, so I relied on EQ to minimize the crackle of old vinyl. Some of the big pops could be removed by zooming in so tightly that the offending noise took up the whole screen, then selecting it and reducing it to silence. To maintain the pacing of the LP I would do something similar with the long silences between tracks, selecting and silencing most of each gap while fading in or out at the ends of the songs.
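The zoom-select-silence trick amounts to zeroing a handful of samples, ideally with a tiny fade on either side so the edit itself doesn’t click. A toy sketch of that idea (the fade length is an arbitrary value of mine, and this is not how Sound Forge implements it internally):

```python
# De-click by silencing: zero the pop, then apply short linear fades
# on each side so the jump to silence doesn't produce its own click.

def silence_region(samples, start, end, fade=4):
    """Zero samples[start:end], fading the surrounding audio in and out."""
    out = list(samples)
    for i in range(start, end):
        out[i] = 0.0
    for j in range(fade):
        if start - fade + j >= 0:
            out[start - fade + j] *= 1 - (j + 1) / fade   # fade out going in
        if end + j < len(out):
            out[end + j] *= (j + 1) / fade                # fade back in after
    return out

# A steady signal with an imagined pop at samples 8-11:
cleaned = silence_region([1.0] * 20, 8, 12)
```

In practice the fade would be a few milliseconds of audio, not four samples, but the shape of the edit is the same.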
What I really use Sound Forge for, and need it for, is transforming ordinary sounds. Professionally the field is known as sound design, which sort of overlaps with foley, and ranges from people making unique sounds for film and radio (think of the sound effects in Star Wars or Jurassic Park) to people designing unique soundbanks for digital musical instruments. Because of the flexibility of digital audio it has become a very common and almost invisible art at times. In my recordings I flaunt the oddness of my sounds.
Initially my samples were just the old 28-second file banks from my Roland MS-1 sampler, which I’d used as a portable field recorder. One of my favorite sounds, used in its pure form on several early recordings (primarily on “The Apostle,” which is what the whole press sample set was created for in the spring of 1996, from objects in a silkscreen pressroom where I worked at the time), is a sheet of cover-weight paper being shaken. In 2002 or earlier I began playing with that in Sound Forge, I think just stretching, reversing, and splicing it, until I had some enjoyable sci-fi sounds (they remind me of the electrical crackles in the laboratories of films like Frankenstein).
In the playlist below I have the original sample of shaken paper. I assume “r” stands for reverse and “s” for stretch. Not a clue what “x” means (it’s a composite of many variations used for the intro) or why I went to another sample number (17, which became a thumping rhythm). It was all put to use in my first computer composition, “Swamp Messiah”. At that time I did not understand looping nor how to create loops for ACID in Sound Forge, so the rhythm sample was manually laid end to end on ACID’s timeline.
Swamp Messiah, draft 3.5, September 27, 2012:
Another example of mutilation is a recording I made on a cheap cassette deck of our firstborn, aged three months, in 1991, a little baby yawp that I stretched into a wailing siren for “Like Unicorns.” The initial recording of our babbling child was several minutes long. I chopped it into little pieces, including some that are just tape hiss, and played with many of them until the results were very un-babyish.
Like Unicorns, draft 1.4, September 15, 2012:
Maybe ten years ago the squeak of our kitchen door was getting on my nerves, especially at night as our adolescent firstborn would come and go with the kids downstairs while I was trying to sleep. Before oiling it I swung the door back and forth while recording the resulting sounds. One of the samples was then stretched and/or pitch shifted until it almost sounded like a trombone. I used that on “Miasma”.
Miasma, draft 1.1, September 15, 2012:
The last example I have is of an aluminum bar being rubbed. I have no idea what I did to create the deformed sample. At the time I concocted “Cortex Failing, Frontal Lobe Already Down” (we haven’t gotten to this one yet in the overall narrative) I was trying to create compositions with one sample set or one synth (here I used two synth patches, my voice, and that one sample set). In this case I used eighteen samples. The original recording of my tapping, banging, and rubbing the bar was broken down into ninety-three raw samples. Many of those were manipulated several times into one shots and loops (for a total of one hundred and fifty-two files).
Cortex Failing, Frontal Lobe Already Down, draft 1.1, September 10, 2012:
(while writing and proofing this page I kept running into a problem with the playback of the above playlist being played in the “Cortex Failing” player—refreshing the page will allow “Cortex Failing” to play)
The other time I use Sound Forge is when mastering a composition with iZotope’s Ozone. Sometimes I do this in ACID Pro because it’s easy to monitor and everything remains loose and alterable until I render the file to WAV format. In ACID I just put Ozone into the master bus. In Sound Forge it’s more like using a filter in Photoshop, where the effect is selected and previewed but the file is changed when you click OK (obviously you then use “save as”).
If you’re just recording your voice or a stereo mix of a group performance, rather than creating multi-track compositions, Sound Forge is probably adequate for all your recording and editing needs.
As usual, my use of the software is only a small portion of what it’s capable of. This seems to be true for most users of software, not just some quirk or failing of my own. You could feel you’re spending a lot of money for a heap of features you’ll never use. I consider that the wrong attitude (if you want to try cheap, most audio software now comes in a budget version, the equivalent of Photoshop Elements, the sort of thing you give your teenager for Christmas).
Most of us do not need to periodically update our audio editors. Their functioning hasn’t changed much in the past decade; there will be tweaks to the workflow, which are unimportant to the average user. One of the two reasons I’ve found for updating is when they’ve incorporated some major change to adapt to how the market works, whether it’s a change of process or a new standard. The other is to accommodate a new operating system. Whenever I’ve gotten a new computer I’ve usually updated my software. (With Sound Forge I’ve purchased versions 7, 9, 10, and 11. Version 10 was a waste of money because when I finally got a new computer in 2013 it was running Windows 8 rather than Windows 7, and version 10 wasn’t entirely compatible with Windows 8: it would run as a demo and not let you save your file unless you chose “run as administrator” when opening the program. What made me update in the first place was a new feature, an extra window for viewing the whole file while zoomed in for a close look in the usual window.)
As I mentioned at the beginning, I’m not plugging any particular product. I’ve always used Sound Forge; I like it and see no reason to change (Sony has continued to develop the program, unlike ACID Pro, whose neglect has made me skittish enough to switch to Cakewalk’s Sonar). Because I subscribe to Adobe’s Creative Cloud I could use Audition at no extra charge. It’s a good program but it would impose a learning curve. Or I could download Audacity, which is totally free. (The few times I’ve used it I didn’t like it, having been spoiled by better software. But it could be all you need.)
(Most professional audio editors will open m4a files but not save them, though they will save to Apple’s uncompressed format, AIF. Sound Forge is finally available for Mac, so it might save to the compressed format on a Mac.)