Hello retouchers! Today’s tip is about grayscale and how we can use it not only to check our work, but to calibrate our eyes and get better results overall.
Let’s begin with how to view our image in grayscale using, as usual, Photoshop. We could create a new fill layer > Solid Color, choose white or black as our colour, and change the blend mode to Color. However, it’s not completely accurate. I’ve seen people actually convert the image mode to Grayscale, but that can be problematic with adjustment layers. Like the black or white layer above everything, the Black & White adjustment layer isn’t accurate either. There are a few different ways, but personally I find this one best: View > Proof Setup > Custom > Device to Simulate, choose Gray Gamma 2.2 and click OK. Boom! There we go, now we’re looking at our image in grayscale. Don’t worry, to go back to RGB, just use Ctrl + Y (Cmd + Y on Mac) to toggle between the two.
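If you’re curious why those quick desaturation tricks can disagree with a proper gray proof, here’s a minimal Python sketch. It’s my own illustration, not Photoshop’s exact pipeline: it compares a naive channel average with a gamma-aware, Rec. 709-weighted luminance conversion.

```python
# Illustrative sketch (my own math, not Photoshop's exact pipeline):
# converting a single 0-255 RGB pixel to gray. A quick desaturation
# trick roughly averages the channels, while a proof like Gray Gamma 2.2
# works from weighted, gamma-aware luminance, so the two can disagree.

def naive_gray(r, g, b):
    # plain channel average
    return round((r + g + b) / 3)

def luminance_gray(r, g, b, gamma=2.2):
    # linearize each channel, weight with Rec. 709 coefficients,
    # then re-apply the gamma curve
    rl, gl, bl = ((c / 255) ** gamma for c in (r, g, b))
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    return round(255 * y ** (1 / gamma))

# Pure green: the average calls it a middling gray, while the
# luminance-based conversion correctly sees it as very bright.
print(naive_gray(0, 255, 0))      # 85
print(luminance_gray(0, 255, 0))  # 219
```

The takeaway: the conversion method changes which tones look light or dark, which is exactly why the proofing route is worth the extra clicks.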
So, we’ve heard about this thing called black & white photography and maybe even fooled around with its trendy vintage look, but black & white photography is more than just black and white: it’s riddled with radical contrast, grain, exposure variations and texture, not to mention historical details (i.e. 1800s b&w versus 1900s b&w). Grayscale, instead, is simply shades of gray. To understand the difference between black & white and grayscale, let’s call black & white an effect, and grayscale one of our most important retouching tools, one that ironically isn’t even a Tool in Photoshop.
As most of you regulars here know, I come from an audio background. Though they’re completely different sectors, audio, video and still-image production are alike in so many ways. Grayscale to a retoucher is like mono to a sound engineer. When we have all of our tracks laid down and we’re in the mixing stage, pressing the Mono button to check the state of our mix is not only good practice, but fundamental. I’ll keep it very simple because no one’s here to learn correct audio terms and lingo.
Let’s say we have two speakers, one on the left and one on the right, one meter apart. Today we have two synth keyboards and we’re just using piano sounds. So, first we play a note on one keyboard. The sound we hear has three simple characteristics: amplitude (volume), phase (timing and length of the note) and frequency (the note played). If we just play that note, we’ll hear it in mono. Basically, we’ll hear it in the imaginary center of the space between the speakers (as if we had a single speaker in front of us). In fact, it sounds like we’re not taking advantage of having two speakers.

Now, we want to hear our sounds nice and wide in stereo with our two speakers, because we wanted to one-up our neighbour who uses a gramophone, and in mono, the guy is laughing at us for spending money on two speakers. People presume that by playing both keyboards at the same time, we’ll hear the sounds in stereo, but it’s not that simple. If we play the same note, at the same exact time and at the same volume on both keyboards, we’re going to get angry because we’re still going to hear the same sound in the imaginary center of the speakers, just a bit louder, aka “big mono”.

If we want those two sounds to separate between left and right, we need to change at least one of the three characteristics of one of the sounds. So, maybe we’ll play the same note at the same volume, but we’ll play one keyboard a split second after the other. Wow! Now we hear a sound on the left and one on the right, and nothing in the center. It’s futuristically wide! Sure, it’s basically the same sound, but wide. The more we change the characteristics of each sound, the more our stereo image (stereo sound) widens and becomes more complex. Cool! Wait, not so cool. This is just a very simple example and explanation of mono vs. stereo. Mixing a song is that example on steroids, to the 10th power, and it only gets worse the more artistic you want to be in your mix.
A good engineer makes sure that a song (a mix) sounds great in mono (on a single speaker). Why? Well, reason number one, and probably the most important: if you can distinguish sounds, feel the groove, be pulled into the ambience… basically enjoy every aspect of a song with just one speaker, two speakers are only going to add to an already great listening experience. Another reason is to catch complex issues such as frequency cancellation and other specific details that can lower the quality of a mix. For instance, the trick of delaying one of the notes can cause phase cancellation that only becomes audible when the channels are summed, say on a mono Bluetooth speaker.

Most people tend to subscribe to stereo sound because they’re convinced that two is better than one. But the harsh reality is that there are very few times in our lives when we actually take advantage of stereo sound. One instance is when using headphones. Another is when we’re sitting or standing in the sweet spot. Now, I’m not much of a purist, but to make my point: the sweet spot is the apex of an equilateral triangle. Take the scenario we used before with the keyboards. If our two speakers are a meter apart, we need to sit one meter away from each speaker, forming an equilateral triangle, in order to listen in stereo correctly. If we’re too far back, the stereo image is too narrow; too close, and we may as well just put headphones on; too far left, and we’re only hearing the left side, and so on.

When we listen to music in a car… sorry, but our expensive speaker setup, along with balance and fader adjustments, is almost worthless, because even though the setup is symmetric, you’re only sitting on one side of the car. Moreover, since being too far from the speakers narrows the stereo image, unless you’re going blind by sitting a meter away from your 50″ TV, you’re effectively just listening to the sound on your TV in mono.
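For the curious, that cancellation problem can be shown in a few lines of Python. This is a toy illustration with made-up numbers, not a mixing tool: the same tone on both channels, one copy delayed by half a cycle, is fine on each side alone but nearly vanishes when summed to mono.

```python
import math

# Toy illustration of phase cancellation: the same 1 kHz tone on the
# left and right channels, with the right copy delayed by half a cycle
# (0.5 ms). Each side alone sounds fine, but summed to mono the two
# copies cancel almost completely.

SR = 48_000                 # sample rate in Hz
FREQ = 1_000                # tone frequency in Hz
delay = SR // (2 * FREQ)    # half-period delay in samples (24)

n = SR // 10                # 100 ms of audio
left = [math.sin(2 * math.pi * FREQ * i / SR) for i in range(n)]
right = [0.0] * delay + left[:-delay]   # the delayed copy

mono = [(l + r) / 2 for l, r in zip(left, right)]

peak = lambda sig: max(abs(s) for s in sig)
print(f"left peak: {peak(left):.3f}")           # ~1.0
print(f"mono peak: {peak(mono[delay:]):.3f}")   # ~0.0 after the onset
```

Real mixes are messier, of course, but this is exactly the kind of thing the Mono button catches, and it’s the same spirit in which grayscale catches contrast problems.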
So, suddenly, the old-fashioned concept of mono has become more relevant than we thought, and we should make sure our mix sounds great in mono because, realistically speaking, that’s how most people are listening to music. I’m always bewildered when I see an EDM producer working so hard to make a track sound wide, panning things left and right and all over the place… the tracks are played primarily in clubs, and clubs use mono…
Back to using our eyes! So why grayscale? Well, let’s go default mode with RGB; sometimes we encounter difficulties with contrast. If you can fix all your contrast issues using colours… well, congratulations, you can only be defeated by Kryptonite. As for us mere humans, we can’t base everything off colour because, I’m sorry to say, not all screens display colour and contrast equally well. Colours can be deceiving, and the point is this: if we can face obstacles using just two of the three colours in RGB at their simplest, it will only get worse if we’re not careful as our workflow becomes more complex. Blue seems to have its own issues in life, so things don’t look so great if we’re basing the quality of our image solely on colour.
Luminosity, however, is nature’s response to the never-ending and mind-numbing number of colour combinations. An 8-bit RGB image can produce 16,777,216 colours (256 × 256 × 256), compared to just 256 shades of gray (0 = black to 255 = white). Personally, if I need to troubleshoot contrast issues in an image, I’d rather deal with 256 possible values than 16+ million.
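If you want to sanity-check those numbers, the arithmetic is simple:

```python
# Sanity-checking the numbers: 8 bits per channel means 256 levels.
levels = 2 ** 8            # 256 values per channel
rgb = levels ** 3          # every red/green/blue combination
gray = levels              # shades of gray in an 8-bit image
print(rgb)                 # 16777216
print(rgb // gray)         # 65536 -- grayscale has 65,536x fewer outcomes
```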
I find that a lot of advice leans more towards colour correction than towards luminosity and its values. For the sake of this article, we’re going to throw exposure, aperture and ISO into the same pot and call it all Luminosity. The only reason we’re working on an image in the first place is because a hole opened up on the front of a camera and LIGHT hit a sensor. Grayscale will help you focus on luminosity issues. There are a lot of tutorials on how to make colours pop, but few that teach how to really fix the contrast between the shades of light in your image. You can use Masks, Apply Image, Adjustment Layers and so on, but here we’re talking about the best way to gauge the use of any one of those tools. By the way, if you’re using an adjustment layer to work on luminosity, please remember to change its blend mode from Normal to Luminosity so it doesn’t shift your colours.
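As a rough illustration of what that blend-mode change protects, here’s a Python sketch that pushes contrast on lightness only. I’m using the standard library’s colorsys HLS model as a stand-in; Photoshop’s actual Luminosity blend math is different, so treat this as the idea, not the implementation.

```python
import colorsys

# Rough sketch of luminosity-only editing: push contrast on the
# lightness channel while leaving hue and saturation untouched.
# (colorsys HLS is a stand-in; Photoshop's blend math differs.)

def contrast_lightness(r, g, b, amount=1.5):
    """Steepen contrast around mid-gray on lightness only (0-255 channels)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    l = 0.5 + (l - 0.5) * amount      # push lightness away from 0.5
    l = min(1.0, max(0.0, l))         # clamp to the legal range
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

# A mid-dark red gets darker (more contrast) but keeps the same hue:
print(contrast_lightness(120, 60, 60))
# Mid-gray stays put, since it sits at the pivot of the curve:
print(contrast_lightness(128, 128, 128))  # (128, 128, 128)
```

The point of the sketch is the separation: the tonal move happens entirely on lightness, which is what the Luminosity blend mode gives you for free.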
In music, if we want to prioritize an instrument, like the vocals, and make it feel like it’s in front of another instrument, we might cut some of the high frequencies on the intended background instrument, making the voice feel more present or prominent. So, if we have an image of the night sky and all the stars have basically the same luminosity because we used a good long exposure, how do you suppose we might “prioritize” a constellation? Well, we won’t touch the constellation at all; we’ll just lower the luminosity of the stars that don’t make it up. You’ll notice that we just fixed an entire image without even thinking about colour; granted, it’s a simple example, but you get the point. So, in the future, before diving into a sea of colours and spending precious time trying to use them to bring an image to life, check in grayscale and see if the contrast levels are what’s really boring you.
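The logic of that star example fits in a few lines. Everything here is made up for illustration, the `stars` dictionary and `prioritize` helper included, but the move is the real one: dim everything outside the mask and leave the subject alone.

```python
# Purely illustrative: `stars` maps (x, y) positions to gray values and
# `prioritize` is a made-up helper. Dim everything outside the mask;
# leave the masked (constellation) pixels untouched.

def prioritize(pixels, mask, cut=0.6):
    """Scale luminosity down outside `mask`; masked pixels are untouched."""
    return {p: (v if p in mask else round(v * cut)) for p, v in pixels.items()}

stars = {(0, 0): 200, (1, 0): 210, (2, 0): 205, (3, 0): 198}
constellation = {(1, 0), (2, 0)}     # the stars we want to pop
print(prioritize(stars, constellation))
# the constellation keeps its values; everything else drops to ~60%
```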
Colour is important; it’s nice, and it can completely change the tone of an image. However, we’re human beings, and our senses weren’t really made for hours of extensive use. What I’m about to say happens to everybody who uses their senses to do anything: our senses will deceive us throughout a retouching session, in particular our vision. Staring at screens in badly lit rooms for hours is hard enough on the eyes, but here we’re simply talking about how long exposure to colours affects our retouching. If we always eat sugary or salty foods, we start to lose the ability to distinguish certain flavours from others. But if we go two weeks eating only boiled potatoes, our taste buds will reset and, believe me, half a small spoon of sugar in our coffee will make us think it’s Christmas!

So, after I’ve done all my luminosity work (like dodging and burning, which is my second phase of retouching, after patching and such), I move on to colour issues like skin tones or colour swaps. I don’t use a stopwatch, but after every step in my colour workflow, I turn on grayscale for a minute to keep my eyes from getting used to the saturation in the image. As I go ahead with my retouching, my eyes stay fresh and I can more accurately distinguish the various tones, which saves me loads of time in finding key tones, and I usually don’t have to dial back my saturation or vibrance at the end. Sometimes we just get trigger happy with colours and saturation… it happens. But sometimes we think we’ve been conservative with our choice of colours and amount of saturation, then we come back the next day, take a look, and wonder what we were smoking when we retouched that image. Unfortunately, we don’t always have the luxury of next-day deadlines, so we have to learn to be quick and concise. This is where grayscale helps us.
By making a habit of checking our work routinely throughout a session, we can rest assured that our image will look fresh. At worst, the client will question your choice of colours, but not your ability to colour.
In my experience, if an image didn’t catch my attention in grayscale, there wasn’t a colour that could fix it. But every image that has caught my attention in grayscale needed very little colour work at all.
As always, remember, if you’re reading this, you’re asking the right questions!