Photoshop Tutorial: Adding Detail, Texture, Colour and Lens Flares

This won’t exactly be a full-fledged Photoshop tutorial; I just wanted to show a photo I edited in Photoshop, compare it to the original photo I had taken, and describe the general processes and tools that I used in Photoshop to get the various effects. However, to understand this tutorial you need at least a basic background in Photoshop. If you have just installed Photoshop and want to try out what it can do, this is probably not the tutorial for you, because it will use some terms that I will casually throw around and not explain in detail. I also did not take screenshots while making this edit, so this won’t be a full walkthrough; I shall only describe the procedures in words.

Anyway, even though I am an amateur at both photography and editing, I hope this example will still serve to show the power of digital editing, and of software like Photoshop in particular, in bringing out detail, vibrance and richness in photos.

So here are the two images:

[Images: “Touch of the Sun” (edited) and IMG_3702 (original)]

On the right is the original photo, of a building in Brussels, Belgium. On the left is the result after some hours of Photoshop by an amateur shopper such as me. You can click the images to see larger versions.

The major differences in the two photos are:

  • The edited image is much more detailed, especially the sky and its reflection. The texture of the clouds is now much more prominent and rich. Little of this texture is visible in the original photo, and it might even be hard for the uninitiated to believe that this texture information was at all contained in the original photo. But it was, and such information always is, and with some practice it can very easily and quickly be recovered and enhanced in Photoshop for a much richer look. The human eye gorges on detail. Give it as much as you can.
  • The image is much more colourful. The sky is much more vibrant, the clouds have a warm tinge, and the building itself has exploded into green and blue, very little of which is visible in the original photo.
  • The sun shining off the glass panels of the building is much more prominent.
  • Overall, a semi-realistic, computer-game style vibrance and brightness has been added to the photo, and the relative drabness of the original photo is completely gone.

The question now is, how were these results achieved?

I said at the outset that this is not a full-fledged tutorial, because step-by-step instructions would be too arduous for me to write and probably pretty fruitless to follow blindly. A lot of editing comes from hunches, intuition and experimentation (which grows the intuition), so I would not encourage anyone to do exactly what I did. However, I can outline the general procedures I used.

  • To add richness and detail

Make layer copies of your image and change their Image > Adjustments > Levels. You will notice that tweaking the levels makes detail and texture emerge at different exposure levels: a certain tweak of the sliders will make detail emerge from the overexposed parts such as the clouds, at the cost of completely ruining and blackening the underexposed parts, while another will brighten and add visibility to the underexposed parts at the cost of whitewashing the overexposed parts. The trick is not to go with any one of these settings, but to employ them on different regions in separate layer copies, then merge them using layer masks. Keep comparing your result with the original photo continuously to see which areas could be given more detail. Once you spot such a region, make a layer, adjust levels until that part has detail, then use layer masks to remove all but that part (use as soft a brush as possible, because the final output must appear to be a seamless, single image). Yes, Photoshop is a long, arduous and highly manual process. One-or-two-button effects like those of Picasa and Instagram give pretty cheap-shot results, and that kind of shortcut will not usually work in Photoshop.
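
The layer-mask blending described above can be sketched outside Photoshop as well. This is only a rough numpy illustration of the idea of merging two differently adjusted copies through a soft mask; the file names and the crude power-law tweaks are placeholders, not what Photoshop’s Levels actually does.

# Rough sketch: blend two differently adjusted copies of an image through
# a soft mask, mimicking the layer-mask merge. 'original.jpg' is a
# placeholder file name; the power-law tweaks stand in for Levels.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('original.jpg')).astype(np.float64) / 255.0

for_highlights = img ** 1.8    # darkens: pulls texture out of bright areas like clouds
for_shadows    = img ** 0.55   # brightens: opens up dark areas, washing out the bright ones

# Soft mask from local luminance: bright pixels take the darkened copy,
# dark pixels take the brightened copy; the ramp acts like a soft brush.
luma = img @ np.array([0.299, 0.587, 0.114])
mask = np.clip((luma - 0.3) / 0.4, 0.0, 1.0)[..., None]

blended = mask * for_highlights + (1.0 - mask) * for_shadows
Image.fromarray((np.clip(blended, 0, 1) * 255).astype(np.uint8)).save('blended.jpg')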

Anyway, other adjustments you could try alongside Levels are Exposure, Shadows/Highlights, etc. Basically, I would encourage you to fiddle around with all the options under Image > Adjustments for a few days to get an idea of which tool is capable of what effect.

  • To add colour, warmth and vibrance

For these the primary tools are Image > Adjustments > Photo Filter, Color Balance, Hue/Saturation and Vibrance. Photoshop didn’t have the Vibrance tool earlier, and using the Hue/Saturation tool to change saturation used to leave ugly bright grains in the image, which looked very unprofessional. Vibrance now does the job more subtly and in a slightly different way, so you can use Hue/Saturation only to change the hue if required, not the saturation; keep in mind that it is a crude tool which, used beyond a measured degree, will introduce patches of coloured grain in your photo. What I think Vibrance concentrates on is increasing the colour diversity of the image rather than linearly increasing the saturation of every colour in it. So if you have a bit of blue in a lot of green, Vibrance will enhance that blue a lot more. When you are using the Vibrance tool, use it in conjunction with the saturation slider (usually in opposite directions) to get your desired effect.
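
Photoshop does not publish the Vibrance algorithm, but the difference can be sketched roughly in numpy: a plain saturation boost multiplies every pixel’s saturation by the same factor, while a vibrance-like boost pushes muted colours harder than already vivid ones. The file names and the exact formula here are only illustrative assumptions.

# Rough sketch (not Photoshop's actual algorithm): plain saturation boost
# vs. a vibrance-like boost that favours the less saturated colours.
# 'photo.jpg' is a placeholder file name.
import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

rgb = np.asarray(Image.open('photo.jpg')).astype(np.float64) / 255.0
hsv = rgb_to_hsv(rgb)
h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

s_saturated = np.clip(s * 1.4, 0, 1)   # plain saturation: same multiplier everywhere

# Vibrance-like: the boost factor 1 + 0.4*(1 - s) is largest for muted
# colours and fades to nothing for colours that are already saturated.
s_vibrant = np.clip(s * (1.0 + 0.4 * (1.0 - s)), 0, 1)

out = hsv_to_rgb(np.stack([h, s_vibrant, v], axis=-1))
Image.fromarray((out * 255).astype(np.uint8)).save('vibrant.jpg')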

Use these colour enhancement tools with great caution, though. There’s a thin line between rich and colourful and a saturated comic disaster.

Again, use the same trick here that I mentioned for adding detail: separate the image into different regions via layer masks and apply vibrance to each region as suits it best.

For warmth, when I need it, I usually use the third warm filter in the Photo Filter dropdown list. I use it on clouds, but on most occasions also on the whole image, because most of the time a bit of warmth makes the image more attractive. Again, this depends on your image. If you have a photo of a beach with white sand and cool blue water, you probably want to preserve the blues. Tweaking the colour temperature affects all the colours of your image, and it effectively decreases your colour diversity. (Think of it as putting a filter of a certain colour on top of your image: every colour then acquires a tint or shade of that colour, and the diversity between them is reduced.) Many photos acquire their ‘wow’ factor from contrast and diversity in colour, so make sure you are not ruining that rare component in your image for the sake of a warm and fuzzy feeling.
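
The ‘filter of a certain colour on top of your image’ picture can also be written down directly. This is a crude numpy sketch, not the Photo Filter tool itself; the orange filter colour, the strength and the file name are arbitrary choices made for illustration.

# Crude sketch of a warming photo filter: partially blend each pixel with
# a version of itself tinted by an orange 'filter colour'. Notice how the
# channels get pulled towards one hue, reducing colour diversity.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('photo.jpg')).astype(np.float64) / 255.0   # placeholder file
warm = np.array([1.0, 0.75, 0.45])   # an orange-ish filter colour (arbitrary)
strength = 0.15                      # how strongly the filter is applied

warmed = (1.0 - strength) * img + strength * (warm * img)
Image.fromarray((np.clip(warmed, 0, 1) * 255).astype(np.uint8)).save('warm.jpg')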

I usually use photo filters on separate regions of my image and not on the whole thing (for example, mostly on the clouds here), so as not to lose this diversity. I make layer copies as I said, apply photo filters, then merge with layer masks. However, if different regions of your photo end up with very different colour temperatures, it might look decidedly unrealistic and just plain wrong. Human beings are very good at figuring out what effect a certain type of lighting in the sky should have on objects in the image, and even a slight deviation will make the photo fall apart. It is important, therefore, to do this differential colour temperature tweak cautiously and with due measure, and to always know where to draw the line. The merging of regions with different colour temperatures should be smooth and seamless, like the clouds and the sky in this photo. The photo filter adjustment that gives the clouds that warm glow completely botches the appearance of the sky (expected, since it’s all cool blue), so I had to keep those separate and merge them with layer masks using a soft brush.

Remember: a slight departure from reality usually looks good, but only up to the point where you cannot put your finger on what exactly is unreal. When your effects are so loud that the editing you have done becomes obvious, you have failed.

  • To add that glowing look

If you look closely, you may notice that parts of the image have acquired a glowing look, which contributes significantly to the final semi-realistic, sci-fi sort of perception I was heading for. For this, you copy your image into a new layer on top, set its blend mode to Overlay or Soft Light or something like that (fiddle around with this), then apply Filter > Blur > Gaussian Blur to it (you can also use other blurs; I just find this one fast and good enough for my purpose).
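
As a rough illustration of what the blurred overlay is doing, here is a numpy/scipy sketch using the standard overlay blend formula; the file name, the blur radius and the choice of overlay rather than soft light are all just assumptions for the example.

# Sketch of the glow trick: blur a copy of the image and blend it back
# over the original with an overlay-style blend. 'photo.jpg' is a
# placeholder; sigma is a matter of taste.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

base = np.asarray(Image.open('photo.jpg')).astype(np.float64) / 255.0
blurred = gaussian_filter(base, sigma=(8, 8, 0))   # blur along x and y, not across channels

# Standard 'overlay' blend: darkens the darks, brightens the brights.
glow = np.where(base < 0.5,
                2.0 * base * blurred,
                1.0 - 2.0 * (1.0 - base) * (1.0 - blurred))

Image.fromarray((np.clip(glow, 0, 1) * 255).astype(np.uint8)).save('glow.jpg')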

You can find a detailed walkthrough (with screenshots) of using overlay layers and layer masks in this other tutorial that I wrote.

This process blurs out some detail, though, so as before, you may apply it differently to different sections and use layer masks to get the final output. Note, though, that when layers in most of these overlay-type blend modes are stacked on top of each other with overlapping regions, the effect compounds greatly, and not usually to positive outcomes. So if you require separate blur overlays for different regions, merge pairs of normal+overlay layers instead of stacking many overlay layers atop a single normal one.

  • To add that sun glimmer

This was a cheap one: Filter > Render > Lens Flare. It is not as cheap and easy as it might appear, though; there are many things to consider even here. There are only some logical regions in your image where there could be a bright spot creating a lens flare. Put it anywhere else, and again the human eye catches the artifice immediately. I chose the spot which, even without the flare, was the brightest part of the building facade.

The brightness of the flare is also important. The secondary rings are good to look at, but they come with the bright bloom at the center, which washes out nearby areas and makes you lose detail. By default there is no way to control the appearance of the flare or its different parts: it is rendered smack on top of your image, and then there’s nothing you can do. But with some innovation, actually there is.

Make a new layer on top of your image, fill it with black, and set its blend mode to Screen (experiment with the blend mode). Put your lens flare on this layer, and it will show up faithfully on your image. Now you can tweak the appearance of your lens flare (visibility, warmth, etc.) on the top layer and the changes will show over your bottom layer, the original image. This way you keep your flare and image separate and editable. Make sure, again, to keep the colour temperature and colour balance of the lens flare concordant with the lighting of the image, or else it will immediately look weird to even a slightly trained eye.
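
The reason the black layer disappears is just the arithmetic of the screen blend: in normalised values the result is 1 - (1 - bottom)*(1 - top), so wherever the flare layer is pure black the photo comes through untouched, and the flare adds brightness everywhere else. A tiny numpy sketch, with placeholder file names:

# 'Screen' blend: black pixels in the flare layer leave the photo
# untouched; bright flare pixels shine through.
# (Assumes both images have the same dimensions.)
import numpy as np
from PIL import Image

photo = np.asarray(Image.open('photo.jpg')).astype(np.float64) / 255.0
flare = np.asarray(Image.open('flare_on_black.jpg')).astype(np.float64) / 255.0

screened = 1.0 - (1.0 - photo) * (1.0 - flare)

Image.fromarray((np.clip(screened, 0, 1) * 255).astype(np.uint8)).save('with_flare.jpg')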

Be careful also about choosing the lens type for the flare. I chose a decidedly unrealistic one here (movie prime), because one of my goals as I set out editing this photo was to make it look as semi-realistic as possible. The reason for this is a little involved, and will hopefully be covered in a future post, where I discuss the popular resentment against Photoshopping and editing in general.

Flickr photo view counts: an elementary analysis

I was taking a casual look at the number of views on my flickr photos when I noticed something that should not appear very surprising: view counts are low for the first few days, then gradually grow to a higher region (around 100 for me). The idea came to me to actually plot the view counts of the photos against the number of days they had been online, to visualize the trend. So I did, using Excel. You just need to enter the date of upload and the current date into date-formatted cells, and then the usual subtraction command happily gives you the number of days between those dates, so that was pretty easy. Here’s the result from my rather limited dataset of 33 photos:

[Scatter plot: photo view count vs. number of days online]
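
For reference, the same days-online arithmetic can be sketched in Python instead of Excel; the upload dates and view counts below are made up purely for illustration.

# Days-online vs. views, with made-up data standing in for the real photostream.
from datetime import date
import matplotlib.pyplot as plt

today = date(2012, 6, 15)                                            # hypothetical 'current date'
uploads = [date(2012, 3, 1), date(2012, 4, 20), date(2012, 5, 30)]   # hypothetical upload dates
views   = [140, 85, 12]                                              # hypothetical view counts

days_online = [(today - d).days for d in uploads]   # the same subtraction Excel does

plt.plot(days_online, views, marker='.', linestyle='None')
plt.xlabel('Days online')
plt.ylabel('Views')
plt.show()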

There are the statutory caveats to note before drawing any conclusion from this graph. One is that not all photos grab the same attention. Some are better than others, and will stray from a trendline decided simply by the number of days passed, like the highest point in this chart, which is, in my opinion, the best photo I have posted to flickr so far. A graph like this is therefore not expected to show a smooth pattern, because there are other factors that affect views: the photo’s quality, how well it was shared and publicized through various social media, and so on. Also, as I slowly gather contacts and people who follow my photostream and watch for my uploads, I expect new uploads to get more attention than an upload of the same quality did in the past.

Even keeping all these in mind, though, there seems to be some degree of rise in this pretty scattered graph. The linear correlation coefficient (although I don’t expect the correlation to be linear) is around 0.39. That’s about a third of the way upwards from totally random. Extending that observation, if I imagine a statistically averaged trendline over many photos of different qualities and different degrees of online publicity, i.e. if I want to think only of the effect of days passed, several properties of such a trendline curve logically come to mind:

  • It shall start from the origin.
  • It shall be monotonically increasing, of course. Photos cannot be unviewed once they’ve been viewed.

Wait, did you fall for that second one? Because I’d be surprised if you didn’t. I fell for it myself, until just some time back, when I relented to humor a tiny splinter in my brain that had been groaning against this argument ever since I thought of it. The groaning stemmed from memories of a related puzzler in ensemble averages that I had encountered in Statistical Mechanics once, and the splinter eventually turned out to be quite legit.

The truth is, there’s actually no reason why that averaged curve should necessarily be monotonically increasing. Why? Well, a point on that curve has a certain x-coordinate, and so corresponds to the average over all photos that are a certain number of days old. Another point, with a different x-coordinate, is an average over a different (and completely disjoint) population of photos. And while the average view count of a fixed set of photos must necessarily go up with time (each view count goes up, so the sum goes up, so the average goes up), nothing can be said of a comparison between the view counts of two disjoint sets of photos at different points of time. It might very well be that the photos you posted five years ago have never received the limelight that your now awesomely professional photos have hogged in just a few months. Thus, the averaged curve may at times even drop with increasing online age. Which, in fact, my scatter plot seems to indicate to some degree.
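
A toy calculation makes this concrete: every individual photo’s view count only ever rises, yet the average over ‘photos that are x days old’ can still fall with x if the older photos were uploaded when the account attracted fewer views. The per-day rates below are invented just to show the effect.

# Each photo's views grow linearly with its age, but older photos grow at
# a slower (pre-reputation) rate, so the cross-sectional average can drop.
old_rate = 0.5   # views per day for photos uploaded long ago (invented)
new_rate = 3.0   # views per day for recent uploads (invented)

average_views = []
for age in range(1, 301):
    # Photos that are 'age' days old today: recent uploads for small age,
    # old uploads for large age -- two disjoint populations.
    rate = new_rate if age <= 100 else old_rate
    average_views.append(rate * age)

print(average_views[99])    # age 100 (recent cohort): 300.0 views on average
print(average_views[149])   # age 150 (older cohort):   75.0 -- the averaged curve has dropped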

Thus, while a time series plot with gradually falling y-coordinates (where this coordinate means something good, like views) is in almost all cases bad news, now I know that in this case it is a most enviable sign of growing reputation.

So we must strike out that second property. On to the next:

  • There will be an initial spike in views as the photo is uploaded and the ripples spread through flickr to your contacts, to other pages, and possibly through linked accounts to other sites. This means a higher slope near the beginning, which decays eventually at a rate that I don’t know anything about at the moment, except that it will probably be of the order of a couple of days.
  • In the long run, when these transient effects have decayed out, the only thing that keeps view counts going up is the fixed background rate at which people chance upon your photos on flickr. I don’t know what this rate is. But whatever it is, barring the reputation effect I mentioned, it can be assumed to be fixed for a flickr profile, unchanging in time. In real life though, it rises when you gather more contacts, increasing the audience that can discover your photos by some avenue. It’s in no way a negligible effect. Reputation and recognition matter. In fact, that’s finally what most people on flickr and elsewhere are striving for. But ignoring that effect, asymptotically the trendline should become a straight line with positive slope.

There are several curves that have all these properties, like the familiar parabola. The actual curve that will fit this hypothetical data is unknown at this point, of course. The trendline I fitted to my dataset was a parabola, with no parameters fixed by hand (all were left floating), and it clearly shows the fall towards the end, although I strongly suspect this could be a contribution from that outlier high point (that’s where the hump of the curve is). With my hopelessly insufficient data, this is all pretty arbitrary at this stage:

[The same scatter plot with the fitted parabolic trendline]
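
For reference, a parabola fit with every parameter left floating is a one-liner in numpy; the arrays here are again the made-up ones from the earlier sketch, not the real 33-photo dataset.

# Degree-2 (parabola) fit with all parameters floating.
import numpy as np

days_online = np.array([106, 56, 16])   # hypothetical data, as before
views       = np.array([140, 85, 12])

coeffs = np.polyfit(days_online, views, deg=2)   # a*x^2 + b*x + c
trend = np.poly1d(coeffs)
print(coeffs)      # the fitted parameters
print(trend(60))   # predicted views at 60 days online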

That’s all I wanted to say, and by itself this is not very interesting stuff, but maybe someone will get some other interesting ideas from this. Like maybe plotting a reputation growth curve calculated from the departure (fall) of this view count curve as compared to the idealized, constant-reputation view count curve which asymptotes to a rising straight line as I mentioned.

Locating Numbers inside Bisected Interval Sequences

I think it was in a real analysis course in the second semester of my first year: the teacher was discussing the nested interval theorem when one of his examples, or something he was saying, struck me, and I thought of this interesting problem. Well, interesting to me.

We pick any fraction between 0 and 1. Now we look at the interval [0,1]. We divide it into two halves, [0,0.5] and [0.5,1], and say, ‘the fraction belongs to this half.’ Say the right half. Then we divide the right half into two halves, check again, and say ‘now it’s in the left half’. We continue like this until we hit the number bang in the middle of an interval.

Now that’s not really a problem, but I thought it would be interesting to look at this sequence of ‘left’ and ‘right’ for a chosen fraction. So I wrote a Python program for that. Nothing very amusing came out of it. Then I thought of something else. I took evenly spaced fractions in that interval along the horizontal axis, and plotted the fraction of ‘right’s in their respective left-right sequences on the vertical axis, using matplotlib. Here is the Python source code:

#!/usr/bin/python
import matplotlib.pyplot as plt
c = 0.              # the fraction currently being tested
x=[]                # fractions
y=[]                # fraction of 'right's in each sequence
while c<=1.:
    a = 0.          # left end of the current interval
    b = 1.          # right end of the current interval
    dc = c - a      # distance of the fraction from the left end
    d = (b-a)/2     # half the interval length, i.e. distance to the midpoint
    R=1             # 'right' counter (starting at 1 keeps R+L nonzero for an immediate hit such as 0.5)
    L=1             # 'left' counter
    while True:
        if dc > d:          # fraction lies in the right half
            R+=1
            a = a + d
        elif dc < d:        # fraction lies in the left half
            L+=1
            b = b - d
        elif dc==d:         # fraction is exactly the midpoint: stop
            break
        d = float(b-a)/2
        dc = c - a
    x.append(c)
    y.append(float(R)/(R+L))
    c+=1e-4         # step to the next fraction
plt.xlabel('Fraction')
plt.ylabel('''Fraction of 'Right's in sequence''')
plt.plot(x, y, marker='.', markerfacecolor='blue', linestyle='None')
plt.show()

This is what I got:

[Plot: fraction of ‘R’s in the sequence vs. the fraction, spacing 1e-4]

Now, for example, 0.375 = 0.5 - 0.25 + 0.125. A minus sign means an L, a plus sign is an R. So 0.375 is LR. 0.625, which is the fraction the same distance from the right as 0.375 is from the left, is 0.5 + 0.25 - 0.125. So it’s RL. So as you look at fractions equidistant from 0.5 on either side of it, all the R’s and L’s in their sequences get switched. Therefore, the fraction of R’s in one should be the fraction of L’s in the other, or 1 - the fraction of R’s. Thus, you expect the graph to be symmetric about the point (0.5, 0.5). (Think about this, no hurry.) What miffed me at this point, therefore, was that this graph didn’t appear to be symmetric with respect to its center point. There’s some fuzzy mess to the left and some scattered points isolated from the main band that are not symmetric at all.
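
The mirror-image argument is easy to check with a small helper (written a bit differently from the plotting code above, and stopped after a fixed number of steps to stay clear of the floating point issues discussed below):

# L/R sequence of a fraction; its reflection about 0.5 should give the
# same sequence with every letter swapped.
def lr_sequence(c, max_steps=20):
    a, b = 0.0, 1.0
    seq = ''
    for _ in range(max_steps):
        mid = (a + b) / 2
        if c == mid:          # landed exactly on a midpoint
            break
        elif c > mid:
            seq += 'R'
            a = mid
        else:
            seq += 'L'
            b = mid
    return seq

print(lr_sequence(0.375))   # LR
print(lr_sequence(0.625))   # RL
print(lr_sequence(0.3))     # first 20 letters for 0.3 ...
print(lr_sequence(0.7))     # ... and for 0.7: every L and R swapped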

Then I ran some tests with fractions whose sequences aren’t supposed to end at all. Like what? Like 0 and 1, say. If you’ve followed the algorithm, you can tell that we can never arrive at a cleaving of an interval where the separating number is either 0 or 1, because there’s nothing on one side of these numbers. So 0 should just give me LLLLL… and 1 should give me RRRRR…, never ending. However, guess what I found when I looked at the number of L’s and R’s in their sequence.

0    L: 1074, R: 0.

1    L: 0, R: 54.

So why do the sequences end? That’s fairly simple: it’s because of the limitations of storing and computing floating point numbers on a computer. Notice that with each step of the sequence we are squeezing our number into a tighter and tighter interval, one that halves in length with each iteration. Very soon, our computer (or the interpreter) arrives at a point where numbers so infinitesimally separated within that tiny interval are no longer distinct numbers to it, so it cannot differentiate between our fraction and the midpoint of the interval, and stops.

Exactly how big is this error? It is difficult to tell from looking at these numbers above. One tells you it should be about 1/2^1074, the other that it’s about 1/2^54 (which is closer to where I’d put it, owing to other checks I did and don’t want to discuss here). The final count has to do with all the calculations being done at every step, and so with all the floating point errors that accumulate along the way. However, I think the only way the answer could still be different for a fraction and its ‘mirror image’ is if different floating point errors are associated with addition and subtraction, because these two operations have been switched between them.
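
For what it’s worth, both counts line up suggestively with the limits of double precision: the relative spacing of doubles is about 2^-52 (roughly why the count for 1, bisecting an interval of length 1, stops in the mid-fifties), while the smallest positive subnormal double is 2^-1074 (which is where the count for 0 stops). This is only a plausibility check, not a careful error analysis.

# Double-precision limits that line up with the counts above.
import sys

print(sys.float_info.epsilon)   # ~2.22e-16 = 2**-52: relative spacing of doubles
print(sys.float_info.min)       # ~2.23e-308: smallest normal positive double
print(5e-324)                   # 2**-1074: smallest subnormal double
print(5e-324 / 2 == 0.0)        # True: halve it once more and it underflows to zero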

Notice, though, that the fraction of R’s for 0 is 0, and that for 1 is 1. The symmetry is preserved. So where is the final problem in the plot? Well, we’ve been lucky with these two numbers because one of the counts is 0 for both cases. I’ll give you an example of another case:

0.1    L: 28, R: 26.

0.9    L: 25, R: 27.

In this case, obviously, the symmetry will not be maintained, because the second pair is 25,27 instead of being 26,28. Thus, the graph is no longer symmetric about the center point.

Since I was stubborn about getting a symmetric graph, I decided to cut off the process before it gets to the ambiguous stage, that is, to stop while the interval is still wide enough, and plot a graph with the truncated sequences. I finally got a symmetric one when I set the cutoff interval length at the order of 1e-13. For this, instead of elif dc==d in the code above, you need to write elif abs(dc-d)<=1e-13. Here is the resulting graph:

[The same plot with the sequences truncated at an interval length of order 1e-13]

Note, however, that this error tolerance is not something fixed. It depends on the resolution (spacing) of the fractions you do this computation for. In the images you have seen so far, the fractions were multiples of 10^-4. You get a better image with the spacing one order of magnitude finer, but for that the error tolerance had to be jacked up to 1e-11:

[The plot with fraction spacing 10^-5 and error tolerance 1e-11]

Do you see something really interesting in this graph now, in the way that it organizes itself into parallelograms within parallelograms? It’s a highly ordered fractal. I’ve marked them out for clarity:

[The previous plot with the nested parallelograms marked out]

In other words, the point symmetry is repeated on increasingly smaller scales, as it should be. The whole bisected nature of the nested intervals is responsible for this. More parallelograms would be revealed if we kept making our resolution finer, and the horizontal extents of these parallelograms simply exhibit those nested intervals.

The fraction of rights, however, doesn’t reveal a lot of information. More interesting could be to see how many bisection steps are required before we converge onto a number. For this you need to modify the source code just a bit: in the line y.append(float(R)/(R+L)), substitute float(R)/(R+L) with R+L, and you get this:

[Plot: sequence length (number of bisection steps) vs. the fraction]

The black dots are the data points, joined by blue lines for clarity. Again, this should have been symmetric about x=0.5 (about a line this time, not about a point), but it isn’t. Notice that the low sequence lengths for numbers such as 0.125 or 0.375, which we discussed, don’t even appear. The lowest sequence length we see here is about 35. That’s because these fractions never even turned up in the incrementing loop, although they should have. This is computational error again. I can tell because I have poked around a bit. Try out this Python snippet, for example:

c=0.
while c<=1.:
    c+=1e-2
    if c==.12:
        print c     # never printed: the accumulated sum never exactly equals 0.12

By the way, one data point, corresponding to the fraction 0, had to be removed from this graph, because its sequence length was very big, 1074, as we saw before.

If you zoom into the middle of this graph a bit, however, you’ll see the kind of symmetry I had been looking for:

[Zoomed-in view of the middle of the sequence-length plot]

Do you see why we should have a picture like this? Think about it, it’s not very hard. Meanwhile you can download a wallpaper I made in Photoshop out of the above graph, because I liked it so much.

[Wallpaper made from the above graph]

That’ll be it for now. Let me know if you have any ideas or questions about all this.

Mercedes-Benz had five seconds.

This is June 2012. These days, when you want to watch a video on YouTube, there are ads at the start that you can only skip after 5 seconds. I hate these ads because they eat into my meager bandwidth. Yet I watched one of them to the end: the new Mercedes-Benz ad. Why did I do that?

Mercedes-Benz had five seconds to do something that the viewer wouldn’t skip. What did they do?

They started with a nice music-video-ish clip of a scene moving past the screen, with some nice music in the background. You see what happened there? For a moment you wouldn’t know this is an ad. I didn’t hit skip. I waited. Then I was sucked into the ad, which is what they wanted. I didn’t like the whole ad all that much, but that’s irrelevant; they got me to watch it.

I don’t know if this was intentional. Probably not, because they didn’t do this ad just for YouTube. They did it for TV, everywhere. But it worked here.

Here’s the video:

JavaScript Slideshow Code

I prefer writing all my web-design code from scratch, so I use only text editors to write HTML directly, and raw JavaScript (no jQuery etc.). While working on a JavaScript slideshow for Artarium, I came across a lot of problems in transitioning the images. My JavaScript was changing the src of the image, but I also needed to resize it to the dimensions of the new image before displaying it. Finally, after a lot of time, effort and fruitless excursions to online tutorials and troubleshooters, I came up with the exact configuration that works. On a transition the image disappears, resizes, then reappears. In between, if the next image takes too long to load, you can display a loading animation, as I did: just put an animated GIF there permanently with a lower z-index than the photo, and it will show between transitions. For the transitions themselves you can use a CSS3 transition on opacity, as I did, and not have to worry about further code for transition effects.

Here is the JavaScript code with some explanatory comments. I have added a preloading function and keyboard navigation. I have not explained everything in detail as I have assumed the user will have standard web-designing experience, in which case this should very well suffice.

JavaScript:

function keyNavigate(e) //to enable slideshow navigation using right and left arrow keys
{
    if(e.which==39)
         pre_next();
    else if (e.which==37)
        pre_previous();
}

var i=0,imax=n; //put the # of slideshow images as n here

function imagearray() //prepares things on page load and starts preloading images
{
    preloader=new Image()
    var j=0;
    captions = new Array();
    captions = ['caption1', 'caption2', ... 'caption n'];
    document.photo.src="directory/photo1.jpg"; //assumes photos are in 'directory'
    document.getElementById('navigation-count').innerHTML="0/"+(imax); //sets slide number
    for(j=1; j<=imax; j++) //preload all the slideshow images
    {
        filename="directory/photo"+j+".jpg"; //assumes photos are 'photo1.jpg', 'photo2.jpg' etc
        preloader.src=filename;
    }
}

var imgHeight;
var imgWidth;
var newImg;

function resize() //to resize as image changes
{
    imgHeight = this.height;
    imgWidth = this.width;
    if (imgWidth/imgHeight < 2.25) //any desired criterion
    {
        document.photo.style.height='355px'; //or whatever else
    }
    else
    {
        document.photo.style.width="95%";
    }
    document.photo.style.opacity=1; //photo appears only after it has been resized
    document.getElementById('caption').innerHTML=captions[i-1]; //caption changes
    document.getElementById('caption').style.opacity=1; //caption appears
    document.getElementById('navigation-count').innerHTML=(i)+"/"+(imax); //slide count changes
    return true;
}

function pre_next() //to ensure resizing occurs after picture disappears
{
    document.photo.style.opacity=0;
    document.getElementById('caption').style.opacity=0;
    setTimeout("next()",500);
}

function next()
{
    if (i==imax)
    {i=0;}
    i++;
    newImg=new Image();
    newImg.src="directory/photo"+i+".jpg";
    document.photo.src=newImg.src;
    newImg.onload = resize; //resize function is called
}

function pre_previous()
{
    document.photo.style.opacity=0;
    document.getElementById('caption').style.opacity=0;
    setTimeout("previous()",500);
}

function previous()
{
    if (i==1)
    {i=imax+1;}
    i--;
    newImg=new Image();
    newImg.src="directory/photo"+i+".jpg";
    document.photo.src=newImg.src;
    newImg.onload = resize;
}

This JavaScript alone does not suffice. Here’s some things you need to do with the HTML for this to work:

  1. The image element which changes in the slideshow should have a name="photo" attribute (it is referred to as document.photo throughout the JavaScript).
  2. Add <body onload="imagearray(), next()" onkeydown="keyNavigate(event)"> to the HTML body. The next() is required to display the first photo and initialize the caption and slide count.
  3. The slide transition occurs through the pre_next() and pre_previous() functions, so any event that you want to trigger a transition should call these functions (as in the keyboard navigation part), not next() or previous().
  4. The ‘caption’ div in the HTML holds the caption, while the ‘navigation-count’ div holds the slide number. Place them as you require.

You can find this code at work in any gallery at Artarium, unless I change it in the future. If you hit a block using this code, there’s nothing a couple of Google searches won’t clear for you. If there’s a problem that persists even after you have done your research thoroughly, leave a comment (be specific) with your e-mail and I promise to try to help.