Blender’s versatile and powerful node system enables you to access and manipulate a huge variety of compositional elements to combine rendered images in exactly the way you want. You can use nodes to composite still images or video sequences that can later be edited in Blender’s nonlinear Video Sequence Editor (VSE), which you will learn about in Chapter 11, “Working with the Video Sequence Editor.” Here, you will look at the fundamentals of compositing with nodes, using render layers to re-create a scene from its component render passes. After that, you will follow an in-depth example of one of the most common tasks in live-action video compositing: green screen compositing and garbage matting.
In this chapter, you will learn to
In Chapter 4, “Rendering and Render Engines,” you were introduced to the idea of render layers and render passes. In that chapter, you saw how constituents of a rendered scene such as shadows, specular highlights, and ambient occlusion could be rendered separately in passes. However, it wasn’t clear from that chapter how this could be useful. This is because we hadn’t yet begun talking about compositing. Once you start working with Blender’s node-based compositing system, the usefulness of render layers and render passes becomes very clear.
Compositing is the process of combining data from two-dimensional images to create a new image. There are many situations in which compositing of some kind is necessary. You might want to combine live-action footage with CG. You might want to place a character or other element over a background that was filmed or rendered separately. You might want to break down complex 3D scenes into components to be rendered more efficiently and then build them up again in a comparatively less-resource-intensive 2D composite. You may simply want to tweak the color of parts of the scene or add some atmospheric effects.
To do this, it is often necessary to work with different components of an image separately. For example, you can create an appealing glow effect by isolating the bright specular highlights of an image, blurring them, and overlaying the blurred highlights on the original image. Although there are several ways to isolate bright areas, being able to work with specular highlights on their own is a convenient way of doing this. Likewise, you might want to composite a CG character into a video scene, which requires that the CG character’s shadow appear to fall realistically onto the background video image. In this situation, working with the shadow pass on its own enables you to tweak the exact influence of the shadow on the background, so that you can get that aspect of the image exactly right without affecting other parts of the image that may need to be tweaked in different ways. You might want to pre-render time- and resource-intensive elements of a scene, such as ray-traced reflections and transparency, ambient occlusion, and indirect lighting, while keeping the color of an object separate so that it can be quickly changed on the fly in the compositor. This can save enormous amounts of rendering time if you need to experiment with different colors for an object in a fully rendered scene.
In Chapter 4 you saw how to separate elements of the render into render passes. This chapter deals in part with how to put these elements back together. The way this is done is to use render layers as input nodes in Blender’s node-based compositor.
Very generally, nodes are a way of organizing and combining data. Blender currently has node-based systems for working with material data, texture data, and composite data. All of these systems share a common structure. Every node represents some data or some operation to be performed on data. Every node in a node graph has one or more input sockets, one or more output sockets, or both. The links, or lines between node sockets, define the input-output relationships between the nodes. All input sockets are on the left side of nodes, and all output sockets are on the right side of nodes, so node graphs always read from left to right.
If you’re having trouble getting your head around the idea of composite nodes, you can start by thinking of them as something similar to 2D layers in Photoshop or GIMP but without the constraint of having one directly on top of the next. Unlike 2D layers, composite nodes are nonlinear and structurally flexible. If 2D layers are a train track, nodes are a system of streets.
The best way to understand composite nodes is to work with them in rendering a scene. In the following example, you’ll look at composite nodes and render passes within the context of a 3D scene that incorporates a variety of light effects including diffuse and specular lighting with lamps, ambient occlusion and environment lighting, and indirect lighting by emitting objects. The scene also includes ray-traced reflections. It is instructive to break the render down into individual render passes and then try to reconstruct the original render using the compositor.
You can create your own 3D scene with these effects to follow along here, or you can use the scene I’ve prepared for you in the downloadable files for this chapter, called scene.blend. However, be aware that if your scene includes effects not discussed here, such as transparency and refraction, it will require further work to reconstruct.
Open the .blend file with the 3D scene and open a Node Editor window from the menu, as shown in , or by pressing Shift+F3 while hovering your cursor over a window.
You will be using composite nodes here (as opposed to material or texture nodes), so click the icon in the header of the Node Editor window, and then check the Use Nodes check box in the header. Also, in the Post Processing panel of the Render properties window, turn on compositing. When you do this, your window should appear as shown in .
Two nodes appear in the Node Editor. The node on the left is an input node. You can tell because it has sockets on only the right side, representing output sockets. This node will take a render layer input and feed its data into the compositor.
The node on the right is an output node. You can tell because it has sockets only on the left (input) side. This node is a Composite node. It receives data from the node system and outputs the data as a finished, composited image. A node system can have only one Composite node.
In this very simple node setup, the image data from the render layer in the Render Layers node (called RenderLayer) is sent directly to the Composite node. The resulting compositor output will be exactly what was rendered on the render layer going into the node system.
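If you prefer to set this up with a script rather than the UI, here is a minimal sketch using Blender's Python API (bpy), with the node type identifiers of the 2.6x series; it reproduces the default two-node setup just described:

import bpy

scene = bpy.context.scene
scene.use_nodes = True                 # same as the Use Nodes check box
scene.render.use_compositing = True    # Post Processing > Compositing

tree = scene.node_tree
tree.nodes.clear()

# One input node and one output node, linked directly
rl = tree.nodes.new(type='CompositorNodeRLayers')
comp = tree.nodes.new(type='CompositorNodeComposite')
tree.links.new(rl.outputs['Image'], comp.inputs['Image'])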
Render layers are set in the Layers panel of the Render properties window, shown in . By default, a single render layer exists, called RenderLayer. The field at the top of the panel lists the render layers and includes check boxes to set them as active for rendering or not.
The next field down is the render layer name. Below this are several banks of layer buttons that look similar to the scene layer buttons in the 3D viewport header.
Render the scene now, either by clicking the Image button in the Render properties window or by pressing F12. The scene will be rendered and sent to the compositor and then output from the compositor as the final render. The nodes will appear something like , and the composited output can be viewed as the Render Result image in the UV/Image Editor, as shown in . The image is repeated in color in the color insert of this book.
Render passes enable you to render components of the full render individually, so that they can be accessed independently. In this section, you’ll render the various passes separately and then re-composite them in the compositor to mimic the original combined render.
This is intended only as an exercise and a way to look at the contributions of individual passes in isolation. You won’t often have cause to re-create a combined render identically with render passes in real life. Usually, you will carry out operations on individual render passes or use the data from render passes in different contexts to create something that would not have been possible in a combined render. However, in order to do this, you need to know what data is available to you in render passes.
Another thing to note is that internally, Blender arrives at the combined pass in a slightly different way than it arrives at a composite of individually rendered passes. That means that in theory, some effects will render differently in the combined pass than in a composited output. While it is almost always possible to re-create a combined pass by compositing constituent passes, some cases require significant tweaking to get exactly right. In particular, images that make use of transparency and refraction can be a challenge. Fortunately, the compositor continues to evolve and improve. For the time being, we’ll stick to an example that can be reconstructed fairly simply, in the file scene.blend.
In order to render individual passes so that they can be accessed in the compositor, you need to check the passes you want in the Passes area on the Layers panel in the Render properties. Check Color, Diffuse, Specular, Shadow, Emit, AO, Environment, Indirect, and Reflection, as shown in .
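If you want to enable these passes from a script instead of the check boxes, a sketch like the following should work with the Blender Internal renderer, assuming the default render layer name, RenderLayer:

import bpy

layer = bpy.context.scene.render.layers['RenderLayer']
layer.use_pass_color = True
layer.use_pass_diffuse = True
layer.use_pass_specular = True
layer.use_pass_shadow = True
layer.use_pass_emit = True
layer.use_pass_ambient_occlusion = True
layer.use_pass_environment = True
layer.use_pass_indirect = True
layer.use_pass_reflection = True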
When you re-render the scene (press F12), you’ll see that your Render Layers input node now has output sockets corresponding to each of the available render passes. shows examples of three different passes: the Indirect lighting pass, the Shadow pass, and the Diffuse pass. These images are repeated in color in the color insert of this book.
You can also look at the different passes in the UV/Image Editor window by choosing the pass you want to see from the Pass drop-down menu in the header, as shown in . This figure shows the Environment lighting pass.
As mentioned previously, you can have only one Composite node in your node system. This node represents the final output of the compositor, and it wouldn’t make sense to have more than one. However, you will often want to look at an arbitrary node in the system and send its output to an image that can be viewed in the UV/Image Editor or exported. To do this, use a different type of output node called a Viewer node. A Viewer node is added with the Add menu, which is opened by pressing Shift+A, as shown in .
As you can see in , the Viewer node is very similar to the Composite node, but you can have as many Viewers as you wish. To connect the output of a node to a viewer, hold the left mouse button and drag the mouse from the output socket of the node to the input socket of the Viewer node. Alternately, Shift+Ctrl+left-clicking any node will send that node’s image output to the active Viewer node. Yet another way to add a connection between nodes is to select any two nodes and press the F key, which is analogous to how you would create an edge between two vertices when mesh modeling.
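Scripted, this is a matter of adding the node and one link. A small sketch, assuming Blender's default label for the input node, 'Render Layers':

import bpy

tree = bpy.context.scene.node_tree
rl = tree.nodes['Render Layers']       # default name of the input node

viewer = tree.nodes.new(type='CompositorNodeViewer')

# Inspect any pass by linking its output socket to the Viewer
tree.links.new(rl.outputs['Shadow'], viewer.inputs['Image'])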
Render Layer input nodes have a few drawbacks. The main problem with Render Layer nodes is that they are not persistent. As you probably know from experience rendering images in Blender, you need to output a rendered image and save it to a file in order to keep it. If you render an image in Blender and then close and reopen the .blend file, the render buffer is cleared, and the image must be re-rendered. The same is true of render layers. If you reopen this .blend file right now, you will have lost the contents of the input node (and the compositor’s output will also be blank). Rendering these constituent render layers is likely to be the most time-consuming part of the process, so it is not optimal to have to re-render every time you open the file.
The solution to this is to output the render layer to an image file and then to use an Image input node instead of a Render Layer node as your input node. But you can’t simply use a .png file for this, because .png and other standard image files have no way to internally represent separate render layers. You must use the MultiLayer format for this. MultiLayer is based on the .exr file format, but it is designed specifically for use in Blender’s compositor. Let’s try using the MultiLayer format now:
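In script form, the round trip might look like the following sketch (2.6x Python API; the output path //passes is hypothetical, and the // prefix means relative to the .blend file):

import bpy

scene = bpy.context.scene

# Render to a MultiLayer file so the passes survive a restart
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.filepath = '//passes'     # hypothetical path
bpy.ops.render.render(write_still=True)

# Later, use an Image node in place of the Render Layers node
tree = scene.node_tree
img_node = tree.nodes.new(type='CompositorNodeImage')
img_node.image = bpy.data.images.load('//passes.exr')
# A MultiLayer image exposes one output socket per saved pass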
RGBA images can be composited together by carrying out mathematical operations on the color or alpha values for each pixel of each image. Each render pass is represented as an RGBA image, so we use standard image operations to combine the passes. The two main operations used in this example are multiplication and addition.
Multiplication is appropriate for render passes that darken an image. Multiplication takes the RGB values of each pixel of one image and multiplies them by the corresponding values of the corresponding pixel of the second image. The output is the image made up of these product pixels.
Because all RGB values range from 0.0 to 1.0, multiplication can only leave a value the same or darken it. Any pixel multiplied by a black pixel (RGB value 0, 0, 0) will result in a black pixel. Any pixel multiplied by a white pixel (RGB value 1, 1, 1) will result in a pixel of the same color as the original pixel. A pixel multiplied by another pixel that is neither black nor white will have its RGB values diminished according to the RGB values of the second pixel. This is why multiplication is appropriate for render passes that darken an image. The two main render passes that do this are the Shadow pass and the Ambient Occlusion (AO) pass.
First, let’s work with the Shadow pass:
shows the Diffuse pass, the Shadow pass, and the result of multiplying the two passes together. As you can see, we’ve taken the first step in reconstructing the combined pass (sometimes also called the beauty pass). These images are repeated in color in the color insert of this book.
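In node terms, this multiplication is a Mix node with its blend type set to Multiply. A minimal bpy sketch of the Diffuse-times-Shadow step, again assuming the default node name 'Render Layers':

import bpy

tree = bpy.context.scene.node_tree
rl = tree.nodes['Render Layers']

mul = tree.nodes.new(type='CompositorNodeMixRGB')
mul.blend_type = 'MULTIPLY'            # per-pixel product of the two inputs

tree.links.new(rl.outputs['Diffuse'], mul.inputs[1])
tree.links.new(rl.outputs['Shadow'], mul.inputs[2])
# mul.outputs['Image'] now carries the shadowed diffuse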
The AO pass behaves similarly to the Shadow pass:
Now, we turn to the lightening factors. All of the remaining render passes add illumination to the scene in some way, so they will all be added to the composite (although a few will need some further processing before being added, as you will see).
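Since each addition is just another Mix node set to Add, a small helper function keeps a scripted chain readable. In this sketch, the node name 'Multiply' is an assumption standing in for the output of the darkened diffuse built in the previous steps:

import bpy

tree = bpy.context.scene.node_tree
rl = tree.nodes['Render Layers']
base = tree.nodes['Multiply'].outputs['Image']   # assumed: diffuse x shadow x AO

def add_pass(base_socket, pass_socket):
    # Chain a Mix node set to Add onto the running composite
    node = tree.nodes.new(type='CompositorNodeMixRGB')
    node.blend_type = 'ADD'
    tree.links.new(base_socket, node.inputs[1])
    tree.links.new(pass_socket, node.inputs[2])
    return node.outputs['Image']

base = add_pass(base, rl.outputs['Spec'])
base = add_pass(base, rl.outputs['Emit'])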
The image is repeated in color in the color insert of this book, and you should refer to that, because color here becomes an issue. As you can see, something appears to have gone wrong. The reflections on the monkey head do not appear as you might have expected them to appear. Rather than being purple, they seem to be tinted green.
This is a result of how the Reflect pass works. Although it is an additive pass (meaning that it is intended to be added to the composite), it contains both positive and negative values. These are calculated based upon the tonal values of the fully lit image. Because we haven’t yet added environment light, the Reflect pass is subtracting too much from the RGB values of the surface, affecting its color. When we add the environment light, this will be resolved. Of course, if we had added these passes in a different order, with the Reflect pass last, this apparent problem would not have come up.
Incidentally, it is possible to work around this problem by creating a dummy object with a solid black material and deriving the Reflect pass only from the dummy object, rendered on a separate render layer. But since this is not a problem here, we’ll simply continue by adding environment light:
The result is shown in (repeated in color in the color insert). It’s not pretty—it looks like glow-in-the-dark toys from the seventies. For one thing, the image seems far too white. For another thing, the monkey head still appears to be green. Clearly the Environment pass was not what we wanted here.
Here’s why the Environment pass is not quite what you should add here. The Environment pass provides global values for the environment lighting, but the actual tonal energy that should be added to the image depends on the material colors for each object. That is to say, before the environment lighting is added, it must be multiplied by the Color pass, as shown in and repeated in the color insert of the book. This makes sense, because the monkey head obviously needs some purple.
We now have an image that looks pretty good. We can compare the output of our Composite node network with the original Combined pass in , repeated in color in the color insert.
It looks pretty good, but the two are not the same. It is clear that the red light from the Indirect lighting pass is considerably brighter in the composited image. But by now, you probably have a pretty good guess about why this might be. In fact, like the Environment lighting pass, the Indirect lighting pass also needs to be multiplied by the color values to get the correct additive values. The setup for this is shown in . As you can see, once you’ve done this, your compositor output will be identical to the Combined pass.
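Here is a bpy sketch of the multiply-then-add fix for both passes; as before, the default node names are assumed, and each result is fed into an Add node in the running chain:

import bpy

tree = bpy.context.scene.node_tree
rl = tree.nodes['Render Layers']

def tinted(pass_name):
    # Multiply a lighting pass by the Color pass before adding it
    mul = tree.nodes.new(type='CompositorNodeMixRGB')
    mul.blend_type = 'MULTIPLY'
    tree.links.new(rl.outputs[pass_name], mul.inputs[1])
    tree.links.new(rl.outputs['Color'], mul.inputs[2])
    return mul.outputs['Image']

env_light = tinted('Environment')
indirect_light = tinted('Indirect')
# Feed each result into an Add node, as with the other additive passes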
In fact, this is probably also how the Diffuse pass should be, but at present Blender automatically incorporates color information into the Diffuse pass. If, for whatever reason, you need diffuse values separate from the material colors, you would need to create solid-white dummy objects and render the Diffuse pass from those.
To simplify node networks, you can select a group of nodes and group them into a node group, which is represented visually by a single node, as shown in .
To do this, select the nodes by using the B key and border selecting with your mouse; then press Ctrl+G to create a group (Alt+G ungroups the grouped nodes).
You can edit the internal nodes of a node group by pressing the Tab key to open the group for editing, as shown in .
At this point, you should have a pretty clear idea of what render passes contribute. This will be important when you go on to apply specific compositing effects. In the next section, you’ll step away from render passes and take a look at a completely different use of the compositor.
A big part of video compositing is the task of extracting elements from their original surroundings so as to be able to composite them freely into different visual environments. There are several ways to do this, depending on what kind of source material you have to work with. Typically, the process involves the creation of a matte, which is a special image used to suppress parts of the original image and allow other parts of the image to show through.
The Blender composite node system enables you to take multiple images, videos, renders, or other 2D information sources as input and then perform a wide variety of operations on them to combine them in an endless number of ways. The operations are performed in a nonlinear way and represented as nodes on a graph that can be edited directly. In this section, you will see how this system can be used to perform the common task of pulling a green screen matte from a live video. After you understand how to do this, many other uses of the composite node system should become clear.
When you know in advance that you will need to composite a character or object into a different background scene, the typical approach is to shoot the foreground action against a colored background screen that can be easily eliminated by compositing techniques such as those described in this section. The most common type of colored screen currently used is a green screen.
To shoot green screen video, you need access to a green screen. Most multimedia production studios, including those of many universities and technical colleges, have all you need to take green screen footage. Alternatively, it is not especially difficult to build your own green screen equipment. You can find instructions on the Web for doing this.
For the tutorials in this chapter, you can find a brief clip of green screen video in the form of a sequence of JPEG images in the greenscreen subdirectory of the downloadable files for this chapter. The clip is from footage shot by Norman England for his film The iDol () and features a character walking toward the camera, as shown in (also repeated in color in the color insert of this book). This particular clip presents a couple of challenges for the compositor, and you’ll see what those are and how they can be solved in Blender over the course of this chapter.
Creating, or pulling, a green screen matte is not difficult, but it requires that you familiarize yourself with Blender’s composite node system.
Let’s discuss a few general points about composite nodes before we begin.
The node graph, sometimes called a noodle, is made up of nodes of several distinct types. Depending on the node type and its specific properties, each node has at least one input socket or one output socket and may have multiple input and/or output sockets. Input sockets are on the left side of the nodes, and output sockets are on the right side of the nodes.
Nodes are connected to each other by curved lines, or links, that extend from an output socket of one node to an input socket of the other node.
To connect two nodes, hold down the left mouse button while you move the mouse from an output socket of one node to the input socket of another node, where you want the connection to be established.
To break a connection, hold down the left mouse button while moving your mouse in a cutting motion across one of the connecting lines between nodes.
To set up the nodes for pulling the green screen matte, open a fresh scene in Blender and follow these steps:
It’s important to be able to see what you are working with. The Composite node represents the final rendered output, and there should be only one of these connected to the node graph. However, you can have as many Viewer output nodes connected to the graph as you want, and whichever one you select at any moment will display immediately in the UV/Image Editor window when Viewer Node is selected from the drop-down menu. The Viewer node is added by pressing Shift+A and choosing Output > Viewer from the Add menu.
The first thing you will use multiple Viewer nodes for is to take a close look at the individual color channels of the image. This is very helpful to get a sense of the actual values that you are working with, and it can help to clarify how green screen matte pulling works.
As you’d expect, the highest-contrast channel is the green channel, which has a high-intensity background thanks to the green screen. The red channel, on the other hand, is much more uniformly distributed. The process of pulling a green screen matte depends on this difference in intensity ranges between the green and red channels.
Now you can use the difference in channel intensities to pull the matte itself. Blender’s node system includes specific nodes for creating mattes in fewer steps, but in order to understand exactly what is happening, it is good to do it yourself by hand. Also, walking through the process will make the subsequent garbage-matting step easier to understand. The idea here is to subtract the red channel of the image from the green channel by using a Subtract node.
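As a sketch of this by-hand version in bpy (the footage node name 'Image' is an assumption):

import bpy

tree = bpy.context.scene.node_tree
footage = tree.nodes['Image']          # the green screen clip (name assumed)

sep = tree.nodes.new(type='CompositorNodeSepRGBA')
sub = tree.nodes.new(type='CompositorNodeMath')
sub.operation = 'SUBTRACT'

tree.links.new(footage.outputs['Image'], sep.inputs['Image'])
tree.links.new(sep.outputs['G'], sub.inputs[0])   # green minus...
tree.links.new(sep.outputs['R'], sub.inputs[1])   # ...red
# sub.outputs['Value'] is the raw matte: bright where the screen was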
The green screen matte you pulled in the previous section is a pretty good start, but it is immediately clear that it will not be sufficient to distinguish the character from the background. The problem is that the green screen does not perfectly cover the entire background. There are pieces of hardware and other non-green areas that have been marked as white in the matte. The solution to this is a process called garbage matting. It should soon be obvious to you how it got its name.
In the simplest cases, garbage matting may just be a matter of covering areas of the matte with blocks of black. However, our example has some added complications. The main problem is that the character is approaching the camera. As he comes closer to the camera, his silhouette interacts directly with the debris in the background, as you can see in . This results in a green screen matte like the one shown in , where there is no automatic, color-based way to distinguish between the foreground figure and the background garbage. Not only does the background need to be matted out, but it also needs to be done in a way that distinguishes the correct outline of the foreground figure as it moves.
This problem can be fixed by using a Curve object to create a simple animated garbage matte. The solution is a simple form of rotoscoping, which involves creating images by hand on a frame-by-frame or nearly frame-by-frame basis. The AnimAll add-on enables you to do this more easily than was possible in the past. However, as I write this, powerful new tools for masking in the compositor are being developed. If you are working in Blender version 2.64 or later, mattes for compositing should be made in the Movie Clip Editor as described in Chapter 10, “Advanced 3D/Video Compositing.”
To set up the animated garbage matte in Blender version 2.63 or previous, follow these steps:
To see the video from the Camera view, you must import the video as a background image sequence:
Now that you’ve set up the curve and activated the add-on, you can animate the garbage matte to cover the necessary background throughout the entire clip. To do that, follow these steps:
As you saw, the garbage matte is created in the 3D space. It can be rendered simply by unchecking Compositing in the Post Processing panel of the Render properties and rendering in the ordinary way. Remember, Render Layers input nodes are used to incorporate renders from the 3D view into the node network. Render layers enable you to organize your rendered scene in such a way that different parts of the same scene can be used as separate input nodes in the compositor. Whatever information you want to be included in a single node should be organized into a distinct render layer. Objects are divided among render layers by their placement on ordinary 3D viewport layers.
In this example, the garbage matte is on layer 1 in the 3D viewport. By default, the first render layer is set to include objects on all 3D viewport layers. In this example, there won’t be any other 3D content to worry about, so the default Render Layers setup is fine.
The solution now is simply a matter of multiplying the color values of the alpha-inverted black-and-white garbage matte image and the original green screen matte you pulled previously. If these two mattes are multiplied together, only the areas where both mattes are white (that is, have values of 1) will be white. All other areas will be multiplied by zero and so will turn out black.
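Sketched in bpy, with the two matte sockets stubbed in by assumed node names:

import bpy

tree = bpy.context.scene.node_tree
green_matte = tree.nodes['Subtract'].outputs['Value']      # name assumed
garbage_alpha = tree.nodes['Render Layers'].outputs['Alpha']

# Invert the garbage matte's alpha, then multiply the two mattes
inv = tree.nodes.new(type='CompositorNodeInvert')
tree.links.new(garbage_alpha, inv.inputs['Color'])

mul = tree.nodes.new(type='CompositorNodeMath')
mul.operation = 'MULTIPLY'
tree.links.new(green_matte, mul.inputs[0])
tree.links.new(inv.outputs['Color'], mul.inputs[1])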
You now have a nice green screen matte with a solid black background and a solid white foreground in the shape of the character. You can group the whole node setup by selecting all of the nodes between the input and output nodes and pressing Ctrl+G. This matte will be used to set the alpha value of the foreground image so that the white areas (the character) are fully opaque (alpha 1) and the background areas are fully transparent (alpha 0). The way to do this is to use a Set Alpha node, which you can add by pressing Shift+A and choosing Converter > Set Alpha from the Add menu, as shown in . Connect the original image to the Image socket and connect the matte to the Alpha socket.
To test the effect, add a solid color input as a background by pressing Shift+A, choosing Input > RGB from the Add menu, and then adding an AlphaOver node by choosing Color > AlphaOver and connecting them all, as shown in (reproduced in the color insert of the book).
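A bpy sketch of this Set Alpha and AlphaOver wiring follows; on the AlphaOver node, input 1 is the background and input 2 the foreground, and the node names are assumptions:

import bpy

tree = bpy.context.scene.node_tree
footage = tree.nodes['Image'].outputs['Image']     # names assumed
matte = tree.nodes['Multiply'].outputs['Value']

set_alpha = tree.nodes.new(type='CompositorNodeSetAlpha')
tree.links.new(footage, set_alpha.inputs['Image'])
tree.links.new(matte, set_alpha.inputs['Alpha'])

rgb = tree.nodes.new(type='CompositorNodeRGB')     # solid color background
over = tree.nodes.new(type='CompositorNodeAlphaOver')
# over.use_premultiply = True   # the Convert Premul option, discussed next
tree.links.new(rgb.outputs['RGBA'], over.inputs[1])
tree.links.new(set_alpha.outputs['Image'], over.inputs[2])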
Premultiplication controls the order in which multiple overlaid alpha values are calculated. Incorrectly set premultiplication values can cause unwanted artifacts or halo effects. In this example, the unconverted premultiplication values result in some artifacts on the colored background and around the edge of the figure. These are fixed by the use of the Convert Premul option. You can see the difference between the two rendered outputs in (reproduced in the color insert of the book).
Checking the alpha-overlaid foreground image over a colored background is also a good way to highlight the problem of color spill that often occurs when doing this kind of compositing. Because the subject was shot against an expansive, bright green background, there is a green tint to the light reflected from the background as well as background light that shows through transparent or semitransparent areas of the subject such as hair. You can clearly see the undesirable green spill in , which is repeated in the color insert of the book. You can zoom in on the Node Editor’s backdrop image with Alt+V to see this edge detail and zoom back out with V.
The solution to this problem is to adjust the color channels directly and to bring down the level of green in the image overall.
To tweak the individual R, G, B, and A channels of an image directly, use a Separate RGBA Converter node, described previously, to split up the values, and then use another Converter node, Combine RGBA, to put the values back together as a completed color image.
In this case, you need to bring down the overall green level. However, there is a danger to bringing the level too low. Remember, the character is holding a green object. Adjusting the green channel affects the color of green things disproportionately, so an adjustment that might not make the image as a whole look unnatural may easily alter the color of the doll so that it no longer looks green. There must remain enough energy in the green channel that this object still looks green.
There’s no single right answer to how to do this, but a simple solution is to average the energy levels of all three channels and to use this average as the green channel. This significantly reduces the strength of the green channel but ensures that the green channel remains stronger than the other two channels in places that are predominantly green to begin with. This can be done by using two Math nodes set to Add and one Math node set to Divide, as shown in . You can create math nodes by pressing Shift+A and choosing Converter > Math from the Add menu. Collecting those nodes in a single node group and connecting the red and blue channels straight through yields the node setup shown in . The difference between the image before and after adjusting the green channel can be seen in , also repeated in the color insert of the book.
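Here is a bpy sketch of that channel-averaging group: two Add nodes and a Divide node compute (R + G + B) / 3, which replaces the green channel while red, blue, and alpha pass straight through. The source node name 'Image' is an assumption:

import bpy

tree = bpy.context.scene.node_tree
src = tree.nodes['Image'].outputs['Image']     # the footage (name assumed)

sep = tree.nodes.new(type='CompositorNodeSepRGBA')
comb = tree.nodes.new(type='CompositorNodeCombRGBA')
tree.links.new(src, sep.inputs['Image'])

def math_node(op):
    node = tree.nodes.new(type='CompositorNodeMath')
    node.operation = op
    return node

add1, add2, div = math_node('ADD'), math_node('ADD'), math_node('DIVIDE')
tree.links.new(sep.outputs['R'], add1.inputs[0])
tree.links.new(sep.outputs['G'], add1.inputs[1])
tree.links.new(add1.outputs['Value'], add2.inputs[0])
tree.links.new(sep.outputs['B'], add2.inputs[1])
tree.links.new(add2.outputs['Value'], div.inputs[0])
div.inputs[1].default_value = 3.0

# The average becomes the new green channel; R, B, and A pass through
tree.links.new(sep.outputs['R'], comb.inputs['R'])
tree.links.new(div.outputs['Value'], comb.inputs['G'])
tree.links.new(sep.outputs['B'], comb.inputs['B'])
tree.links.new(sep.outputs['A'], comb.inputs['A'])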
You can use either the UV/Image Editor or the Node Editor itself to look at viewer output. By clicking the Backdrop option in the Node Editor header and selecting the View Alpha option, you can see the results of the composite directly in that window, as shown in . The V and Alt+V hotkeys, respectively, zoom this backdrop out and in.
You can use the composited image as the foreground to a more involved scene in the compositor by introducing a background, as shown in (and repeated in color in the book’s color insert). The image file is a sky map created by BlenderArtists.org member M@dcow and freely available at . You’ll find the sky_twighlight.jpg file among the downloadable files that accompany this book.
There are a number of areas where a matte can be fine-tuned. You can add a Blur node from the Add menu under Filter > Blur. Blur nodes can be used to soften the edge of the matte. Another filter node, Dilate/Erode, can be used to expand or shrink the white area of the matte, which can be useful for tightening, or choking, the matte around the subject. Blurring can be used on the matte itself before using the matte to set the alpha value of the foreground image or later in the composite. The Blur and Dilate/Erode nodes are shown in .
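A short sketch of one way to choke and soften a matte in bpy; the source matte name is assumed, and negative Dilate/Erode distances shrink the white area:

import bpy

tree = bpy.context.scene.node_tree
matte = tree.nodes['Multiply'].outputs['Value']    # finished matte (name assumed)

# Choke the matte inward, then soften its edge
erode = tree.nodes.new(type='CompositorNodeDilateErode')
erode.distance = -2           # negative values erode (tighten) the matte

blur = tree.nodes.new(type='CompositorNodeBlur')
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 3

tree.links.new(matte, erode.inputs['Mask'])
tree.links.new(erode.outputs['Mask'], blur.inputs['Image'])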
When you have composited the foreground image into a specific background, you can then do some further color adjustments. In (and repeated in color in the book’s color insert), you can see the results of using a slightly purple screen over only the most brightly lit highlights of the foreground image. If you experiment with the nodes presented in this chapter, you should be able to work out how to do this. The actual node setup is included as an exercise in “The Bottom Line” section at the end of this chapter.
Another thing you can do is to export the video and use it as an alpha overlay in the Video Sequence Editor. You can see how to do this in Chapter 11, “Working with the Video Sequence Editor.” If you render the image to an output and plan to use the alpha values, make sure that you render to a format that can represent alpha values, such as PNG (.png) or Targa (.tga). Also make sure that you have selected RGBA output in the Output panel of the Render properties window, rather than the default RGB, which does not encode alpha information.
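Scripted, the relevant output settings are just two lines (a sketch, using the 2.6x image settings API):

import bpy

settings = bpy.context.scene.render.image_settings
settings.file_format = 'PNG'    # or 'TARGA'
settings.color_mode = 'RGBA'    # keep the alpha channel, not just RGB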
As you’ve worked with slightly larger node networks, you may have noticed that changing values, reconnecting nodes, or making other adjustments or modifications can take a while because Blender must recalculate the effects of all the nodes each time a change is made. You can mitigate this problem by muting selected nodes when you don’t need them. You can mute and unmute nodes by pressing the M key with your cursor over the node. Muted nodes’ names appear in square brackets at the top of the node, and a red curved line is drawn across the face of a selected muted node.
This chapter has barely scratched the surface of what you can do with the Blender node system in terms of video compositing. There is a great deal more to learn about Blender’s compositing functionality, but it is even more important to study the fundamentals of how 2D images are composited with each other. Coupled with what you have learned here about working with nodes, a thorough grounding in compositing will give you all the knowledge you need to get exactly the effects you are after. For this information, there is no better place to look than Ron Brinkmann’s The Art and Science of Digital Compositing, Second Edition: Techniques for Visual Effects, Animation and Motion Graphics (Morgan Kaufmann, 2008). This is truly the bible of compositing and is a must for anybody serious about understanding what they are doing and why in compositing. There is also an excellent website full of information about the book and about compositing at . I highly recommend checking it out as you continue to deepen your skills in compositing with Blender.