Book: Mastering Blender

Part III

Video Post-production in Blender

Chapter 9

Compositing with Nodes

Blender’s versatile and powerful nodes system enables you to access and manipulate a huge variety of compositional elements to combine rendered images in exactly the way you want. You can use nodes to composite still images or video sequences that can later be edited in Blender’s nonlinear Video Sequence Editor (VSE), which you will learn about in Chapter 11, “Working with the Video Sequence Editor.” Here, you will look at the fundamentals of compositing with nodes, using render layers to re-create a scene from its component render passes. After that, you will follow an in-depth example of one of the most common tasks in live-action video compositing: the task of green screen compositing and garbage matting.

In this chapter, you will learn to

Composite rendered output using render layers and render passes

Pull a green screen matte from live-action video with composite nodes

Compositing with Render Layers and Passes

In Chapter 4, “Rendering and Render Engines,” you were introduced to the idea of render layers and render passes. In that chapter, you saw how constituents of a rendered scene such as shadows, specular highlights, and ambient occlusion could be rendered separately in passes. However, it wasn’t clear from that chapter how this could be useful. This is because we hadn’t yet begun talking about compositing. Once you start working with Blender’s node-based compositing system, the usefulness of render layers and render passes becomes very clear.

What Is Compositing?

Compositing is the process of combining data from two-dimensional images to create a new image. There are many situations in which compositing of some kind is necessary. You might want to combine live-action footage with CG. You might want to place a character or other element over a background that was filmed or rendered separately. You might want to break down complex 3D scenes into components to be rendered more efficiently and then build them up again in a comparatively less-resource-intensive 2D composite. You may simply want to tweak the color of parts of the scene or add some atmospheric effects.

To do this, it is often necessary to be able to work with different components of an image separately. For example, you can create an appealing glow effect by isolating the bright specular highlights of an image, blurring them, and overlaying the blurred highlights upon the original image. Although there are several ways to isolate bright points, being able to work with specular highlights on their own is a convenient way of doing this. Likewise, you might want to composite a CG character into a video scene, which requires that the CG character’s shadow appears to fall realistically onto the background video image. In this situation, working with the shadow pass only enables you to tweak the exact influence of the shadow on the background, so that you can get that aspect of the image exactly right without influencing other parts of the image that may need to be tweaked in different ways. You might want to pre-render time- and resource-intensive properties of a video in advance, such as ray-traced reflections and transparency, ambient occlusion, and indirect lighting, while keeping the color of an object so that it can be quickly changed on the fly in the compositor. This can save enormous amounts of rendering time if you need to experiment with different colors for an object in a fully rendered scene.

In Chapter 4 you saw how to separate elements of the render into render passes. This chapter deals in part with how to put these elements back together. The way this is done is to use render layers as input nodes in Blender’s node-based compositor.

What Are Nodes?

Very generally, nodes are a way of organizing and combining data. Blender currently has node-based systems for working with material data, texture data, and composite data. All of these systems share a common structure. Every node represents some data or some operation to be performed on data, and every node in a node graph has one or more input sockets, one or more output sockets, or both. The links, or lines between node sockets, define the input-output relationships between the nodes. All input sockets are on the left side of nodes, and all output sockets are on the right side of nodes, so node graphs always read from left to right.


When we say node graphs read from left to right, we mean logically. The actual placement of the nodes is completely free, but it's a good idea to try to maintain left-to-right placement of your nodes to keep the graphs readable.

If you’re having trouble getting your head around the idea of composite nodes, you can start by thinking of them as something similar to 2D layers in Photoshop or GIMP but without the constraint of having one directly on top of the next. Unlike 2D layers, composite nodes are nonlinear and structurally flexible. If 2D layers are a train track, nodes are a system of streets.
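The streets-versus-train-track analogy can be made concrete with a toy model. This sketch is plain Python, not Blender's API: each "node" is just a function, links are ordinary function application, and nothing stops one output from feeding two different inputs, which is exactly the flexibility 2D layers lack.

```python
# Toy model of a node graph: each "node" is a function, and links are
# function application. One output may feed many inputs, which is what
# makes node graphs nonlinear, unlike a fixed stack of 2D layers.

def input_node():
    # A 1x4 grayscale "image" standing in for a render layer.
    return [0.1, 0.9, 0.2, 0.8]

def blur_node(img):
    # Naive box blur: average each pixel with its immediate neighbors.
    out = []
    for i in range(len(img)):
        neighborhood = img[max(0, i - 1):i + 2]
        out.append(sum(neighborhood) / len(neighborhood))
    return out

def mix_node(a, b, fac):
    # Linear mix of two images, like a Mix node with the given Fac.
    return [x * (1 - fac) + y * fac for x, y in zip(a, b)]

source = input_node()             # one output socket...
composite = mix_node(source,      # ...feeding two different inputs:
                     blur_node(source),
                     fac=0.5)     # a pass-through branch and a blur branch
print(composite)
```

Mixing an image with a blurred copy of itself, as here, is the same branching pattern used by the glow effect described earlier in this chapter.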

A Closer Look at Render Layers and Nodes

The best way to understand composite nodes is to work with them in rendering a scene. In the following example, you’ll look at composite nodes and render passes within the context of a 3D scene that incorporates a variety of light effects including diffuse and specular lighting with lamps, ambient occlusion and environment lighting, and indirect lighting by emitting objects. The scene also includes ray-traced reflections. It is instructive to break the render down into individual render passes and then try to reconstruct the original render using the compositor.

You can create your own 3D scene with these effects to follow along here, or you can use the scene I’ve prepared for you in the downloadable files for this chapter, called scene.blend. However, be aware that if your scene includes effects not discussed here, such as transparency and refraction, it will require further work to reconstruct.

Open the .blend file with the 3D scene and open a Node Editor window from the menu, as shown in Figure 9.1, or by pressing Shift+F3 while hovering your cursor over a window.

Accessing the Node Editor window

c09f001.tif

You will be using composite nodes here (as opposed to material or texture nodes), so click the Composite Nodes button in the header of the Node Editor window, and then check the Use Nodes check box in the header. Also, in the Post Processing panel of the Render properties window, turn on compositing. When you do this, your window should appear as shown in Figure 9.2.

Activating nodes

c09f002.tif

Two nodes appear in the Node Editor. The node on the left is an input node. You can tell because it has sockets on only the right side, representing output sockets. This node will take a render layer input and feed its data into the compositor.

The node on the right is an output node. You can tell because it has sockets on only the left/input side. This node is a Composite node. It receives data from the node system and outputs the data as a finished, composited image. A node system can have only one Composite node.

In this very simple node setup, the image data from the render layer in the Render Layers node (called RenderLayer) is sent directly to the Composite node. The resulting compositor output will be exactly what was rendered on the render layer going into the node system.

Render layers are set in the Layers panel of the Render properties window, shown in Figure 9.3. By default, a single render layer exists, called RenderLayer. The field at the top of the panel lists the render layers and includes check boxes to set them as active for rendering or not.

The Layers panel

c09f003.tif

The next field down is the render layer name. Below this are several banks of layer buttons that look similar to the scene layer buttons in the 3D viewport header.

Scene The bank of buttons under the label Scene is the same bank from the 3D viewport header. It shows which scene layers are visible and active for rendering. By default only scene layer 1 (the upper-left button in the bank of buttons) is selected.
Layer To the right of this bank of buttons is another bank of buttons under the label Layer. These represent which scene layers are rendered in this render layer. By default (and in the figure) these are all selected. This means that this render layer will include data from all scene layers.
Mask Layers The Mask Layers bank includes buttons for layers whose content will not be rendered but instead will be replaced by alpha zero (transparent) areas, preventing things behind the mask objects from being rendered. To the left of the Mask Layers bank are fields for Light and Material groups, which enable you to restrict the render layer to rendering only specific groups of lights or materials.
Include Check Boxes Below this are the Include check boxes, which enable you to select which components of the image are rendered. These options are more general than render passes. You can choose to render or not render solid objects, halo materials, z transparency, sky color, edges, or strands here.
Passes Check Boxes At the bottom of the panel are the Passes check boxes. Here is where you choose which passes to render separately. You will see more about passes in the following section. By default, only the Combined pass and the Z pass are selected. The Combined pass includes all passes at once and does not allow access to the individual pass data in the compositor. The Z pass includes depth information.

Now render the scene, either by clicking the Image button in the Render properties area or by pressing F12. The scene will be rendered, sent to the compositor, and then output from the compositor as the final render. The nodes will appear something like Figure 9.4, and the composited output can be viewed as the Render Result image in the UV/Image Editor, as shown in Figure 9.5. The image is repeated in color in the color insert of this book.

A render layer input node

c09f004.tif

The render layer in the UV/Image Editor

c09f005.tif

Reconstructing an Image with Render Passes

Render passes enable you to render components of the full render individually, so that they can be accessed independently. In this section, you’ll render the various passes separately and then re-composite them in the compositor to mimic the original combined render.

This is intended only as an exercise and a way to look at the contributions of individual passes in isolation. You won’t often have cause to re-create a combined render identically with render passes in real life. Usually, you will carry out operations on individual render passes or use the data from render passes in different contexts to create something that would not have been possible in a combined render. However, in order to do this, you need to know what data is available to you in render passes.

Another thing to note is that internally, Blender arrives at the combined pass in a slightly different way than it arrives at a composited image of individually rendered passes. That means that in theory, some effects will render differently in the combined pass than in a composited output. While it is almost always possible to re-create a combined pass by compositing constituent passes, some cases require significant tweaking to get exactly right. In particular, images that make use of transparency and refraction can be a challenge. Fortunately, the compositor continues to evolve and improve. For the time being, we'll stick to an example that can be reconstructed fairly simply, in the file scene.blend.

In order to render individual passes so that they can be accessed in the compositor, you need to check the passes you want in the Passes area of the Layers panel in the Render properties. Check Color, Diffuse, Specular, Shadow, Emit, AO, Environment, Indirect, and Reflection, as shown in Figure 9.6.

Selecting render passes

c09f006.tif

When you re-render (press F12) the scene, you'll see that your Render Layers input node now has output sockets corresponding to each of the available render passes. Figure 9.7 shows examples of three different passes: the Indirect lighting pass, the Shadow pass, and the Diffuse pass. These images are repeated in color in the color insert of this book.

You can also look at the different passes in the UV/Image Editor window by choosing the pass you want to see from the Pass drop-down menu in the header, as shown in Figure 9.8. This figure shows the Environment lighting pass.

Individual render passes

c09f007.tif

Viewing passes in the Image Editor

c09f008.tif

Viewer Nodes

As mentioned previously, you can have only one Composite node in your node system. This node represents the final output of the compositor, and it wouldn't make sense to have more than one. However, it is often the case that you want to look at an arbitrary node in the system and send its output to an image that can be viewed in the Image Editor or exported. To do this, use a different type of output node called a Viewer node. A Viewer node is added with the Add menu, which is opened by pressing Shift+A, as shown in Figure 9.9.

Adding a Viewer node

c09f009.tif

As you can see in Figure 9.10, the Viewer node is very similar to the Composite node, but you can have as many Viewers as you wish. To connect the output of a node to a viewer, hold the left mouse button and drag the mouse from the output socket of the node to the input socket of the Viewer node. Alternatively, Shift+Ctrl+left-clicking any node will send that node's image output to the active Viewer node. Yet another way to add a connection between nodes is to select any two nodes and press the F key, which is analogous to how you would create an edge between two vertices when mesh modeling.

Viewer and Composite nodes

c09f010.tif

Using Multilayer EXR Images for Input Nodes

Render Layer input nodes have a few drawbacks. The main problem with Render Layer nodes is that they are not persistent. As you probably know from experience rendering images in Blender, you need to output a rendered image and save it to a file in order to have it later. If you render an image in Blender and then close and reopen the .blend file, the render buffer is cleared, and the image must be re-rendered. The same is true of render layers. If you reopen this .blend file right now, you will have lost the contents of the input node (and the compositor's output will also be blank). Rendering these constituent render layers is likely to be the most time-consuming part of the process, so it is not optimal to have to re-render every time you open the file.

The solution to this is to output the render layer to an image file and then to use an Image input node instead of a Render Layer node as your input node. But you can’t simply use a .png file for this, because .png and other standard image files have no way to internally represent separate render layers. You must use the MultiLayer format for this. MultiLayer is based on the .exr file format, but it is designed specifically for use in Blender’s compositor. Let’s try using the MultiLayer format now:

1. Select MultiLayer from the drop-down menu in the Output panel of the Render properties, as shown in Figure 9.11.

Outputting to MultiLayer format

c09f011.tif
2. In order to render the render layer to a MultiLayer .exr image, you need to deselect the Compositing option in the Post Processing panel of the Render properties area, as shown in Figure 9.12. By default, Compositing is checked, meaning that if composite nodes are found in the .blend file, the render output will be the compositor output. When you uncheck this, Blender uses the internal renderer's own output as the final render output, just as it would if there were no composite nodes set up.

Deselecting the Compositing option

c09f012.tif
3. Now render the scene by pressing F12 and save to an image by pressing F3. This will save your render to an .exr format image file.
4. You can now use this as an input for the compositor by adding an Image type input node from the Add menu, as shown in Figure 9.13. The node will look like Figure 9.14. Note that all of the desired output sockets are present.

Adding an Image input node

c09f013.tif

The Image input node

c09f014.tif
5. Delete the Render Layer node by selecting it and pressing the X key; then connect the Image node to the node system, as shown in Figure 9.15.

Replacing the Render Layer node with the Image node

c09f015.tif
6. Similar to the Fill tool for modeling, the Make Links tool (F) creates a link between two nodes. However, Make Links makes only an educated guess, and it may connect two nodes using an incorrect input or output. Try selecting both your Image and Composite nodes and pressing F to connect them.

Combining Passes

RGBA images can be composited together by carrying out mathematical operations on the color or alpha values for each pixel of each image. Each render pass is represented as an RGBA image, so we use standard image operations to combine the passes. The two main operations used in this example are multiplication and addition.

Multiplication

Multiplication is appropriate for render passes that darken an image. Multiplication takes the RGB values of each pixel of one image and multiplies them by the corresponding values of the corresponding pixel of the second image. The output is the image made up of these product pixels.

Because all RGB values range from 0.0 to 1.0, multiplication can only leave a value the same or darken it. Any pixel multiplied by a black pixel (RGB value 0, 0, 0) will result in a black pixel. Any pixel multiplied by a white pixel (RGB value 1, 1, 1) will result in a pixel of the same color as the original pixel. A pixel multiplied by another pixel that is neither black nor white will have its RGB values diminished according to the RGB values of the second pixel. This is why multiplication is appropriate for render passes that darken an image. The main two render passes that do this are the Shadow pass and the Ambient Occlusion (AO) pass.
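The arithmetic just described can be checked with a few lines of plain Python. This is a sketch of the per-channel math a Multiply operation performs, not Blender's API:

```python
def multiply_pixels(a, b):
    """Per-channel multiply of two RGB pixels, values in 0.0-1.0."""
    return tuple(x * y for x, y in zip(a, b))

base = (0.8, 0.4, 0.6)

# Multiplying by white (1, 1, 1) leaves the pixel unchanged.
print(multiply_pixels(base, (1.0, 1.0, 1.0)))  # (0.8, 0.4, 0.6)

# Multiplying by black (0, 0, 0) always yields black.
print(multiply_pixels(base, (0.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)

# Multiplying by a mid-gray shadow value darkens every channel,
# which is why Shadow and AO passes are multiplied in.
print(multiply_pixels(base, (0.5, 0.5, 0.5)))
```

Because every factor is at most 1.0, no channel can ever get brighter under this operation, which is the whole point of using Multiply for the Shadow and AO passes.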

First, let’s work with the Shadow pass:

1. Add a Mix node by choosing Color > Mix from the Add (Shift+A) menu, as shown in Figure 9.16.

Adding a Multiply Mix node

c09f016.tif
2. Choose Multiply from the drop-down menu, set the Fac value to 1.0, and connect the input node’s Diffuse output socket and the Shadow output socket to the first and second input sockets on the Multiply node.
3. Send the output from the Multiply node to a Viewer node, as shown in Figure 9.17.

A Multiply node with Diffuse and Shadow inputs

c09f017.tif

Figure 9.18 shows the Diffuse pass, the Shadow pass, and the result of multiplying the two passes together. As you can see, we've taken the first step in reconstructing the combined pass (sometimes also called the beauty pass). These images are repeated in color in the color insert of this book.

The Diffuse pass multiplied by the Shadow pass

c09f018.tif

The AO pass behaves similarly to the Shadow pass:

1. Factor in AO by duplicating the Multiply node (Shift+D), as shown in Figure 9.19. Note that when there is no second image input, the Multiply node multiplies the first image by a solid color. If you set this to white, the Multiply node will not change the value of the first input.
2. Connect the output of the first Multiply node and the AO output socket to the input sockets of the second Multiply node, as shown in Figure 9.20. The result is shown in Figure 9.21.

Duplicating the Multiply node

c09f019.tif

Multiplying the AO pass

c09f020.tif

With AO multiplied

c09f021.tif

Addition

Now, we turn to the lightening factors. All of the remaining render passes in some way add illumination to the scene, so they will all be added in some way (although a few will need some further processing before being added, as you will see).
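Add is the mirror image of Multiply: summing with black (0, 0, 0) leaves a pixel unchanged, and nonzero values can only brighten it. A quick sketch of the per-channel arithmetic in plain Python (the optional clamp mirrors the Clamp option on Blender's Mix node, which is off by default):

```python
def add_pixels(a, b, clamp=False):
    """Per-channel add of two RGB pixels; optionally clamp to 1.0."""
    out = tuple(x + y for x, y in zip(a, b))
    if clamp:
        out = tuple(min(v, 1.0) for v in out)
    return out

diffuse = (0.4, 0.2, 0.3)
specular = (0.5, 0.5, 0.5)

# Adding black changes nothing, which is why a second socket set to
# solid black acts as a pass-through.
print(add_pixels(diffuse, (0.0, 0.0, 0.0)))

# Adding a specular highlight brightens every channel.
print(add_pixels(diffuse, specular))

# With clamping enabled, no channel can exceed 1.0.
print(add_pixels((0.8, 0.8, 0.8), specular, clamp=True))
```

This pass-through property of black is exactly what makes an Add node with an unconnected, black second input safe to leave in the chain.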

1. To get started, duplicate one of the Multiply nodes and select Add from the drop-down menu.
2. Connect the output socket of the second Multiply node to the first input socket of the Add node, as shown in Figure 9.22. Note that setting the color of the second input socket to black ensures that the output of the Add node is identical to the first input image, because the RGB values of the two inputs are summed channel by channel, and adding black (0, 0, 0) adds nothing.

Adding an Add node

c09f022.tif
3. Connect the Specular pass socket from the input node to the second input socket of the Add node, as shown in Figure 9.23. Your image now has specular highlights, as shown in Figure 9.24.

Adding specular highlights

c09f023.tif

The image with specular highlights

c09f024.tif
4. Duplicate the Add node and feed the output of the Indirect lighting socket to the bottom input, just as you did with the Specular socket, as shown in Figure 9.25. This will add the light cast on the other objects by the red glow of the sphere, although it does not add the glow to the sphere itself, which is provided by the Emit pass.

Adding Indirect lighting

c09f025.tif
5. Duplicate another Add node and add the Emit pass. The result so far will appear like Figure 9.26. The image is repeated in color in the color insert of the book.

The image with emitted light

c09f026.tif
6. Duplicate the Add node again and add the Reflect pass. The result should appear as shown in Figure 9.27.

The image with reflections added

c09f027.tif

The image is repeated in color in the color insert of this book, and you should refer to that, because color here becomes an issue. As you can see, something appears to have gone wrong. The reflections on the monkey head do not appear as you might have expected them to appear. Rather than being purple, they seem to be tinted green.

This is a result of how the Reflect pass works. Although it is an additive pass (meaning that it is intended to be added to the composite) it contains both positive and negative values. These are calculated based upon the tonal values of the fully lit image. Because we haven’t yet added environment light, the Reflect pass is subtracting too much from the RGB values of the surface, affecting its color. When we add the environment light, this will be resolved. Of course, if we had added these passes in a different order, with the Reflect pass last, this apparent problem would not have come up.

Incidentally, it is possible to work around this problem by creating a dummy object with a solid black material and deriving the Reflect pass only from the dummy object, rendered on a separate render layer. But since this is not a problem here, we’ll simply continue by adding environment light:

1. Duplicate another Add node.
2. Add the Environment pass, as shown in Figure 9.28.

The result is shown in Figure 9.29 (repeated in color in the color insert). It's not pretty; it looks like glow-in-the-dark toys from the seventies. For one thing, the image seems far too white. For another thing, the monkey head still appears to be green. Clearly the Environment pass was not what we wanted here.

Adding environment light

c09f028.tif

The image with the Environment pass added

c09f029.tif

Here's why the Environment pass is not quite what you should add here. The Environment pass provides global values for the environment lighting, but the actual tonal energy that should be added to the image depends on the material colors of each object. That is to say, before the environment lighting is added, it must be multiplied by the Color pass, as shown in Figure 9.30 and repeated in the color insert of the book. This makes sense, because the monkey head obviously needs some purple.

We now have an image that looks pretty good. We can compare the output of our composite node network with the original Combined pass in Figure 9.31, repeated in color in the color insert.

Multiplying Environment by Color

c09f030.tif

Comparing the composite with the Combined pass

c09f031.tif

It looks pretty good, but the two are not the same. It is clear that the red light from the Indirect lighting pass is considerably brighter in the composited image. But by now, you probably have a pretty good guess about why this might be. In fact, like the Environment lighting pass, the Indirect lighting pass also needs to be multiplied by the color values to get the correct additive values. The setup for this is shown in Figure 9.32. As you can see, once you've done this, your compositor output will be identical to the Combined pass.

In fact, this is probably also how the Diffuse pass should be, but at present Blender automatically incorporates color information into the Diffuse pass. If, for whatever reason, you need diffuse values separate from the material colors, you would need to create solid-white dummy objects and render the Diffuse pass from those.
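Putting the whole exercise together, the per-pixel arithmetic the node network performs can be summarized in a few lines. This is a plain-Python sketch with made-up single-channel values, not Blender's API, and the exact pass semantics can vary between Blender versions:

```python
def reconstruct(diffuse, shadow, ao, spec, emit, reflect,
                indirect, env, color):
    """Recombine render passes for one channel of one pixel.

    Values are floats in 0.0-1.0, except reflect, which may be
    negative, as noted for the Reflect pass above. The Diffuse pass
    already incorporates material color, so it is not multiplied by
    the Color pass here.
    """
    out = diffuse * shadow * ao        # darkening passes: multiply
    out += spec + emit + reflect       # straightforwardly additive passes
    out += indirect * color            # indirect light needs material color
    out += env * color                 # environment light does, too
    return out

# Hypothetical values for a single channel of one pixel:
value = reconstruct(diffuse=0.5, shadow=0.8, ao=0.9,
                    spec=0.1, emit=0.0, reflect=-0.02,
                    indirect=0.2, env=0.3, color=0.6)
print(round(value, 3))
```

Reading the function top to bottom retraces the node chain built in this section: two Multiply nodes, then a string of Add nodes, with the Environment and Indirect inputs each pre-multiplied by the Color pass.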

Node Groups

To simplify node networks, you can select a group of nodes and group them into a node group, which is represented visually by a single node, as shown in Figure 9.33.

To do this, select the nodes by using the B key and border selecting with your mouse; then press Ctrl+G to create a group (Alt+G ungroups the grouped nodes).

Multiplying Indirect light by Color

c09f032.tif

Grouping nodes

c09f033.tif

You can edit the internal nodes of a node group by pressing the Tab key to open the group for editing, as shown in Figure 9.34.

Editing grouped nodes

c09f034.tif

At this point, you should have a pretty clear idea of what render passes contribute. This will be important when you go on to apply specific compositing effects. In the next section, you’ll step away from render passes and take a look at a completely different use of the compositor.

Pulling a Green Screen Matte with Nodes

A big part of video compositing is the task of extracting elements from their original surroundings so as to be able to composite them freely into different visual environments. There are several ways to do this, depending on what kind of source material you have to work with. Typically, the process involves the creation of a matte, which is a special image used to suppress parts of the original image and allow other parts of the image to show through.
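Conceptually, a matte is just a per-pixel weight: where the matte is 0, the background shows through, and where it is 1, the foreground survives. The classic "over" blend can be sketched in plain Python (one grayscale value per pixel here for brevity; Blender's AlphaOver node performs the equivalent math per channel):

```python
def composite_over(fg, bg, matte):
    """Blend foreground over background using a matte in 0.0-1.0."""
    return [f * m + b * (1.0 - m) for f, b, m in zip(fg, bg, matte)]

foreground = [0.9, 0.9, 0.9, 0.9]   # one bright scanline
background = [0.1, 0.1, 0.1, 0.1]   # one dark scanline
matte      = [0.0, 1.0, 1.0, 0.5]   # 0 = hide fg, 1 = keep fg

print(composite_over(foreground, background, matte))
# first pixel: pure background; middle pixels: pure foreground;
# last pixel: a 50/50 blend, as at a soft matte edge
```

The rest of this section is about producing that `matte` list automatically from the colors of a green screen shot.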

The Blender composite node system enables you to take multiple images, videos, renders, or other 2D information sources as input and then perform a wide variety of operations on them to combine them in an endless number of ways. The operations are performed in a nonlinear way and represented as nodes on a graph that can be edited directly. In this section, you will see how this system can be used to perform the common task of pulling a green screen matte from a live video. After you understand how to do this, many other uses of the composite node system should become clear.

Working with Green Screen Video

When you know in advance that you will need to composite a character or object into a different background scene, the typical approach is to shoot the foreground action against a colored background screen that can be easily eliminated by compositing techniques such as those described in this section. The most common type of colored screen currently used is a green screen.

To shoot green screen video, you need access to a green screen. Most multimedia production studios, including those of many universities and technical colleges, have all you need to take green screen footage. Alternatively, it is not especially difficult to build your own green screen equipment. You can find instructions on the Web for doing this.

For the tutorials in this chapter, you can find a brief clip of green screen video in the form of a sequence of JPEG images in the greenscreen subdirectory on the accompanying CD. The clip is from footage shot by Norman England for his film The iDol and features a character walking toward the camera, as shown in Figure 9.35 (also repeated in color in the color insert of this book). There are a couple of challenges that this particular clip presents for the compositor, and you'll see what those are and how they can be solved in Blender over the course of this chapter.

A video shot against a green screen

c09f035.tif

More on Codecs
It is important for you to be aware of codecs if you plan to shoot and edit or composite video. Not all video codecs can be decoded by open-source software, and professional-grade-equipment makers often assume that you will be using specific proprietary tools to work with your video. The footage included on the CD was originally shot on a Panasonic VariCam high-definition (HD) video camera. This camera records video by using the DVCProHD codec. Proprietary decoding software for this codec is available for Mac and Windows, but I am not aware of any open-source options for working with this codec.
It is always a good idea to know what codec you will be working with before you shoot, and the restrictions are even tighter if you are working in an open-source or Linux-based pipeline. Do your research before spending money and time on equipment rental and shooting.

Working with Composite Nodes

Creating, or pulling, a green screen matte is not difficult, but it requires that you familiarize yourself with Blender’s composite node system.

Let’s discuss a few general points about composite nodes before we begin.

The node graph, sometimes called a noodle, is made up of nodes of several distinct types. Depending on the node type and its specific properties, each node has at least one input socket or one output socket and may have multiple input and/or output sockets. Input sockets are on the left side of the nodes, and output sockets are on the right side of the nodes.

Nodes are connected to each other by curved lines, or links, that extend from an output socket of one node to an input socket of the other node.

To connect two nodes, hold down the left mouse button while you move the mouse from an output socket of one node to the input socket of another node, where you want the connection to be established.

To break a connection, hold down the left mouse button while moving your mouse in a cutting motion across one of the connecting lines between nodes.

To set up the nodes for pulling the green screen matte, open a fresh scene in Blender and follow these steps:

1. Open a Node Editor window in a fresh Blender scene, as shown in Figure 9.36.
2. In the header, select the Composite Nodes button and click Use Nodes, as shown in Figure 9.37. The default composite node setup will appear with a single Render Layers input node on the left and a Composite output node on the right, as shown in Figure 9.38.
3. At this point, you don’t yet need a Render Layers input node, so click that node and delete it with the X key. Instead, use the original green screen video as the main input. Images and video can both be imported into the node system by using an Image node. Add that by pressing Shift+A and choosing Input > Image.
4. Click Open on the node.

Selecting the Node Editor window

c09f036.tif

Selecting the Composite Nodes button and Use Nodes

c09f037.tif

Default composite node setup

c09f038.tif
5. In the file browser window, navigate to the directory where you have copied the green screen image sequence from the CD. Select the first image, called 0001.jpg, as shown in Figure 9.39, and then click the Open Image button.
6. Choose Image Sequence from the Source drop-down menu and enter 240 in the Frames field.
7. Check the Auto-Refresh check box. The resulting node setup will look like Figure 9.40. Try advancing 10 frames by holding Shift and pressing the up-arrow key to make sure that the image updates.

Selecting an image sequence

c09f039.tif

The loaded image

c09f040.tif

It’s important to be able to see what you are working with. The Composite node represents the final rendered output, and there should be only one of these connected to the node graph. However, you can have as many Viewer output nodes connected to the graph as you want, and whichever one you select at any moment will display immediately in the UV/Image Editor window when Viewer Node is selected from the drop-down menu. The Viewer node is added by pressing Shift+A and choosing Output > Viewer from the Add menu.

The first thing you will use multiple Viewer nodes for is to take a close look at the individual color channels of the image. This is very helpful to get a sense of the actual values that you are working with, and it can help to clarify how green screen matte pulling works.

1. To do this, first press Shift+A and choose Converter > Separate RGBA, as shown in . The new Converter node has a single Image input socket and four output sockets representing the red, green, blue, and alpha (RGBA) channels.
2. Add three new Viewer nodes by selecting the first one you added and duplicating it with Shift+D, and connect them to the three color channels, as shown in (reproduced in color in the color insert of the book).

Adding a Separate RGBA node

c09f041.tif

Viewing the individual color channels

c09f042.tif

As you’d expect, the highest-contrast channel is the green channel, which has a high-intensity background thanks to the green screen. The red channel, on the other hand, is much more uniformly distributed. The process of pulling a green screen matte depends on this difference in intensity ranges between the green and red channels.

Now you can use the difference in channel intensities to pull the matte itself. Blender’s node system includes specific nodes for creating mattes in fewer steps, but in order to understand exactly what is happening, it is good to do it yourself by hand. Also, walking through the process will make the subsequent garbage-matting step easier to understand. The idea here is to subtract the red channel of the image from the green channel by using a Subtract node.

1. Add this node by pressing Shift+A and choosing Color > Mix from the menu and then choosing Subtract from the drop-down menu on the node. Because there is proportionately less red in the background, this step results in darkening the foreground considerably more than the background, as you can see in . This correctly separates the foreground from the background.

Subtracting the red channel from the green channel

c09f043.tif
2. Next add a ColorRamp node by pressing Shift+A and choosing Converter > ColorRamp.
3. Instead of manually connecting the color ramp, you can insert it directly into the noodle by dragging it over the link between your Subtract node and your Viewer node. Use this as shown in to invert the light and dark areas and to push the contrast to solid black and white.
4. By adjusting the location of the value change along the length of the color ramp, you can adjust where along the grayscale range the black/white split occurs. This has the effect of enlarging or shrinking the black and white areas in the image. The resulting green screen matte should look something like the one shown in .

Inverting and pushing the contrast with a ColorRamp node

c09f044.tif

The basic green screen matte

c09f045.tif
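The subtract-and-ramp logic above can be sketched per pixel in plain Python. This is only an illustrative sketch, not Blender's actual implementation; the `split` value plays the role of the position of the black/white transition on the color ramp.

```python
def pull_matte(pixel, split=0.8):
    """Pull a green screen matte value from one (r, g, b) pixel in 0.0-1.0.

    Mirrors the node steps: Subtract (green minus red), then a ColorRamp
    that inverts the result and pushes it to solid black or white.
    """
    r, g, b = pixel
    diff = max(g - r, 0.0)   # Subtract node: the green screen stays bright
    inverted = 1.0 - diff    # ColorRamp: flip so the subject becomes light
    # Hard black/white split, like a ColorRamp with both stops pushed together
    return 1.0 if inverted >= split else 0.0

# A green screen pixel maps to black; a neutral foreground pixel maps to white
print(pull_matte((0.2, 0.8, 0.3)), pull_matte((0.6, 0.55, 0.5)))
```

Lowering `split` enlarges the white area of the matte, which corresponds to sliding the value change along the color ramp as described in step 4.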

Using the AnimAll Add-on

The green screen matte you pulled in the previous section is a pretty good start, but it is immediately clear that it will not be sufficient to distinguish the character from the background. The problem is that the green screen does not perfectly cover the entire background. There are pieces of hardware and other non-green areas that have been marked as white in the matte. The solution to this is a process called garbage matting. It should soon be obvious to you how it got its name.

In the simplest cases, garbage matting may just be a matter of covering areas of the matte with blocks of black. However, our example has some added complications. The main problem is that the character is approaching the camera. As he comes closer to the camera, his silhouette interacts directly with the debris in the background, as you can see in . This results in a green screen matte like the one shown in , where there is no automatic, color-based way to distinguish between the foreground figure and the background garbage. Not only does the background need to be matted out, but it also needs to be done in a way that distinguishes the correct outline of the foreground figure as it moves.

A complication for garbage matting

c09f046.tif

The problematic green screen matte

c09f047.tif

This problem can be fixed by using a Curve object to create a simple animated garbage matte. The solution is a simple form of rotoscoping, which involves creating images by hand on a frame-by-frame or nearly frame-by-frame basis. The AnimAll add-on enables you to do this more easily than was possible in the past. However, as I write this, powerful new tools for masking in the compositor are being developed. If you are working in Blender version 2.64 or later, mattes for compositing should be made in the Movie Clip Editor as described in Chapter 10, “Advanced 3D/Video Compositing.”

To set up the animated garbage matte in Blender version 2.63 or previous, follow these steps:

1. To use the AnimAll add-on, you must first activate the add-on. You’ll find it in the Addons panel of the User Preferences window under the Animation add-ons, as shown in .

Activating the AnimAll add-on

c09f048.tif
2. As mentioned previously, you will create the garbage matte by using curves and the AnimAll add-on. Curves are 3D objects, so the garbage matte itself will be in the form of a 3D render. First, open a 3D viewport in Blender and enter Camera view by pressing 0 on the number pad.
3. In the Dimensions panel of the Render properties window, set the output dimensions to HDTV 1080p, as shown in . This is to ensure that the dimensions of the rendered matte are the same as the dimensions of the green screen footage.

Setting the correct dimensions for the camera

c09f049.tif

To see the video from the Camera view, you must import the video as a background image sequence:

1. In the Properties Shelf of the 3D viewport, which you can toggle by pressing the N key, check the Background Images check box, and load the image sequence.
2. In the file browser, navigate to the greenscreen directory, select the first image (0001.jpg), and then click Open Image to import the image.
3. Change the Axis to Camera and the Source to Image Sequence, and type 240 into the Frames field, as shown in . When you have done this, the image will appear in the Camera view, as shown in .

Importing the background image sequence

c09f050.tif

The 3D camera view with background image

c09f051.tif
4. Make sure that your 3D cursor is located a reasonable distance in front of the camera, because this determines where new objects will be added. While in the Camera view, add a Bezier circle as shown in by pressing Shift+A and choosing Curve > Circle.

Adding a Bezier circle

c09f052.tif
5. Check Align To View in the Tool Shelf of the 3D viewport, which you can toggle with the T key, as shown in . The resulting circle should look similar to the one shown in (a color version is found in the color insert of the book). You should also see the AnimAll panel in the Tool Shelf.

Checking Align To View

c09f053.tif

The Bezier circle and AnimAll panel

c09f054.tif

If the size of your circle in the Camera view is somewhat different from what is shown in the figure, don’t worry. You will be rescaling the curve shortly anyway. Just make sure that the curve is far enough away from the camera that you can see and work with the whole curve easily.

6. Enter Edit mode with the Tab key. Select all the control points of the curve and press the V key to snap the control points into Vector mode, as shown in . This will force each control-point handle to point at the adjacent one, causing the curve to take the shape of a square.

Snapping the control points to angles with the V key

c09f055.tif
7. Rotate the curve 45 degrees and scale it along the local x- and y-axes by pressing the S key followed by X twice, and then the S key followed by Y twice, to get the shape shown in .
8. In the Curve properties area, set the curve to be a 2D curve, as shown in .

Rotating and scaling the curve along the local x- and y-axes

c09f056.tif

Setting the curve to 2D

c09f057.tif
9. In the Object properties area, ensure that the Curve object is set to Solid draw type, as shown in . Press the Z key to see the curve in Solid viewport shading mode. This will be the garbage matte and will eventually be used to black out the unwanted areas of the green screen matte.

Setting the object to Solid draw type

c09f058.tif
10. Select the inside (right-hand) edge of the curve by pressing the B key and using the Border select tool, and then subdivide by pressing the W key and choosing Subdivide, as shown in .

Subdividing the inside edge

c09f059.tif
11. Set the Number Of Cuts to 8 in the Subdivide operator pane on the Tool Shelf, as shown in . The subdivided curve, shown in wireframe, should appear like (reproduced in color in the color insert of the book).

Setting the number of cuts

c09f060.tif

The subdivided curve in wireframe

c09f061.tif

Animating the Garbage Matte

Now that you’ve set up the curve and activated the add-on, you can animate the garbage matte to cover the necessary background throughout the entire clip. To do that, follow these steps:

1. Position the points so that the matte covers everything on the left side of the background. Advance to a frame just before the outline of the figure begins to intersect with the piece of hardware you want to matte out, and position the curve’s control points so that they separate the figure from the hardware, as shown in .
2. Insert a keyframe for this curve shape by pressing the Insert button on the AnimAll panel, as shown in .
3. Hold Shift while pressing the up-arrow key on your keyboard to advance 10 frames at a time. Each time you advance, adjust the positions of the curve’s control points so that the matte covers the background and follows the contours of the foreground figure in places where background garbage and the figure intersect, as shown in . It’s not necessary for the matte to follow the entire contour of the figure—it’s only necessary to distinguish in those places where the background garbage overlaps with the figure. When the figure has filled the screen, your matte should be keyed completely out of the Camera view.

Setting the first keyframe shape

c09f062.tif

Inserting a keyframe in the AnimAll panel

c09f063.tif
4. When you’ve finished, scrub back through the sequence to make sure your matte is accurate between the 10-frame increments, and make the necessary adjustments. You can toggle the Bezier handles into view to get more control over the shape of the matte where necessary.

A small bit of garbage on the right side of the frame is also in need of matting. Setting up the matte to do this is basically the same process as the left-side matte but on a smaller and simpler scale. Setting up this matte is presented as an exercise at the end of this chapter.

5. After you have set the necessary keyframes at 10-frame intervals starting at frame 1, go back and start at frame 5 and advance through the sequence again at 10-frame intervals, adjusting any places where the matte pulls away from where it should be.

Masking the area around the edge of the figure

c09f064.tif
6. When you have finished doing this, go through the entire sequence frame by frame to make any further adjustments and add whatever further keyframes are necessary. The final matte should look something like the one shown in .

The complete animated garbage matte

c09f065.tif

Using Render Layers and Nodes

As you saw, the garbage matte is created in the 3D space. It can be rendered simply by unchecking Compositing in the Post Production panel of the Render properties and rendering in the ordinary way. Remember, Render Layers input nodes are used to incorporate renders from the 3D view into the node network. Render layers enable you to organize your rendered scene in such a way that different parts of the same scene can be used as separate input nodes in the compositor. Whatever information you want to be included in a single node should be organized into a distinct render layer. Objects are divided among render layers by their placement on ordinary 3D viewport layers.

In this example, the garbage matte is on layer 1 in the 3D viewport. By default, the first render layer is set to include objects on all 3D viewport layers. In this example, there won’t be any other 3D content to worry about, so the default Render Layers setup is fine.

1. Return to the green screen node setup you created previously. In the Node Editor window, add a Render Layers node by pressing Shift+A and choosing Input > Render Layers, as shown in . By default, this node will represent render layer 1.

Adding a Render Layers node

c09f066.tif
2. The solid Curve objects in your 3D scene have alpha (opacity) values of 1.0. The background sky has an alpha value of 0. By connecting the Alpha output socket of the Render Layers node to a Color input socket of a Color Invert node (accessible from the Add menu by choosing Color > Invert) and checking RGB, you convert this to a black and white color image with the garbage matte objects in black and the background in solid white, as shown in .

The solution now is simply a matter of multiplying the color values of the alpha-inverted black and white garbage matte image and the original green screen matte you pulled previously. If these two mattes are multiplied together, only the areas where both mattes are white (that is, have a value of 1) will be white. All other areas will be multiplied by zero and so will turn out black.

1. Add a Multiplication node by pressing Shift+A and choosing Color > Mix from the Add menu and then selecting Multiply from the drop-down menu on the node.
2. Make sure the Factor value is 1.0; then connect both mattes to this node’s input sockets, and connect the output socket of this node to the Composite node, as shown in . The resulting rendered matte should look like .

Inverting the alpha channel and converting it to black and white

c09f067.tif

The node setup so far

c09f068.tif

The intersection (product) of both mattes

c09f069.tif
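The Multiply node's effect reduces to a per-pixel product. A minimal sketch, operating on flat lists of matte values rather than real images:

```python
def combine_mattes(green_matte, garbage_matte):
    """Multiply node with Factor 1.0: a pixel stays white (1.0) only if it
    is white in BOTH mattes; any zero in either matte blacks it out."""
    return [g * m for g, m in zip(green_matte, garbage_matte)]

# Only positions white in both inputs survive
print(combine_mattes([1.0, 1.0, 0.0, 1.0], [1.0, 0.0, 1.0, 1.0]))
```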

Using Alpha Overlays and Premultiplication

You now have a nice green screen matte with a solid black background and a solid white foreground in the shape of the character. You can group the whole node setup by selecting all of the nodes between the input and output nodes and pressing Ctrl+G. This matte will be used to set the alpha value of the foreground image so that the white areas (the character) are fully opaque (alpha 1) and the background areas are fully transparent (alpha 0). The way to do this is to use a Set Alpha node, which you can add by pressing Shift+A and choosing Converter > Set Alpha from the Add menu, as shown in . Connect the original image to the Image socket and connect the matte to the Alpha socket.

To test the effect, add a solid color input as a background by pressing Shift+A, choosing Input > RGB from the Add menu, and then adding an AlphaOver node by choosing Color > AlphaOver and connecting them all, as shown in (reproduced in the color insert of the book).

Adding a Set Alpha node

c09f070.tif

Node setup with a colored background and an AlphaOver node

c09f071.tif
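The Set Alpha and AlphaOver steps can also be written out per pixel. This is a sketch of the underlying math with straight (un-premultiplied) alpha, which is the simple case; the premultiplied variant is discussed next.

```python
def set_alpha(pixel, matte_value):
    """Set Alpha node: attach the matte value as the pixel's alpha channel."""
    r, g, b = pixel
    return (r, g, b, matte_value)

def alpha_over(fg, bg):
    """AlphaOver with a straight-alpha foreground over an opaque background:
    result = fg * a + bg * (1 - a), applied to each color channel."""
    r, g, b, a = fg
    br, bgr, bb = bg
    return (r * a + br * (1 - a),
            g * a + bgr * (1 - a),
            b * a + bb * (1 - a))

# A fully opaque foreground hides the background; alpha 0 shows it completely
print(alpha_over(set_alpha((1.0, 0.0, 0.0), 1.0), (0.0, 0.0, 1.0)))
print(alpha_over(set_alpha((1.0, 0.0, 0.0), 0.0), (0.0, 0.0, 1.0)))
```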

Premultiplication

Premultiplication controls the ordering in which multiple overlaid alpha values are calculated. Premultiplication values that are incorrectly set can cause unwanted artifacts or halo effects. In this example, there are problems with the unconverted premultiplication values that result in some artifacts on the colored background and around the edge of the figure. These are fixed by the use of the Convert Premul option. You can see the difference between the two rendered outputs in (reproduced in the color insert of the book).

Without and with the Convert Premul option set

c09f072.tif
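The difference between straight and premultiplied alpha can be made concrete with a short sketch. The premultiplied over operator expects foreground RGB that has already been multiplied by alpha; feeding it straight-alpha data (or the reverse) is exactly the mismatch that produces the halos Convert Premul repairs.

```python
def premultiply(pixel):
    """Multiply each color channel by the alpha channel (straight -> premul)."""
    r, g, b, a = pixel
    return (r * a, g * a, b * a, a)

def alpha_over_premul(fg, bg):
    """AlphaOver assuming the foreground RGB is already premultiplied:
    result = fg + bg * (1 - a)."""
    r, g, b, a = fg
    br, bgr, bb = bg
    return (r + br * (1 - a), g + bgr * (1 - a), b + bb * (1 - a))

straight = (1.0, 0.0, 0.0, 0.5)   # half-transparent red edge pixel
background = (0.0, 0.0, 1.0)

# Correct: convert to premultiplied first
print(alpha_over_premul(premultiply(straight), background))
# Mismatched: straight data fed to the premul formula comes out too bright,
# which shows up as a light halo around the figure's edges
print(alpha_over_premul(straight, background))
```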

Spill Correction and Cleaning Up

Checking the alpha overlaid foreground image over a colored background is also a good way to highlight the problem of color spill that often occurs when doing this kind of compositing. Because the subject is being shot originally against an expansive, bright-green background, there is a green tint to the reflected light from the background, as well as background light that shows through transparent or semitransparent areas of the subject such as hair. You can clearly see the undesirable green spill in , which is repeated in the color insert of the book. You can zoom in on the Node Editor’s backdrop image with Alt+V to see this edge detail and zoom back out with the V key.

Green spill

c09f073.tif

The solution to this problem is to adjust the color channels directly and to bring down the level of green in the image overall.

Tweaking Color Channels

To tweak the individual R, G, B, and A channels of an image directly, use a Separate RGBA Converter node, described previously, to split up the values, and then use another Converter node, Combine RGBA, to put the values back together as a completed color image.

In this case, you need to bring down the overall green level. However, there is a danger to bringing the level too low. Remember, the character is holding a green object. Adjusting the green channel affects the color of green things disproportionately, so an adjustment that might not make the image as a whole look unnatural may easily alter the color of the doll so that it no longer looks green. There must remain enough energy in the green channel that this object still looks green.

There’s no single right answer to how to do this, but a simple solution is to average the energy levels of all three channels and to use this average as the green channel. This significantly reduces the strength of the green channel but ensures that the green channel remains stronger than the other two channels in places that are predominantly green to begin with. This can be done by using two Math nodes set to Add and one Math node set to Divide, as shown in . You can create math nodes by pressing Shift+A and choosing Converter > Math from the Add menu. Collecting those nodes in a single node group and connecting the red and blue channels straight through yields the node setup shown in . The difference between the image before and after adjusting the green channel can be seen in , also repeated in the color insert of the book.

Calculating a new green channel

c09f074.tif

Red and blue channels connected

c09f075.tif

Renders with original and adjusted green channels

c09f076.tif
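The averaging trick corresponds to the two Add nodes and one Divide node in the compositor; written out directly per pixel, it looks like this:

```python
def despill_green(pixel):
    """Replace the green channel with the mean of all three channels.

    The green channel drops overall, but pixels that were predominantly
    green to begin with still end up with green as their strongest channel,
    so a green prop (like the doll) keeps its hue.
    """
    r, g, b = pixel
    new_g = (r + g + b) / 3.0   # two Add nodes and one Divide node
    return (r, new_g, b)

# A green-spilled pixel: green is reduced but remains the dominant channel
print(despill_green((0.3, 0.9, 0.3)))
```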

Finishing Off the Matte

You can use either the UV/Image Editor or the Nodes window itself to look at viewer output. By clicking the Backdrop option in the Nodes window header and selecting the View Alpha option, you can see the results of the composite directly in that window, as shown in . The V key and Alt+V hotkey combination, respectively, allow you to zoom this backdrop out and in.

Viewing the composite as a backdrop

c09f077.tif

You can use the composited image as the foreground to a more involved scene in the compositor by introducing a background, as shown in (and repeated in color in the book’s color insert). The image file is a sky map created by BlenderArtists.org member M@dcow and freely available at . You’ll find the sky_twighlight.jpg file among the downloadable files that accompany this book.

Composited video with a background image

c09f078.tif

There are a number of areas where a matte can be fine-tuned. You can add a Blur node from the Add menu under Filter > Blur. Blur nodes can be used to soften the edge of the matte. Another filter node, Dilate/Erode, can be used to expand or shrink the white area of the matte, which can be useful for tightening, or choking, the matte around the subject. Blurring can be used on the matte itself before using the matte to set the alpha value of the foreground image or later in the composite. The Blur and Dilate/Erode nodes are shown in .

Blur and Dilate/Erode nodes

c09f079.tif
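In one dimension, the effect of Dilate/Erode on a black-and-white matte can be sketched as a sliding minimum or maximum. This is illustrative only; the actual node works on 2D images and takes a signed distance parameter, where negative values erode (choke) the matte and positive values dilate it.

```python
def erode(matte, radius=1):
    """Shrink the white area: a pixel stays white only if every neighbour
    within `radius` is also white (sliding minimum)."""
    return [min(matte[max(0, i - radius):i + radius + 1])
            for i in range(len(matte))]

def dilate(matte, radius=1):
    """Grow the white area: a pixel turns white if any neighbour within
    `radius` is white (sliding maximum)."""
    return [max(matte[max(0, i - radius):i + radius + 1])
            for i in range(len(matte))]

row = [0.0, 1.0, 1.0, 1.0, 0.0]   # one row of a matte: a white bar
print(erode(row))    # the bar shrinks by one pixel on each side
print(dilate(erode(row)))   # dilating afterward restores the original extent
```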

When you have composited the foreground image into a specific background, you can then do some further color adjustments. In (and repeated in color in the book’s color insert), you can see the results of using a slightly purple screen over only the most brightly lit highlights of the foreground image. If you experiment with the nodes presented in this chapter, you should be able to work out how to do this. The actual node setup is included as an exercise in “The Bottom Line” section at the end of this chapter.

Composited image with further color adjustments

c09f080.tif

Another thing you can do is to export the video and use it as an alpha overlay in the Sequence Editor. You can see how to do this in Chapter 5, “Getting Flexible with Soft Bodies and Cloth.” If you render the image to an output and plan to use the alpha values, make sure that you render to a format that can represent alpha values, such as PNG (.png) or Targa (.tga). Also make sure that you have selected RGBA output in the Format tab of the Render buttons area, rather than the default RGB, which does not encode alpha information.

Muting Nodes

As you’ve worked with slightly larger node networks, you may have noticed that changing values, reconnecting nodes, or making other adjustments can take a while, because Blender must recalculate the effects of all the nodes each time a change is made. You can mitigate this problem by muting selected nodes when you don’t need them. You can mute and unmute a node by pressing the M key with your cursor over it. A muted node’s name appears in square brackets at the top of the node, and a red curved line is drawn across the face of a selected muted node.

Learning More about Compositing

This chapter has barely scratched the surface of what you can do with the Blender node system in terms of video compositing. There is a great deal more to learn about Blender’s compositing functionality, but it is even more important to study the fundamentals of how 2D images are composited with each other. Coupled with what you have learned here about working with nodes, a thorough grounding in compositing will give you all the knowledge you need to get exactly the effects you are after. For this information, there is no better place to look than Ron Brinkmann’s The Art and Science of Digital Compositing, Second Edition: Techniques for Visual Effects, Animation and Motion Graphics (Morgan Kaufmann, 2008). This is truly the bible of compositing and is a must for anybody serious about understanding what they are doing and why in compositing. There is also an excellent website full of information about the book and about compositing at . I highly recommend checking it out as you continue to deepen your skills in compositing with Blender.

The Bottom Line

Use the Blender composite node system to pull a green screen matte. When you know in advance that you will be compositing a character or object into a scene, a common technique is to shoot the original video of the foreground figure against a colored background screen. This makes it possible to eliminate the background quickly and easily by using color channels in the node compositor.
Master It Using a nodes setup based on the example in this chapter, add a background image from an image file to create a fully composited image such as the one shown in . You can use the sky_map.jpg file included on the CD.
Use the AnimAll add-on for garbage matting. Background clutter in the original footage can cause imperfections in the matte. To block these out, garbage matting is used. In cases when the garbage matte must interact with the silhouette of the foreground figure, some hand keying or rotoscoping may be necessary.
Master It Using a 3D curve animated with the AnimAll add-on, add a second garbage matte to the video example used in this chapter to cover the debris in the background of the right side of the frame.
Manipulate the video’s color channels to reduce color spill. Any time you composite footage shot under one set of lighting conditions with footage shot under another set of lighting conditions, color mismatches can occur. This is particularly the case when a green screen is used, which can cause a green cast or spill in the original footage. To eliminate this, it is possible to work directly with the color channels to adjust the mix of red, green, and blue energy in the image to better match the background.
Master It In the composited image you made in the first “Master It” exercise of this chapter, create a slightly purple screen that affects only the most brightly lit highlights of the foreground figure to attain results similar to those in .
