
Chapter 10

Advanced 3D/Video Compositing

Some of the most exciting recent advances in Blender have centered on the brand-new Clip Editor, whose development has been a central goal of the latest Blender Foundation Open Movie, Tears of Steel (formerly known as Project Mango). The Clip Editor includes advanced camera- and object-tracking functionality, as well as dedicated masking tools that greatly improve on the methods described in Chapter 9, “Compositing with Nodes.” Taken together, these capabilities have catapulted Blender into being a viable stand-alone VFX tool. For anyone interested in incorporating CG content into live-action video, these tools are vital.

In this chapter, you will learn to

- Track camera movement in the Movie Clip Editor
- Composite 3D content into camera-tracked video
- Use masking in the Clip Editor

Camera Tracking and the Movie Clip Editor

The newest addition to the suite of Blender Editor window types is the Movie Clip Editor (from here on "Clip Editor"), which enables a variety of intensive operations on moving video clips. In the Clip Editor, you can set up camera and object tracks, adjust the effect of camera distortion, and create masks that can be used in the compositor. Access the Clip Editor by selecting it from the window type header menu, as shown in Figure 10.1.

Figure 10.1: Accessing the Clip Editor

There's not much to see in the Clip Editor without a clip loaded, as you can see from Figure 10.2, so let's take a tour.

Figure 10.2: The Clip Editor
1. Click Open to load a video clip. Any video format ordinarily supported by Blender can be used. I've provided a reduced video clip in the form of a PNG file sequence, which you can find among the downloadable files for this book. The original video was filmed in 5K on a Red Epic camera and exported to OpenEXR format at over 12 MB per frame; because of bandwidth and storage requirements, I am not providing the originals on the publisher's website.
2. Load up the PNG sequence. It should look like Figure 10.3. The clip appears in the main window of the editor. The Tool shelf on the left and the Properties shelf on the right can be toggled with the T key and the N key, respectively, just as in other similar editors.

Figure 10.3: The Clip Editor with a movie clip loaded
3. In order to play the clip from the timeline play button, you need to set the timeline's Playback options to include Clip Editors, as shown in Figure 10.4. When you've done this, you can play the video by clicking the Play button on the timeline. When you play the file, Blender will try to cache the frames in order to access them more quickly later.

Figure 10.4: Setting the timeline's Playback to Clip Editor
4. At the bottom of the Clip Editor window is a dark blue strip that turns light blue as the frames are cached. In Figure 10.5, you can see that frames up to frame 88 have been cached. If the clip does not cache completely, you may not have enough memory allocated to Clip Editor caching; you can adjust this in the System section of the User Preferences.

Figure 10.5: Caching indicator
5. For this clip, the end frame is 195. Advance to this frame and simply press the E key to set it as the end frame (likewise, you can set the start frame by pressing the S key). If you prefer to script this setup, see the sketch that follows these steps.
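
Here is a minimal bpy sketch of the steps above, assuming a PNG sequence whose first frame sits at the placeholder path shown (substitute your own download location):

```python
import bpy

# Load the clip; for an image sequence, point at the first frame and
# Blender picks up the rest of the numbered files.
clip = bpy.data.movieclips.load("//footage/shot_0001.png")  # hypothetical path

# Match the scene's frame range to the clip, as in step 5.
scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = scene.frame_start + clip.frame_duration - 1
print(clip.name, clip.size[:], clip.frame_duration)
```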

Staying on Your Toes with Development
The functionality described in this chapter is brand new and under intensive development as part of the Mango Project Open Movie, Tears of Steel. I’m including it because it is absolutely vital for using Blender as a general tool for live-action VFX. However, in order to best describe these important developments, I am using the latest available graphicall.org builds at the time of writing. Because of this, there are almost certain to be small inconsistencies with how the software looks and functions when the official build of 2.64 is released, which may be a month or two off still. If you’ve gotten this far in this book, I’m sure you can handle this.

Camera Tracking from Video

In order to composite 3D content into a video like this one, it’s necessary to create an accurate representation of the camera’s movement within Blender. When you render out the 3D content using Blender’s virtual camera, the placement and rotation of the 3D content in the frame must match the placement and rotation of where the content should be in the live-action video.

This can be calculated from the relative 2D movement of points in the video that correspond to fixed points in the real 3D space. Specifically, the phenomenon of parallax is used: for the same lateral camera movement, points nearer to the camera move farther across the screen than points more distant from the camera.
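
To make the parallax intuition concrete, here is the standard pinhole-camera relationship (general math, not a Blender-specific formula), where f is the focal length:

```latex
% Pinhole projection: a point at lateral offset X and depth Z lands at
% screen position x; a lateral camera translation T_x shifts it by \Delta x.
x = \frac{f\,X}{Z}, \qquad \Delta x = \frac{f\,T_x}{Z}
```

Near points (small Z) give a large on-screen shift for the same camera move; distant points barely move, and it is exactly this difference that the solver exploits.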

I specifically mention lateral camera movement, because certain common camera movements do not have this characteristic of parallax. In particular, tripod pans, where the camera rotates at a fixed point, do not yield parallax information. Imagine standing in a park full of trees. Your friend hides behind a tree where you cannot see her. In order to catch a glimpse of your friend, you must move laterally. If you only rotate in place, you will not see your friend. In fact, if you had friends hiding from you behind every tree in the park, you would never find any of them by rotating like a camera on a tripod. Only lateral motion increases the spatial information available to you.

The same is true of camera tracking. In order to re-create a 3D environment automatically, the video provided must include parallax information, i.e., lateral movement. This isn’t a major problem. If you have a shot that is purely a tripod pan, you don’t actually need to do camera tracking. Simply compositing 3D content as though you were working with a still image will work fine. Blender can handle shots that have both lateral movement and panning, but it does not get the parallax information necessary to re-create the space from the panning movement.

The points used for tracking are represented in the Clip Editor as tracking markers. These are set by hand or automatically to correspond with recognizable points on the video. When I say “recognizable,” I mean points that Blender’s computer vision pattern-recognition algorithm can identify as being the same feature from frame to frame. Features are real-world things that the algorithm attempts to recognize. A bunch of black pixels surrounded by a field of yellow pixels in frame 35 is likely to be the same feature as a similar bunch of black pixels surrounded by yellow in frame 36, even if the specific pixels are different because the feature moved.

There are basically two ways to go about camera tracking in Blender. You can use automatic feature detection and then correct the many errors by hand, or you can do feature selection by hand, which should result in fewer errors from the start but will proceed more slowly to build up a sufficiently large feature set. Both methods can be painstaking, depending largely on the content of the video you’re trying to track. I will describe using hand-selected features.

Anatomy of a Good Feature

A good feature should be recognizable in 2D. That is, it should be composed of contrasting pixels in the image. It must also represent a specific, unique 3D point in the real world. For example, an intersection between two overhead cables might form a single point in a 2D image, but if the cables are separated in space, the intersection does not represent a unique 3D point and would not be a good feature. This kind of intuition and common sense is the advantage of selecting features by hand. Figure 10.6 shows several examples of good tracking points (on the left) that represent specific 3D points and bad tracking points (on the right) that will change with the camera movement because they do not correspond with a real 3D point. Specular highlights and curved surfaces can also be a source of bad features.

Figure 10.6: Good tracking points (left) and bad tracking points (right)

Tracking Markers and Track Settings

To track a particular point in the video, you need to place a tracking marker on that point. You place a tracking marker by pressing Ctrl and clicking the left mouse button on the point you want to track. Figure 10.7 shows a newly placed tracking marker. The two squares over the video frame are the tracking marker itself. The inner square is the pattern area, and the larger square is the search area. The pattern area represents the actual pattern that the algorithm will try to track from frame to frame; this pattern is displayed in the Track viewport in the upper portion of the Properties shelf on the right. The search area is the area within which the algorithm will look for the pattern when it moves from frame to frame.

Figure 10.7: Adding a tracking marker

On the left side of the figure, in the Tool shelf, you can see a panel called Tracking Settings. These determine the default settings applied to any newly placed markers. The Tracking Settings on the Tool shelf do not affect existing selected tracking markers.

Once a marker has been created, you can adjust the settings for that marker (when it is selected) by using the Tracking Settings panel on the Properties shelf, on the right side of the window. To repeat: The Tracking Settings on the Tool shelf do not affect existing selected tracking markers.

The Tracking Settings are as follows (a scripting sketch of these defaults appears after the list):

Pattern Size/Search Size These values determine the size of the pattern square and the search square when the tracking marker is created. Once the marker is created, its size and shape can be edited by hand and may change automatically in some tracking conditions.
Motion Model This selection sets the assumption the algorithm makes for how the pattern changes on the screen. As the camera changes position from frame to frame, the appearance of the pattern area on the screen changes. If the pattern is on a plane perpendicular to the view of the camera and the camera only moves laterally, only the pattern’s location on the screen will change. But if the camera rolls from side to side or moves in other ways, or if the pattern itself is on a plane that is not perpendicular to the camera, the pattern will rotate or change in other ways. A very powerful setting in this menu is the Perspective setting, which corrects for perspective changes for patterns on surfaces where the camera’s movement creates significant parallax.
Prepass This option activates a preprocessing step that slows down the tracking process but can greatly increase its accuracy and ability to track. For footage with motion blur this is particularly helpful.
Normalize The Normalize option adjusts for changes in light and shadow. For example, imagine a video where a tile on the floor is perfect for tracking; however, during the video an actor walks by and casts a shadow over it. Normalize is slower to calculate, but it will adjust for this change in value and allow the point still to be tracked.
Use Mask Enables the use of a mask during the tracking process itself. Masking portions of the video will prevent tracking in the masked-out areas. Later in this chapter, you will learn how to set up a mask that can be used in this way.
Correlation Determines how much similarity the algorithm requires from one frame to the next when trying to match tracking patterns. Lower correlation values will make the algorithm more robust but may reduce accuracy. If footage is blurry or movement is fast, lower values may be called for to force the tracker to try to continue tracking even when the pattern is hard to recognize.
Frames Limit This value explicitly sets the number of frames to track at a time. Normally, the tracker will attempt to track the whole footage to the end (or the beginning, depending on the direction the tracking is going). If the footage is difficult to track, it is sometimes better to track bit by bit in case the track gets confused.
Margin This is a value in pixels that sets how far from the edge of the frame a track should continue to try to follow a pattern. Tracks can sometimes get confused at the edge of the frame, particularly if the footage is difficult to track and correlation is set low. This value gives an explicit cutoff point for the algorithm to stop trying as it nears the edge of the frame.
Match At each frame, the algorithm searches for a pattern that matches what it has stored as the tracking pattern. What that pattern exactly is depends on your selection in this menu. The default is Keyframe, which sets a single pattern when you create or modify a tracking marker and attempts to match the subsequent frames to that original pattern. The other option is Previous Frame. With this selected the algorithm attempts, at each frame, to match the pattern most similar to the pattern that was matched in the previous frame. This option may be more robust in cases when motion blur is heavy or when gradual changes occur to the pattern over the course of the footage, but it can also make it easier for the track to lose its way and gradually drift away from the original pattern.
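
The settings above map onto properties in Blender's Python API, and the Tool-shelf defaults can be set in a script. A sketch follows; the property names are from the 2.6x bpy API (MovieTrackingSettings), so verify them against your build's API documentation, and treat the values as examples rather than recommendations:

```python
import bpy

# Tool-shelf defaults: these apply to markers created afterward,
# not to existing tracks (which carry the same properties per track).
settings = bpy.data.movieclips[0].tracking.settings
settings.default_pattern_size = 21             # Pattern Size
settings.default_search_size = 71              # Search Size
settings.default_motion_model = 'Perspective'  # Motion Model
settings.use_default_brute = True              # Prepass
settings.use_default_normalization = True      # Normalize
settings.default_correlation_min = 0.75        # Correlation
settings.default_frames_limit = 0              # Frames Limit (0 = no limit)
settings.default_margin = 0                    # Margin, in pixels
settings.default_pattern_match = 'KEYFRAME'    # Match (or 'PREV_FRAME')
```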

Tracking

To make the selected marker(s) begin tracking, click the right or left arrow on the Track panel on the Tool shelf. The right arrow makes the tracking progress forward; the left arrow makes it progress backward. There is no difference whatsoever in how the tracking happens. You can track forward or backward and from any point in the video sequence. If you start at frame 1, you can track the whole sequence forward. If you start at the last frame, you can track the whole sequence backward. You can start in the middle, track forward, and then return to the same start point and track backward. You can track forward from the beginning until the tracker fails, then track backward from the end until the tracker fails, and then work on the problem areas in between. Only selected markers are tracked, so select the markers you want (or toggle Select All with the A key) beforehand.

The smaller right and left arrows to the right and left of the bigger arrows enable you to track one frame at a time in either direction.

Clear After clears all tracked points for the selected track for all frames after the current frame. Clear Before clears the tracked points for all frames previous to the current frame. Clear clears all frames’ tracking information for the current track. Join combines two separate tracks. If the two tracks have overlapping tracking information, Join will use the median points between the two tracks for the overlapping portion. Join should be used to join two tracks that operated on the same pattern and represent the same point in space.
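
For reference, the buttons described above correspond to Clip Editor operators, which is useful if you ever want to batch tracking work in a script. This is a sketch only; the operators act on the selected tracks and need a Clip Editor context (run them with a context override for a Clip Editor area):

```python
import bpy

bpy.ops.clip.track_markers(backwards=False, sequence=True)  # big right arrow
bpy.ops.clip.track_markers(backwards=True, sequence=False)  # one frame back

# Clear Before / Clear After / Clear are one operator with an action flag.
bpy.ops.clip.clear_track_path(action='UPTO')      # Clear Before
bpy.ops.clip.clear_track_path(action='REMAINED')  # Clear After
bpy.ops.clip.clear_track_path(action='ALL')       # Clear

bpy.ops.clip.join_tracks()  # Join the selected tracks
```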

Getting Good Tracks

If you use the provided footage and set up a new track as described previously, you will probably find that your initial attempt at tracking does not go well. Figure 10.8 shows the first track I did on this footage, which failed after successfully tracking only six frames. Indeed, this footage is not the simplest to track. It has a number of challenging characteristics, probably the biggest one being the fairly quick movement of the camera in parts.

The quickest way to get a huge improvement in tracking is to activate Prepass, as shown in Figure 10.9. Don't forget to make this change on the Properties shelf to change the selected track. You might also want to change it on the Tool shelf, to ensure that Prepass is always checked for future tracks. In my work, I have so far not identified any situation where it's better not to have Prepass selected. I've seen some significant slowdown in tracking when dealing with 5K footage, but the slowdown is well worth the improvement in tracking.

Figure 10.8: A short failed track

Figure 10.9: Checking Prepass

With Prepass selected, you're almost guaranteed to get much better results with all other values the same. For the pattern I worked with in the example, I got a much better track. You can see a segment of the track in Figure 10.10. The current frame is where the track marker is, and the red and blue edges extending to the left and right represent previous and subsequent frames, which have all been tracked.

This track went well from the beginning of the footage until frame 107. Figure 10.11 shows the track's behavior on frames 107 and 108. Can you see what has happened here?

Figure 10.10: A segment of a successful track

Figure 10.11: The track's behavior at frames 107 and 108

The problem here is that two very similar looking patterns can be found on the screen within the search area. The algorithm gets confused and jumps away from the correct pattern (the banner on the left) to the incorrect pattern (the banner on the right). It is absolutely necessary that active tracks stick to the same point in space throughout their lifespan, so jumping from one point to another is no good.

In this case, a simple fix is to adjust the search area so that it no longer includes the competing pattern. The search area can be moved by dragging the box at its upper-left corner. Backing up to frame 107 and moving the search box as shown in Figure 10.12 should do the trick. After that, I can simply track forward again; the tracking will begin fresh from the current frame, overwriting the problem track from before.

Figure 10.12: Adjusting the search area

This time, the track manages to get as far as frame 117 before stopping again. There are a couple of things I could try to get this track to advance further. I could change the size or shape of the track pattern itself by dragging the four corners of the pattern marker. This creates a new keyframe pattern and is often effective at jumpstarting a stalled track.

I'm not going to do this here, though. Instead, I'll just jump to the last frame of the sequence and try tracking backward. When I advance to that frame, the marker is disabled. I place the marker and then enable it by clicking the check box beside Enabled in the Marker panel of the Properties shelf, as shown in Figure 10.13.

It turns out that the backward track happens to make it from the end of the footage all the way to frame 119, just one frame separated from the point where the previous tracking had failed. Figure 10.14 shows the situation on frames 119, 117, and 118. Note that the track marker is disabled on frame 118.

Figure 10.13: Enabling the track marker

Figure 10.14: The track at frames 119, 117, and 118

To connect these two track segments, I can simply go to frame 118, enable the track, and position the marker as shown in Figure 10.15.

Figure 10.15: Enabling the marker on frame 118

Using alternate motion models is also a powerful way to get accurate tracks. Figure 10.16 shows a feature from the street used as a tracking marker. Because the street is oblique to the camera, foreshortening and perspective cause the pattern to warp on the screen. Using the Perspective motion model, the shape of the pattern area is automatically updated from frame to frame to keep the tracking pattern as uniform as possible. In the figure, you can see frames 18 and 72. The pattern area has followed the correct shape of the feature.

There are several other important ways to optimize tracking. If your footage is moving fast and the pattern is getting out of the search area from frame to frame, try enlarging the search area. A larger pattern area can be helpful in adding context, but be careful that the pattern area does not contain background noise that changes from frame to frame. The algorithm will try to identify the entire pattern, so if only the middle of the pattern stays consistent, for example, the algorithm will fail. Try to enclose the biggest area possible that remains stable and visible throughout the footage.

The Enable check box is very important. If a trackable feature goes offscreen or is obscured by another object, you should disable the track for that feature for the frames where the feature is not visible and then enable the track again for the frames where the feature is visible. It’s okay to have a single track switch from disabled to enabled multiple times. When a track is disabled, it doesn’t matter where it is; it will not contribute to calculating the camera movement. But on frames where the track is enabled, it must be accurately placed, or it will increase the camera movement error.

Finally, if all else fails, you can place the marker by hand. The Blender camera tracker is extremely powerful, and with the right settings it will often get good tracks automatically even in quite adverse conditions. However, for some cases of extreme motion blur, or when a marker goes briefly out of view or is otherwise a problem for the algorithm, you may be able to identify the correct location yourself. In this case, you can simply enable the marker and drag it to the correct point. If you find yourself doing a lot of hand marker placement, though, you should experiment further with the automatic settings, because it’s likely that getting the settings right will save you some work.

Figure 10.16: Using the Perspective motion model
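
The Enabled check box maps to a per-frame mute flag on each marker, so enabling and disabling over a range of frames can also be scripted. A sketch, assuming a track named "Track" (hypothetical) on the loaded clip:

```python
import bpy

track = bpy.data.movieclips[0].tracking.tracks["Track"]  # hypothetical name

# Disable the track while its feature is hidden, e.g. frames 40-55;
# mute == True is equivalent to unchecking Enabled for that frame.
for frame in range(40, 56):
    marker = track.markers.find_frame(frame)
    if marker is not None:
        marker.mute = True
```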

Camera Solving

The whole point of setting tracking markers is to use their motion to reproduce the 3D space and the camera’s movement within the space. How many tracks are required to do a good job at this depends on your scene. I’ve found that for most shots I’ve tracked, somewhere in the neighborhood of 20 tracks is about right. The tracks should be good quality, and they should represent as much of the screen space and as much of the 3D space as possible. You should have tracks in the foreground and tracks in the background to give the algorithm sufficient parallax information to work with.

Keyframes

The only hard-and-fast rule about track counts is that you must have at least eight common tracks that are enabled in each of the two frames you have entered as keyframes in the Solve panel of the Tool shelf, shown in Figure 10.17. So what are these keyframes, and how should you choose them?

Figure 10.17: The Solve panel and Keyframe fields

In re-creating the space and the camera motion for the full clip, the Blender camera-solving algorithm bases its calculations on basic parallax information gained from comparing just two frames in the clip. You must set these frames by hand, and this can have a significant effect on the results of the camera-solver algorithm, so it's important to choose good keyframes. This is a bit of an art, because there's no hard-and-fast rule for choosing the best keyframes. Try to choose two frames that have a simple, lateral camera movement between them and that share at least eight tracks providing a good representation of foreground and background points. The keyframes shown in Figure 10.18 worked well for this clip.

Figure 10.18: Keyframes for parallax calculation

Note that rotation-only camera motion such as a tripod pan does not provide parallax information and cannot be used to re-create a 3D scene. You can get accurate fixed-camera rotation using the Tripod Motion option, but this will not build a 3D representation of the scene. When possible, you should always take extra footage of any scene you want to track with some lateral camera motion. You can build the 3D representation with the parallax of this extra footage.

Camera Information

The camera solver requires accurate information about the focal length and the sensor size of the real-world camera the footage was taken with. Blender has some built-in refinement capabilities, but this will usually not entirely take the place of knowing the specs of the camera and lens used to shoot the footage. There is a collection of presets with many of the best current video cameras included. The footage included was taken on a Red Epic camera, and I used these values to start with. I didn’t know the exact focal length, though, so I had to rely on Blender’s refinement functionality to get it right. In addition to specifying a Red Epic camera in the Clip Editor, I changed the 3D view’s camera to a Red Epic in the Properties window.
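
The same camera data can be entered through the Python API. A sketch follows, with a sensor width approximating the Red Epic preset and a focal length that is only an initial guess for the refinement step:

```python
import bpy

camera = bpy.data.movieclips[0].tracking.camera
camera.sensor_width = 27.7  # mm; approximate Red Epic 5K sensor width
camera.focal_length = 24.0  # mm; a guess, to be corrected by refinement
camera.pixel_aspect = 1.0
```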

Checking the Tracks

Before you try solving the camera, it's a good idea to give the markers a once-over with the best motion tracker in the world: your own good old human brain. You can do this by watching only the movement of the track points themselves. Mute the image by pressing the M key, and disable the display of patterns, search areas, and paths in the Properties shelf. Figure 10.19 shows the tracked footage with the image background, with the image muted, and with pattern, search, and path display disabled.

Play the footage like this, so you can see only the track points moving. You should notice two things: First, the 3D space should be implied by the movement of the points. The human visual system is incredibly good at reconstructing spaces and shapes from moving points, and if it looks like a mess to you, it’s going to look like a bigger mess to the camera solver. Second, problem tracks should pop out fairly obviously. Watch the footage a few times all the way through and pay attention. If a single track seems to slide in a strange direction or get stuck, then you need to look more closely at that track. Most likely, the track is coming loose from its feature and needs to be disabled for the frames where it’s not accurate, retracked, placed by hand, or deleted.

When you’ve done all this, click Camera Motion on the Solve panel of the Tool shelf. When you do this, the camera-solver algorithm will try to reconstruct the 3D scene and the camera motion. The result of the algorithm is the creation of 3D markers, which represent points in the 3D space that should correspond as closely as possible to the 2D tracking markers you placed.
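
The Camera Motion button corresponds to the solve_camera operator, and the resulting error values are readable from Python, which is handy for checking a solve at a glance. A sketch, assuming a Clip Editor context for the operator:

```python
import bpy

bpy.ops.clip.solve_camera()  # same as clicking Camera Motion

tracking = bpy.data.movieclips[0].tracking
print("average solve error:", tracking.reconstruction.average_error)

# Per-track errors point you at the worst offenders.
for track in tracking.tracks:
    if track.has_bundle:  # track was reconstructed as a 3D marker
        print(track.name, track.average_error)
```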

Troubleshooting the Solve

Viewed through the camera, the distance between a 3D marker and its corresponding 2D tracking marker in pixels is its solve error for a given frame. The average of the solve errors over all the frames for all the markers is the average solve error for the scene. This average solve error will appear in the header of the Clip Editor. You can look at solve errors for individual markers by checking Names and Status on the Display panel in the Properties shelf.

Figure 10.19: The footage with and without image and detailed marker display
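
Written out, the quantities just described look like this. If x(i,t) is your 2D tracking marker for track i at frame t, and the hatted term is the solved 3D marker reprojected through the solved camera at that frame, then:

```latex
e_{i,t} = \lVert \hat{x}_{i,t} - x_{i,t} \rVert \quad\text{(pixels)},
\qquad
\bar{e} = \frac{1}{N} \sum_{i,t} e_{i,t}
```

where the sum runs over all enabled markers on all frames, N is their count, and the average is the solve error shown in the Clip Editor header.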

Your average solve error for the scene ideally should be less than 0.5 pixels. A solve error greater than 3 pixels is unacceptable. A solve error between those values is dicey but may be usable depending on the situation. For one thing, the significance of a pixel width depends on the resolution of your footage. A pixel’s worth of error on 5K footage is less of a problem than a pixel’s worth of error on much smaller footage.

Higher solve errors are likely indicative of patches of total wrongness. In these cases, if you try to reconstruct the scene, you’ll see the camera jumping around haphazardly, and the reconstructed 3D scene will go haywire on certain frames.

Big solve errors can be the result of small problems. If your tracked scene looks okay, the problem is likely the camera settings. Try adjusting the settings, and choose Focal Length, Optical Center, K1, and K2 from the Refinement menu on the Tool shelf. The camera solver sometimes requires some jiggling to get the focal length estimated properly, so try setting the focal length at various values by hand and rerunning the camera solver.

Like bad apples, one bad track can spoil the camera solution for all the rest. The camera solver tries to make all the tracks make sense together. If one of the tracks is crazy (for example, if it's enabled when it should be disabled and is sliding across the space), this will have a corrupting effect on all the other tracks.

If 3D markers are set to display, they will show up in red if their error value is greater than 1 and green if it is less than 1. You can see where they are in relation to the 2D markers, as shown in Figure 10.20.

Figure 10.20: A 3D marker with an error of greater than 1

Take a look at the markers like this. If the reconstructed 3D marker is more accurately placed than the 2D marker with respect to the feature it’s supposed to be tracking, then you can move the 2D marker by hand to be closer to the 3D marker. Otherwise, you should fix other problems in the tracked scene to get a more accurate camera solution.

Another place to look for problems is the graph view of the Clip Editor, shown in Figure 10.21.

This shows you the speed of motion along the x- and y-axes of each 2D marker in red and green, respectively. It also shows a blue line that indicates the average solve error. Look for places where this blue line is high or where the red or green lines seem anomalous or more sudden than usual. These could indicate problems but not necessarily. If a marker goes from disabled to enabled, the change will show up as a sudden jump upward or downward, but this doesn’t indicate a problem.

Figure 10.21: The Clip Editor graph view

Another thing that can cause problems for a track is lens distortion. Although Blender does refine distortion parameters (the K1 and K2 values listed in the Refinement menu), it is sometimes helpful to tweak distortion by hand. You can do this by selecting Distortion mode in the Clip Editor header menu. In Distortion mode you can display a grid overlay like the one shown in Figure 10.22. As you can see, the grid is bulging outward in the figure, indicating a mismatch between the scene reconstruction and the footage's distortion. Adjust the K1, K2, and K3 parameters to straighten the grid as much as you can, and try running the camera solver again.

Figure 10.22: Grid display in Distortion mode
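
The K1, K2, and K3 values parameterize a polynomial radial distortion model. In its usual form (sketched here for intuition; Blender's implementation details may differ slightly), a point at radius r from the optical center maps to radius r':

```latex
% Polynomial radial distortion with three coefficients.
r' = r \left( 1 + K_1 r^2 + K_2 r^4 + K_3 r^6 \right)
```

Adjusting the coefficients by hand amounts to choosing values that make straight real-world lines render straight in the grid overlay.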

Once you have your scene tracked to a sane solve error, you can try to whittle it down to an optimal value. For the scene in the example, I was able to get the error down to about 1.5 pixels by tweaking values. As a last step, I simply deleted the tracks with the highest error rates, getting the total down to less than 0.5 pixels.
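
That last cleanup step can be scripted: select tracks whose average error exceeds a threshold, delete them, and re-solve. A sketch, assuming a Clip Editor context for the operators (there is also a Clean Tracks operator, bpy.ops.clip.clean_tracks, that automates a similar filter):

```python
import bpy

THRESHOLD = 1.0  # pixels; choose based on your footage and resolution

tracking = bpy.data.movieclips[0].tracking
for track in tracking.tracks:
    # Select only reconstructed tracks with a high reprojection error.
    track.select = track.has_bundle and track.average_error > THRESHOLD

bpy.ops.clip.delete_track()  # delete the selected tracks
bpy.ops.clip.solve_camera()  # re-solve with the remaining tracks
```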

Problems can creep in at any point in the process, so the first couple of tries tracking scenes might take a lot of tweaking. The tips in this section should help you get a well-tracked scene with a little effort.

Setting Up the Scene in 3D

The next thing to do is create the basic scene in 3D. This is very simple, because Blender has a built-in mechanism to do it for you. You'll find what you need on the Clip panel shown in Figure 10.23. With your camera selected, click Set As Background to put the clip into the 3D viewport background; then click Setup Tracking Scene to create a 3D scene that matches your tracked scene, complete with tracked camera, foreground and background 3D objects, and the appropriate composite node setup.

Figure 10.23: The Clip panel
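
Both Clip-panel buttons map to single operators, so the whole 3D handoff can be scripted once the solve is done. A sketch, assuming a Clip Editor context with the solved clip active:

```python
import bpy

bpy.ops.clip.set_viewport_background()  # Set As Background
bpy.ops.clip.setup_tracking_scene()     # Setup Tracking Scene
```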

Reconstruction Mode

Choose Reconstruction mode from the header menu to see more options for reconstructing the 3D space. You can set up some basic orientation references to start with. To set the floor plane, select three markers corresponding with points on the floor plane and click Floor in the Orientation panel, as shown in Figure 10.24. Likewise, you can set the origin (using a single marker) and the x- and y-axes. In the Geometry panel above, you can add a linked empty to a selected track and even create a mesh with vertices at the positions of selected tracks.

Figure 10.24: Setting the floor plane

Now go to the 3D viewport to take a look at the scene. It will look something like Figure 10.25 from outside the camera and Figure 10.26 from the camera view.

Figure 10.25: The 3D scene

Figure 10.26: Camera view

For simple scenes, most of what you need here has been set up automatically. If you poke around a bit, you’ll see that the plane and the cube are on separate render layers, called Background and Foreground, respectively. Review Chapter 4 of this book, “Rendering and Render Engines,” if you need to bone up on how render layers work.

In Figure 10.27 you can see that I've resized the cube and placed it directly on the floor plane, off to the side of the scene. This is an easy place to composite it, because it won't get concealed by anything during the course of the clip, so compositing it is a simple matter of dropping it on top of the scene.

Figure 10.27: Resizing and placing the cube in solid and wireframe view

The Node Setup

Go into the Node Editor window and switch to the compositing node tree view. The automatic node setup is shown in Figure 10.28.

As you can see, there are three main inputs: the movie clip itself; the Background render layer, from which AO and Shadow passes are extracted and multiplied to darken the image appropriately; and the Foreground render layer, which is given a slight motion blur and then laid over the whole image using Alpha Over. Rendering this setup as is results in an image like Figure 10.29.

Figure 10.28: The node setup

Figure 10.29: Rendering the automatic node setup

From here, of course, you can set up your own lights (or switch to the Cycles render engine and use image-based lighting as described in Chapter 4) to get things just right. But I'd rather not leave this image quite yet, because the cast shadow from the default lamp is a bit unsightly. It would be better in this case to start with environment lighting as the only light source and build from there, so I delete the point light from the 3D scene and disconnect the Shadow pass from the Multiply node to let the AO pass go through unaltered, as shown in Figure 10.30. The resulting render can be seen in Figure 10.31, and this is a good enough point to stop at. There's only so much you can do with the default cube, after all.

Figure 10.30: Snipping the connection with the Shadow pass

Figure 10.31: The final render lit only by environment lighting and AO

Masking in the Clip Editor

Masking is an important function of any compositing software, and now that Blender has entered the arena as a high-powered VFX tool, it's more crucial than ever that quality masking tools be available. In Chapter 9 you saw the old way to make masks. As I write this, the new Clip Editor masking functionality has been in the Blender code trunk for less than a week and won't be out in an official build until Blender 2.64, which is likely at least some months off, although it will probably be out by the time this book goes to print. Although there may still be times when the method of animating curves described in Chapter 9 will be useful, the new method described here is without question the right way to do masking for compositing.

Bear in mind that this is bleeding-edge functionality that has been neither officially released nor officially documented at the time of this writing, and as such it is likely to change in some details by the time it is released. What you see on your screen may differ slightly from what you see on the pages of this book. I’m including it because it’s important functionality and because I have confidence that if you’ve come as far as this chapter in your Blender work, you can adapt to any small differences.

Editing the Mask

To create a mask in the Clip Editor, you need to switch to Mask Editing mode using the drop-down menu in the header with the little face-mask icon, as shown in Figure 10.32.

Figure 10.32: Entering Mask Editing mode

Once you're in Mask Editing mode, the header will have a new drop-down menu for Mask datablocks, shown in Figure 10.33. As elsewhere in Blender, if no datablock exists, you can click New to create one. Do that now to create a new Mask datablock.

Figure 10.33: Create a new mask layer.

So now there is a mask, but it has no control points, so there's nothing to see. You can begin to add control points to draw the mask by holding Ctrl and left-clicking in the image window. Figure 10.34 shows the mask with two control points.

A mask is fundamentally a Bezier curve, but its controls are simplified for ease of use (at the expense of some kinds of detailed shape control). You'll notice that instead of two adjusting handles extending tangentially to the curve, each control point has only one handle extending perpendicularly from the curve. When you pull the handle point away from the control point, the curve becomes gentler; when you push it closer, the curve gets tighter. Pressing the V key brings up the Handle Type menu, which lets you choose from the default Aligned handle type, the Vector handle type for sharp corners, and the Auto handle type, which automatically generates gentle curves based on the mask shape. Figure 10.35 shows the finished mask, masking off the man's right leg.

Figure 10.34: Beginning to draw the mask

Figure 10.35: A finished mask

In the Spline panel on the Tool shelf, click Toggle Cyclic to connect the end points, as shown in Figure 10.36.

Figure 10.36: The mask set to be cyclic
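
Masks are ordinary datablocks, so a cyclic, filled spline like this one can also be built in Python. A sketch, with hypothetical point coordinates (masks use normalized 0-1 clip coordinates); in practice you would trace the shape over your own footage:

```python
import bpy

mask = bpy.data.masks.new(name="LegMask")  # hypothetical name
layer = mask.layers.new(name="Layer")
spline = layer.splines.new()

coords = [(0.45, 0.20), (0.50, 0.45), (0.47, 0.70), (0.42, 0.45)]
spline.points.add(len(coords) - 1)  # a new spline starts with one point
for point, co in zip(spline.points, coords):
    point.co = co
    point.handle_type = 'AUTO'  # gentle automatic curvature

spline.use_cycle = True  # the same setting as Toggle Cyclic
spline.use_fill = True   # treat the closed curve as filled
```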

On the Properties shelf you'll find several panels relating to mask properties. Mask Settings contains the start and end frames for the mask; Mask Layers gives you tools to manage multiple independent layers of masks, including adjusting each layer's opacity and determining how the layers blend together. Mask Display controls how the mask looks to you; in Figure 10.37, it's set to display as a dashed line. The Active Spline panel shows information about the mask curve, including whether it is cyclic and whether it will be treated as a filled curve. At the time of this writing, the fill is not displayed in the Mask Editor area, but it comes out in the final mask; if you don't have Fill checked, the final mask will be only the outline. The Active Point panel has settings for Handle Type (the same as the Handle Type menu accessed with the V key) and Parent, which you'll read about later in this section.

The shape of the mask can be keyframed by selecting the control points you want to keyframe and pressing the I key with the mouse over the window. Figure 10.38 shows three frames' worth of animated mask. A new mode called Mask can be found in the Dope Sheet window that enables you to work directly with Mask keyframes.

Figure 10.37: Mask Display settings

Figure 10.38: Keyframe animating the mask

Compositing with Masks

The whole point of creating masks is to use them for compositing, so it's to be expected that the new mask functionality is tightly integrated with the compositor. To use the mask you made in the compositor, you simply add a Mask input node, as shown in Figure 10.39. The output of the Mask node is a simple black-and-white mask that can be multiplied with any value to control the composite layers as you wish. Figure 10.40 shows the output of the Mask node displayed in the Viewer node.

Figure 10.39: Adding the Mask node to the composite setup
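
The node hookup can be scripted too. This sketch adds a Mask input node and multiplies its black-and-white output with another value, mirroring the figure; the mask name matches the earlier sketch and is otherwise hypothetical:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

mask_node = tree.nodes.new(type='CompositorNodeMask')
mask_node.mask = bpy.data.masks["LegMask"]  # hypothetical mask name

multiply = tree.nodes.new(type='CompositorNodeMath')
multiply.operation = 'MULTIPLY'

# Feed the mask in as one factor; wire the other factor and the output
# into your existing composite as needed.
tree.links.new(mask_node.outputs["Mask"], multiply.inputs[0])
```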

Other Mask Features

In addition to being much simpler and easier to work with than the old 3D curve method of creating masks, the new Mask tools have some nice additional features built in. Anti-Alias, Feather, and Motion Blur options are automatically included, and you can activate each of them in the compositor with just a click of a check box, as you can see in Figure 10.41.

Feathering, the degree to which the mask blurs out at the edge, does not need to be uniform around the mask. In fact, you can easily control the feathering distance at any or all points around the shape of the mask in the Mask Editor. Holding the Shift key and dragging your mouse outward from the edge of the mask creates feathering control points. If you do this at an existing control point, a uniform feathering distance is created, with separate feathering control points at each handle, as shown in Figure 10.42. If you drag out a feathering control point from the curve at a place where no control point exists, a feathering area is created between the two nearest control points, and the new feathering control point adjusts that feathering area.

Figure 10.40: The mask in the compositor

Figure 10.41: Feathering and motion blur

Once feathering control points have been created, you can adjust them individually, as shown in Figure 10.43, to control the sharpness of the mask edge at each point.

Figure 10.42: Adding manual feathering control

Figure 10.43: Controlling feathering sharpness by point

As I mentioned, the Mask functionality is brand new as I write this. The feathering controls have been around for only a matter of days or weeks, and for this reason they currently cannot be keyframed. I am sure that by the time this feature sees an official release, these feathering control points will be keyable like anything else.

One more feature that can greatly simplify the task of animating masks is the ability to parent a mask or any of its control points to a tracked marker. As a simple example, I'll create a new object track for the man's belt buckle. Although 3D object tracking is not covered in this book, the process is basically identical to camera tracking, and the underlying math is the same. To track a new object, enter Tracking mode in the Clip Editor and click the plus symbol next to the list on the Objects panel in the Properties shelf. I'll name this one Man, because I want to track the position of the man walking down the street. I then create a tracking marker and run the tracker just as I did for the camera tracks, as shown in Figure 10.44. Because the track was done for the Man object, there is no conflict with the tracks I created for the camera.

Figure 10.44: Object tracking the man's belt buckle

Now the control points of the mask can be parented to the tracking marker in a way analogous to the way 3D objects are parented to each other in 3D space. Simply select the control point(s) you want to parent, Shift-select the tracking marker you want to parent them to, and press Ctrl+P. To parent the whole mask, I pressed the A key to select all points; then I parented them to the belt buckle track, as shown in Figure 10.45. Advancing about 30 frames, you can see that the mask has followed the track nicely (Figure 10.46). Of course, there would still be a lot of editing to do to make the mask match the shape of the leg on the new frame, but there's less editing than would have been necessary without the parenting.

As you can see from the fairly simple examples in this chapter, Blender’s Clip Editor functionality represents a truly revolutionary advance in Blender’s usefulness as a VFX tool. Combined with the realism that can be obtained with the Cycles renderer and image-based lighting, this functionality will enable Blender to be effectively used in a whole new category of productions.

Figure 10.45: Parenting the mask to the tracking marker

Figure 10.46: The mask following the parent marker

The Bottom Line

Track camera movement in the Movie Clip Editor. Blender’s new camera-tracking functionality enables you to capture camera movement data directly from live-action video clips by placing tracking markers at key points in the 2D video and setting Blender loose to calculate the dimensions of the space and the rotation and location of the camera throughout the clip.
Master It Track the camera in a video of your own. For easiest tracking, try to use a video with lots of lateral camera movement and many high-contrast, motionless visible points to use as tracking markers. Also, be sure you have as much data about your camera and the field of view used for the video as possible. Use the documentation of your camera or Google to find out the sensor size and make a note of the zoom position you used. Try to get your track down to an error value of less than one pixel for sure and less than half a pixel if possible.
Composite 3D content into camera-tracked video. With a combination of camera tracking and masking you can create sophisticated composites of 3D content and live video.
Master It In the example in this chapter, you saw how a cube could be placed into a live-action video scene. I placed the cube on the side of the sidewalk closest to the storefronts to keep it from being passed by the man talking on the phone. Track and reconstruct the scene yourself, but this time place the cube on the other side of the sidewalk, next to the subway entrance. Use masking to enable the man walking by to obscure the cube as he passes.
Use masking in the Clip Editor. Blender’s new Clip Editor masking functionality has vastly improved on the previous approach of using a Curve object in the 3D space. Using masking, you can control exactly which portions of an image are used in a composite. You can even control the degree of feathering of a mask on a control-point level.
Master It Using the video clip provided for Chapter 9, redo the garbage-matting tutorial using the masking functionality of the Clip Editor.