Now that you’ve created a character and a simple environment for use in the game engine, the next thing you need to know is how to add user interactivity. The Blender Game Engine (BGE) has several ways in which interactivity can be programmed into a game environment. In this chapter, you will learn about Blender’s built-in game design tools: logic bricks.
Logic bricks have the advantage of using Blender’s built-in graphical user interface to implement interactive functionality. For nonprogrammers, this affords an easy way to get started with game creation without needing to know any programming. For greater scalability and versatility, Python can also be used, which is the topic of Chapter 16, “Python Power in the Blender Game Engine.” Using Python in the game engine also relies on logic bricks to a certain extent, so regardless of how you ultimately want to program your Blender games, you will need to be familiar with the logic brick system.
In this chapter, you will learn to
Logic bricks are the BGE’s way of determining how user input drives in-game events and how those events in turn trigger other events. Using logic bricks, you can make a character respond to your keyboard input, trigger a level change when a certain number of points have been collected, and even set up some basic artificial intelligence (AI) behavior for your character’s enemies.
In this chapter, you’ll also read about working with rigged characters, properties, and other functionality specific to interactive content and game creation. Only the very basics of working with rigid bodies and game physics are touched on here. If you are interested in setting up more-sophisticated rigid body interactions or in using BGE rigid body behaviors in your Blender animations, I recommend that you refer to Chapter 8, “Bullet Physics and the Blender Game Engine,” for more information on those topics.
The idea behind logic bricks is simple. Logic bricks enable you to associate cause-and-effect relationships and action-reaction behaviors with objects in the game environment. In this chapter, you’ll learn how to work with logical relationships that are built into the BGE logic bricks system. In Chapter 16 you’ll learn how to expand the logic to include Python scripts, leading to countless new possibilities for game design.
As mentioned previously, the logic brick system enables units of logic to be associated with 3D objects. Every logic brick is associated with a single 3D object. Logic bricks may affect more than one object’s behavior, however, and they may “communicate” with each other (as you will see later in the “Using Properties, Messages, and States” section). For this reason, there is not always a fixed, “correct” way to set up game logic. When you get a sense of how logic bricks work, you will develop intuitions about which objects should have which logic bricks associated with them.
After you have placed your character in the scene, you need to add user controls to enable the player to move the character around that scene. Because both the motion of the character and the defined actions can be controlled through the armature, the sensible way to set up these logic bricks is to associate them with the character’s Armature object.
Before starting on game work, be sure to select Blender Game from the Render Engine drop-down menu in the Info header at the top of your work area.
To get started with logic bricks, enter the Logic Editor window by selecting the joystick icon from the Editor Type menu, as shown in . Then select the Armature object in your scene (in Object mode), and you will see the panels shown in .
The leftmost portion of this area is where you'll set game properties, which are covered later in this chapter.
The remaining portion of the Logic buttons area is devoted to the logic bricks proper. Logic bricks fall into three distinct types, which are organized in columns from left to right. The logic brick types are as follows:
To set up the basic walking motion of the character, you’ll use all three of these logic brick types. To do this, follow these steps:
If you press P now, you should find that your character is able to walk around the scene. However, the vertical placement of the character does not change, so it may seem that your character is walking on air; furthermore, walls and obstacles do not put up any resistance. To remedy this, you must enable some dynamic properties on your character.
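Although the walk controls above are built entirely from logic bricks, it can help to see roughly what the same behavior looks like when driven from a Python controller, which is where Chapter 16 is headed. The following is only a sketch under stated assumptions: it supposes Blender 2.5+ with the bge module, a Python controller attached to the Armature object, and arbitrary step and turn sizes chosen purely for illustration.

```python
# Sketch of keyboard-driven walking from a Python controller on the Armature.
# Assumes the bge module (Blender 2.5+); the 0.1 and 0.05 values are arbitrary.
import bge

def walk():
    cont = bge.logic.getCurrentController()
    own = cont.owner  # the Armature object

    keyboard = bge.logic.keyboard
    active = bge.logic.KX_INPUT_ACTIVE

    if keyboard.events[bge.events.UPARROWKEY] == active:
        own.applyMovement([0.0, 0.1, 0.0], True)    # forward along local Y
    if keyboard.events[bge.events.DOWNARROWKEY] == active:
        own.applyMovement([0.0, -0.1, 0.0], True)   # backward
    if keyboard.events[bge.events.LEFTARROWKEY] == active:
        own.applyRotation([0.0, 0.0, 0.05], True)   # turn left around local Z
    if keyboard.events[bge.events.RIGHTARROWKEY] == active:
        own.applyRotation([0.0, 0.0, -0.05], True)  # turn right

walk()
```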
Game physics settings are found in the Physics properties area, in the same place that physics settings for fluid, smoke, and collision were found in Part II of this book. However, for setting game physics, the render engine must be set to Blender Game. If you haven’t set the render engine in this way, a different set of physics options will be available.
When you add an object to a scene, it is automatically created with certain basic physical properties, because the Physics button is pressed by default. This mimics the behavior of earlier Blender versions that did not have the capability to completely disable all physics effects. In this default state, an object will not move or be influenced by forces such as gravity; however, the object will act as an obstacle for other physical objects. With the Physics button deselected, there will be no interaction between the object and other objects; physical objects will pass through this object as though it is not there.
When Physics is selected, you then have the option to make the object an actor, which means that forces and interactions will be calculated with respect to this object and its surroundings. Dynamic objects are able to move in response to forces on them. If Actor is selected, the object is visible to the Near and Radar detection methods. The Ghost option prevents the object from being an obstacle and enables other objects to pass through it without resistance. These options are not mutually exclusive. The Ghost option is different from having no physics applied at all because a Ghost object can still be dynamic, whereas no dynamic behavior is possible if Physics is fully disabled.
If the object is set to be dynamic, then other parameters related to its mass and collision boundaries also can be set. Finally, if the object is dynamic, it becomes possible to set the object as a Rigid Body object. This means that not only will it move in space in response to directional forces, but it will also have a full complement of angular rigid body forces calculated, resulting in a much more realistic tumbling and bouncing motion. In the case of rigid body simulation, the type of collision boundary selected also will have a considerable impact on the behavior of the object. You can read more about this in Chapter 8.
In the present case, rigid body dynamics are not necessary for the character. The character should settle on the floor and treat walls as obstacles, but for the present game the character doesn’t need to be able to tumble and roll around (which would make it difficult to control with the simple walking motion you’ve set up so far). For this reason, it is enough to make the character dynamic only, without rigid body physics. To set this up, follow these steps:
If you press P while you are in the Camera view, or if you create a free-standing executable game using the game engine, the game will be displayed from the point of view of the active camera in the scene. There are several ways to ensure that the camera stays focused on your character. The simplest is to parent the camera to the armature with the camera pointing at the armature. If you do this, the camera’s motion will be controlled directly by every movement of the armature. This is okay for some purposes, but the resulting camera movement is very stiff and the effect can be unnatural. For many 3D games, it is better to have a camera that follows the character loosely, generally staying at a set distance but responding more flexibly to the character’s movements than a parented camera would. This is what the Camera actuator is for.
The Camera actuator is a logic brick that can be set on a Camera object, as shown in . In this example, I use an Always sensor so that the Camera actuator is always active. The Camera actuator itself has a field for the name of the object the camera should point toward, in this case Armature, as well as fields for the height it should try to maintain, the minimum distance it should be allowed to come to the object, and the maximum distance it should be allowed to drift away from the object. Setting these values as shown in the figure results in smoother and more natural-looking camera behavior than you would get by simply parenting the objects.
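The same actuator can also be configured or retargeted at run time from a Python controller. The snippet below is a sketch only: the actuator name "FollowCam" and the numeric values are assumptions, and the attribute names come from the 2.5x KX_CameraActuator API.

```python
# Sketch: adjusting a Camera actuator's follow parameters from a Python
# controller on the Camera object. "FollowCam" is a hypothetical actuator name.
import bge

cont = bge.logic.getCurrentController()
cam_act = cont.actuators["FollowCam"]

cam_act.object = bge.logic.getCurrentScene().objects["Armature"]  # object to track
cam_act.height = 2.0   # preferred height above the target (illustrative value)
cam_act.min = 4.0      # closest the camera may come to the target
cam_act.max = 8.0      # farthest the camera may drift from the target

cont.activate(cam_act)
```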
In addition to sensors, controllers, and actuators, other features help make the Blender logic brick system a powerful programming language in itself. These include properties, messages, and the state system, which grew out of the Yo, Frankie! open game project. Properties enable you to store and change values in the game engine environment; they serve a purpose directly analogous to variables in an ordinary programming language. Messages provide another way for logic bricks to communicate with other logic bricks, even when they are not directly connected. They are useful for synchronization, or for cases in which a logic brick should have an effect that is broadly recognized by other logic bricks. States enable a kind of meta-level of control over sets of logic bricks and can be used to enable or disable whole collections of logic bricks at once.
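To make the analogy to variables concrete, here is a minimal Python sketch of the same two ideas. It assumes a Python controller on an object that carries an integer game property; the property name score and the message subject are hypothetical examples, not names used elsewhere in this chapter.

```python
# Minimal sketch: game properties behave like variables, and messages can be
# broadcast to any object with a Message sensor listening for the subject.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

own["score"] += 1        # read and update a game property like a variable

if own["score"] >= 8:
    # Broadcast a message; every Message sensor with this subject will fire.
    bge.logic.sendMessage("all_cones_collected")
```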
The game you’re putting together in this chapter is a simple maze game in which the goal is to collect Cone objects while avoiding evil wizard enemies. This section describes a simple setup for the bad guys that will provide a challenge to navigate without being too confusing to set up.
It is loosely based on a much more complex tutorial example provided by BlenderArtists.org user Mmph! His example uses a large number of logic bricks to create a rudimentary but convincing form of AI (artificial intelligence) and is very much worth checking out at . The method described in those tutorials is quite clever; however, it pushes the boundaries of what is advisable to do with logic bricks. For effects as sophisticated as AI, Python scripting is probably the least-cluttered and easiest way to work. Nevertheless, the simplified bad-guy movement logic presented here should give you a clear idea of how properties work and how they can be used to control characters’ behavior.
For the following tutorial, you can either append the BadGuy object to your own scene or use the badguy_nologic.blend file itself to follow the tutorial. The end result of the following steps can be found in the file badguy_logic.blend. The initial setup looks as shown in .
The path nodes are simply red cubes that have been resized to be about the height of the bad-guy character. As usual in the game engine, do your resizing in Edit mode and leave the object scale at 1. In the Mesh buttons, I’ve applied an empty UV texture to make the cubes shadeless in the game engine, and I’ve colored them red using vertex painting. I suggest you create one node first and add the logic described in the next section, and then copy it three times. Logic bricks are copied along with objects, so this approach will save you from having to set up the logic for each path node individually.
The path nodes will be used to guide the bad guys’ movements around the board. At any given time, each bad-guy character will be set to track to a single node and move in the direction of that node. When the bad guy runs into a node, it will then switch to tracking toward the next node in the path. When the bad guy hits the last node, it will track back to the first node, completing a cycle around the course.
You need to set up some physical characteristics. Specifically, the path nodes should be set as static Ghost actors, as shown in . This will enable them to be passed through by other objects but also ensure that their collision boundaries are calculated when necessary.
As shown in , the path node uses two properties. You can add these by clicking Add Game Property. Set the property type by using the drop-down menu for the property, and fill in the name and start value of the property in the appropriate fields. The info icon button to the right of each property toggles in-game display of that property on and off. If this button is enabled, the property’s value will be displayed in-game when the Show Debug Properties option is chosen from the Game menu on the User Preferences header.
The top property shown is named visible. It is an integer; therefore, Integer is selected from the property type drop-down menu. Its start value is 0. This property will be used to toggle the visibility of the path nodes in-game, in case this is necessary for troubleshooting. The second property is called pathnode. This property is used only to identify that the object is a path node. The property’s value will never be checked, only whether the object has this property. For this reason, it is okay to leave the type as the default float type, and the default value of 0.000.
The logic to toggle the visibility of the path nodes is shown in . The first sensor is a Keyboard sensor that responds to the I key. When the I key is pressed, it triggers a Property actuator. Property actuators come in several flavors, including Assign, Add, and Copy: Assign assigns an arbitrary value to a property, Add increments or decrements a property’s value, and Copy copies a value from another property to the target property. In this case, you’ll use the Add option with a value of +1, so that each press of the I key increments the value of visible by one.
The remaining sensors are all Property sensors, meaning that they trigger events whenever the relevant property has a specific value. The first one represents the case when the visible value is 0. It connects to a Visibility actuator set to Invisible. So by default, the path nodes will be invisible, because the start value of visible is 0. The next sensor down represents the case in which the value of visible is 1. This triggers a Visibility actuator set to Visible. Finally, the Property sensor for the case when the value is 2 triggers another Property actuator, this time an Assign actuator, which assigns a 0 value to the property, thus resetting the toggle.
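The same toggle can be collapsed into a single Python controller, which also removes the need for the third "reset" case. This is a sketch under assumptions: a Keyboard sensor for the I key named ikey is wired to the controller, and the controller sits on a path-node object carrying the integer property visible.

```python
# Sketch: the path-node visibility toggle as one Python controller.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

if cont.sensors["ikey"].positive:
    # In Python the extra "value 2 -> reset" case isn't needed; flip 0 <-> 1.
    own["visible"] = 1 - own["visible"]

own.visible = bool(own["visible"])   # 0 -> invisible (the default), 1 -> visible
```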
After you’ve set up the logic on the first path node, copy the object three times and place the copies around the maze as shown in . Name the objects 1, 2, 3, and 4 in the order in which they are positioned along the path.
To keep things reasonably uncluttered, I’ll describe the bad-guy logic in several steps. The first thing to do is to set up the necessary physics and properties, as shown in . Like the main character, the bad guy will be a dynamic actor, but without rigid body physics. It has four game properties. The targ and inc properties are integers: targ represents the path node that the bad guy is presently moving toward, and inc will eventually be used to determine whether the bad guy traverses the path nodes in incrementing or decrementing order, thus moving clockwise or counterclockwise around the path. The wizard property will enable other game objects, particularly the main character, to identify the object as a bad-guy wizard. Finally, the bump property will be used to determine when a collision should make the bad guy reverse its direction.
The basic move logic for the bad guy in the forward (incrementing) direction is shown in Figures 15.24, 15.25, 15.26, and 15.27. These four figures all show the same logic but with different bricks open for viewing. In , you can see the Always sensor, which is connected to the Motion actuator. This ensures that the bad-guy wizards continue moving forward at all times.
In , the sensor is activated if the targ property’s value is 1 and triggers an Edit Object actuator with the Track To option selected. The target object in the OB field is 1. This means that while the value of targ is 1, the bad guy will continue to aim (and move) in the direction of the object called 1. Analogous logic bricks must be added, as shown, to correspond with cases when the targ value is equal to 2, 3, and 4.
shows the means by which the targ value itself is incremented. A Ray sensor is used to determine whether the BadGuy object has struck a node. If so, a Property actuator of type Add increments the targ value by one (+1). When the targ value reaches 5, a Property logic brick of the Assign type is used to assign the value of 1 to the targ property, resetting it.
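If you later move this behavior to Python (Chapter 16), the whole track-and-advance cycle collapses into a few lines. The sketch below is an illustration under assumptions: it supposes the script runs every frame on the BadGuy object via an Always sensor with true pulse enabled, that the path nodes are named 1 through 4 as above, and it substitutes a simple distance check (0.5 units, an arbitrary value) for the Ray-based hit test used by the brick version.

```python
# Sketch: follow the current path node and advance targ when it is reached.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner
scene = bge.logic.getCurrentScene()

target = scene.objects[str(own["targ"])]       # current node, e.g. the object "1"

# Aim the local Y axis at the node, keep Z up, and keep moving forward.
distance, world_vec, local_vec = own.getVectTo(target)
own.alignAxisToVect(world_vec, 1)              # axis 1 = local +Y
own.alignAxisToVect([0.0, 0.0, 1.0], 2)        # axis 2 = local +Z
own.applyMovement([0.0, 0.05, 0.0], True)      # illustrative speed

# When close enough, advance to the next node, wrapping 4 back around to 1.
if distance < 0.5:
    own["targ"] = own["targ"] % 4 + 1
```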
The logic for this is shown in .
The final bad-guy logic will toggle the clockwise/counterclockwise direction of the bad guys by incrementing the inc value. This will happen whenever the bad guy runs into an object with a bump property (this of course includes other bad-guy objects). Because they are dynamic objects, the bad guys could bump against each other and be thrown off course or could be pushed into a position where they cannot reach the next path node. In this case, it is desirable to add a random switch so that from time to time they change directions arbitrarily. In this way, they are much more likely to free themselves if they become jammed somewhere, and it also leads to less-predictable patterns of movement.
To do this, you add logic to the Empty object as shown in . This logic has a Random sensor timed to emit once every 100 frames on average. It triggers a Message actuator with the subject line switch. In turn, the sensor shown in will fire when this message is emitted. In the final bad-guy motion logic, this random message triggers an increment or decrement in the targ value, depending on whether the bad guy is already moving in an incrementing or decrementing direction.
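For reference, the broadcast-and-receive pair could also be written as two small Python functions, one on the Empty and one on the bad guy. This is a hedged sketch: the sensor names rand and msg are assumptions, and it presumes Python controllers in Module mode pointing at the respective functions.

```python
# Sketch of the random direction switch split across two objects.
import bge

def broadcast_switch():
    """Runs on the Empty: occasionally tell everyone to change direction."""
    cont = bge.logic.getCurrentController()
    if cont.sensors["rand"].positive:          # Random sensor named "rand"
        bge.logic.sendMessage("switch")

def receive_switch():
    """Runs on a BadGuy: flip between incrementing and decrementing travel."""
    cont = bge.logic.getCurrentController()
    own = cont.owner
    if cont.sensors["msg"].positive:           # Message sensor, subject "switch"
        own["inc"] = 1 - own["inc"]
```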
The full bad-guy motion logic is shown in . It is very much like the logic explained previously, except that logic is added for the cases in which the bad guys are traveling in the reverse (decrementing) direction around the path. There are too many logic bricks here to show them all unfolded, and furthermore, as you can see, the logic bricks here are already pushing the limits of what logical connections can be easily understood at a glance. I recommend that you take a close look at the file game.blend on the website for this book if you need further insight into how this logic works.
Note that if you watch the bad guys in action from the top view, the weaknesses of the very rudimentary path-finding algorithm used here quickly become apparent. The bad guys occasionally get stuck, and their movement becomes predictable. Nevertheless, the basic techniques described in this section will enable you to implement much more robust and sophisticated path-finding algorithms of your own.
If you open the game.blend file on the website for this book, you’ll see the final game setup along the lines of . (The color insert in this book shows these figures in color.) I’ve placed three BadGuy objects on the path, one of them going the opposite direction from the other two (that is to say, one of them has an initial inc property value of 1, whereas the others have a value of 0). You’ll also see eight yellow cones and three green balls.
The object of the game is to collect the cones without hitting a wizard. When you collect all eight cones, you’ll see a congratulatory message, and the game will end. If you hit a wizard, the game will end and the message won’t be so congratulatory. The green balls make you bounce up and down, enabling you to get a view of the layout of the maze and the locations of the cones, and also protect you from the wizards. Of course, the effect of the green balls is temporary.
To set up these game-play features, you need to add logic to the cones and balls and add a bit more logic to the Empty object. The temporary protection you get from picking up the green balls will require the use of states in the character logic.
The logic associated with the Cone object is simple, as you can see in . A Near sensor with a distance of 0.25 is used to determine when the character is close enough to touch the cone; the character is recognized by its char property. When the sensor’s conditions are met, two actuators are activated. The first is an Edit Object actuator set to End Object, which removes the Cone object from the scene. The second is a Message actuator, which broadcasts a message with cone_touch as its subject. This message will be received by the Empty object, incrementing the point counter on the Empty, as shown in .
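The cone’s behavior is compact enough that an equivalent Python controller is only a few lines. A minimal sketch, assuming a Near sensor named near (distance 0.25, property filter char) is wired to a Python controller on the Cone:

```python
# Sketch of the cone-pickup logic in Python.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

if cont.sensors["near"].positive:
    bge.logic.sendMessage("cone_touch")   # the Empty's point counter listens for this
    own.endObject()                       # remove the cone from the scene
```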
The logic on the ball is almost the same as the logic on the cone, as you can see in . The only difference is the subject of the message that gets sent. In this case, the subject is ball_touch. This will be received by the armature and will initiate the special protected state as described in the next section.
You’ve already seen the States panel in the Controllers column of the Logic buttons, but so far you haven’t made any use of it. States determine which controllers are active at any given time. Each controller and its connected sensors and actuators are associated with one state. Any combination of the 30 possible states can be active at any given time. States can be activated or deactivated by using a State actuator.
To see the States panel, click the small, round plus icon to the left of the controller logic brick. In , you can see the States panel. Each of the 30 light-gray squares represents a state. Darkened states such as the upper-left state in the figure are selected, meaning that their logic is currently visible. States can be selected or deselected analogously to layers in the 3D window, using Shift+LMB to select multiple states. States with a dot in them are states that have associated logic bricks. In the figure, the leftmost three states in the upper row all have logic bricks associated with them. The black dots represent states that are part of the initial state mask. That is, these are the states that will be active when the game begins. You can select the current initial states by clicking the Ini button to the right of the states, and you can select all states by clicking the All button. You can set the currently selected mask to be the default state mask by selecting Store Init State.
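Under the hood, an object’s state mask is a set of bit flags, which is also how you manipulate it from Python. The following sketch assumes a Python controller on the object whose states you want to change; the constants KX_STATE1 through KX_STATE30 correspond to the 30 squares in the panel.

```python
# Sketch: reading and changing an object's state mask from Python.
import bge

own = bge.logic.getCurrentController().owner

own.state = bge.logic.KX_STATE1           # make state 1 the only active state
own.state |= bge.logic.KX_STATE3          # add state 3 (like a State actuator's Add)
own.state &= ~bge.logic.KX_STATE2         # subtract state 2

if own.state & bge.logic.KX_STATE3:
    print("state 3 is currently active")
```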
To get the temporary protection effect from the green balls, three states will be used. The first state will contain the main logic described previously for the character. It will also contain the logic for what to do when a green ball is touched. The first state will be active at initialization. When the green ball is touched, the second state is deactivated (subtracted) and the third state is activated (added), as shown in .
The second state, shown in , will contain the logic for ending the game when the character hits a wizard. This state will also be active on initialization, because this is the default behavior. When this state is deactivated, the character can collide with the wizards without ending the game.
The third state will contain an Always sensor connected to a Motion actuator that makes the character bounce up and down for as long as the state is active. It also contains a Delay sensor set to 1,000 frames that will connect to two State actuators: one that reactivates the second state and one that deactivates its own state, state 3. The state 3 logic is shown in .
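For comparison, here is a loose Python sketch of state 3’s behavior. It is illustrative only: it assumes an Always sensor (true pulse) named always and a Delay sensor named delay are wired to a Python controller assigned to state 3 on the Armature, and the upward force value is arbitrary.

```python
# Sketch of state 3: push upward while active, then restore state 2 and leave.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

if cont.sensors["always"].positive:
    own.applyForce([0.0, 0.0, 200.0], False)   # illustrative upward push

if cont.sensors["delay"].positive:
    # Re-enable the "hit by wizard" logic (state 2) and deactivate state 3.
    own.state = (own.state | bge.logic.KX_STATE2) & ~bge.logic.KX_STATE3
```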
By selecting all three states, you can view all the logic for all three states. This can also be done by deselecting the State visualization filter at the top of each logic brick column. With all the states visible, the logic for the character looks like what’s shown in .
There are several useful techniques for creating special effects with textures that are not at all obvious to a casual user of Blender. This section describes two of the main ones: the use of textures to create dynamic text, and animated textures.
You can use 3D text in the Blender game engine, but you have to convert it to a mesh in advance with Alt+C in Object mode. If you do this, be sure that the normals are pointing in the correct direction. Otherwise, your text, like any mesh face, will not be visible in the game engine by default. This may be a reasonable way to add text to your game. However, if you need your text to be dynamic, that is, if you want to be able to assign string values to the text on the fly as the game is played, mesh text will not work. For this, you need to use a texture.
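Once the font texture and text-enabled face are set up (as described in the rest of this section), making the text dynamic amounts to writing a string into a game property: the BGE displays the value of a string property named Text on that face. A minimal sketch, in which the score property is a hypothetical example:

```python
# Sketch: updating texture-based text at run time from a Python controller on
# the text object. The "score" property is hypothetical; "Text" is the property
# the BGE reads for the text face.
import bge

own = bge.logic.getCurrentController().owner
own["Text"] = "Cones: %d" % own.get("score", 0)
```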
To use texture-based text, you must have the font you want to use in a correctly formatted image texture. You can use the file arialbd.tga on the website for this book. Alternately, you can use any TrueType font to create your own texture file via the FTBlender application that can also be found on the website.
Preparing the font is simple when using FTBlender on Windows. Simply unzip the file FTBlender.zip from the website. The directory you create will contain a file called ftblender.exe and another file called ftpblender.blend. Place the TrueType font file in the same directory as these two files, as shown in .
Open the ftpblender.blend file by double-clicking, and execute the Python script in that file with Alt+P. The ftblender.exe program will automatically be called and will create the appropriate layout for the font image file. Press F2 to save the image file as a Targa file, as shown in .
To use a textured face for dynamic text, follow these steps:
The wizard bad guys in the game have a distinctive visual effect, as you can see by looking at or by running the game from the website for this book: between their hands is an animated arc of electricity. This is accomplished by using an animated texture on a single face.
You’ve already seen how to create UV-mapped textures for the game engine and other purposes. It is important to realize that the same model can have more than one UV map associated with different portions of its mesh. When you do UV unwrapping with the E key in the UV/Image Editor, only selected faces are unwrapped and included in the mapping. In , the bad-guy body mesh is selected in Edit mode. The entire mesh except for the polygon face between the wizard’s hands is selected. The resulting UV unwrapping, along with its texture, is shown in .
An animated texture in the Blender game engine works by encoding several individual frames of animation in the same image. In the current example, the frames are positioned side by side in the texture-mapped image. The final animated texture consists of 10 frames. A single frame of the “electricity” animation is shown in . Nine other similar images were created in GIMP and saved as PNG files with alpha values (the checkerboard background represents transparency). The images are each 420 pixels wide.
To animate these in the game engine, you need to create a new image file in which all 10 of the frame images are positioned side by side. They must be positioned exactly, down to a single pixel width, and so the final animated texture file will be 4200 pixels wide. In , the model is shown with the single face selected and active.
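As an alternative (or simply to see what the frame arithmetic amounts to), the same effect can be driven by scrolling the arc face’s UVs from Python: with 10 frames side by side, each frame is 1/10 of the strip in UV space. The sketch below makes several assumptions: that the arc face has its own material at mesh material index 1, that a true-pulse sensor drives the controller every few logic tics, and that the frame and base_u properties are hypothetical bookkeeping values created here.

```python
# Sketch: stepping through the 10 side-by-side frames by shifting the face UVs.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner

FRAME_WIDTH = 1.0 / 10      # each 420 px frame of the 4200 px strip
MATID = 1                   # assumed material index of the electricity face

mesh = own.meshes[0]
count = mesh.getVertexArrayLength(MATID)

# Remember the original (frame 0) UVs the first time this runs.
if "base_u" not in own:
    own["base_u"] = [mesh.getVertex(MATID, i).u for i in range(count)]
    own["frame"] = 0

own["frame"] = (own["frame"] + 1) % 10

# Shift every vertex of the arc face sideways by whole frame widths.
for i in range(count):
    mesh.getVertex(MATID, i).u = own["base_u"][i] + own["frame"] * FRAME_WIDTH
```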
After you have all this set up, your animated texture will come to life when you press P to start the game engine.
No game-creation tool would be complete without some way to incorporate interactive sound effects. Blender offers powerful options for working with sound. In this section, you will learn the basics of how to add a 3D sound effect.
To set up a simple 3D sound effect, follow these steps:
You’re all finished setting up the 3D sound. When you run the BGE now, you will find that the sound’s volume is dependent on the proximity of the Cube object to the camera. Be sure that you are in the Camera view (press 0 on the number pad) when you enter the game play mode.
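If you later want the sound to be triggered from a script rather than wired directly from sensor to actuator, the controller only needs to activate the Sound actuator. A sketch, assuming a sensor named near and a Sound actuator named beep on the Cube object (the 3D attenuation itself is configured on the actuator, not in the script):

```python
# Sketch: triggering a Sound actuator from a Python controller.
import bge

cont = bge.logic.getCurrentController()
sound = cont.actuators["beep"]

if cont.sensors["near"].positive:
    cont.activate(sound)
else:
    cont.deactivate(sound)
```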
You now know the basics of working with sound. Experimenting with the options available on the Sound actuator will deepen your knowledge. In Chapter 16 you will learn about accessing sound via the Python GameLogic API.