Tools and Content Creation

To create a video game, most programmers use a game engine.

A game engine is a software framework designed for the creation and development of video games. The core functionality typically provided by a game engine includes a rendering engine (“renderer”) for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, threading, localization support and a scene graph, and may include video support for cinematics.


In the same way that programmers use game engines to code faster and focus on the game itself, they can use tools and toolchains. Most AI design tools use simple techniques such as predefined movements and pathfinding, decision trees or state machines.

Unity provides a dedicated tool for building your own state machines, which makes creating behaviours easier :


Official Unity tutorial on State machine

Toolchains enable content creation to be standardised (models, textures, sounds, levels, animations, etc.). They allow designers to modify data rather than the code itself.

However, tools and toolchains have their limits: the underlying code cannot be easily edited, a single character may still require specific programming, and this limits level creation speed and character reuse.

Here is an example of middleware : RAIN AI for Unity

RAIN is a free middleware that enables developers, designers, and artists to create intelligent and interactive characters for their game.

Here is a quick overview :

RAIN includes many features, such as :

  • Movement and pathfinding (and behaviour) : gives you the option of using either waypoints to create a path or a navigation mesh system for detailed movement. Moreover, a built-in behaviour tree editor can help you attach a behaviour to your character, with movements such as patrolling, wandering, hiding, attacking, chasing…
  • Behaviour & Animation : RAIN’s behaviour tree editor brings characters to life, allowing you to quickly and easily add movement, animation, sound effects and sensory awareness. RAIN also supports built-in integration with Unity’s state machines.


  • Perception : RAIN includes perception features that make it easy to bring your character to life. Through its perception, the character becomes aware of environmental events such as nearby footsteps, raindrops and even injured allies.


  • Performance : RAIN also provides a tool to check game performance.


Additional information can be found on the official RAIN website or on RAIN’s wiki.

References :

Here is a shortlist of middleware :

All RAIN pictures are from : Official rain website and RAIN AI Engine preview

Rain’s wiki.

Official Unity tutorial on State machine

Tower Defense AI : Designing game AI & Knowledge representation

Designing game AI

Pathfinding in Tower Defense game

Enemy movement is the key element of a Tower Defense game: you can make a whole game with nothing but enemies moving towards their goal.

Since I already covered pathfinding in previous posts, I will just explain which techniques are usually used for pathfinding in Tower Defense games and how they are executed:

Waypoint movements:


This method cannot really be considered a pathfinding method, because monsters simply follow a fixed path. Each enemy heads towards the next waypoint until it reaches the end. Since levels are usually static (predefined), waypoints can alternatively be replaced by a series of movements (turn right – go straight – turn left – go straight). The advantage of this technique is that both the code and its execution are very light (no path computations). This method is perfect for static maps.
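
This waypoint-following loop can be sketched in a few lines (a minimal Python sketch; `step` and the 2D tuples are illustrative names, not from any engine):

```python
import math

# Minimal waypoint-following sketch: the enemy heads toward the next
# waypoint and advances the index on arrival.
def step(position, waypoints, index, speed):
    """Move `position` toward waypoints[index]; return new position and index."""
    if index >= len(waypoints):
        return position, index            # reached the end of the path
    tx, ty = waypoints[index]
    x, y = position
    dx, dy = tx - x, ty - y
    dist = math.hypot(dx, dy)
    if dist <= speed:                     # close enough: snap and target next waypoint
        return (tx, ty), index + 1
    return (x + dx / dist * speed, y + dy / dist * speed), index

pos, i = (0.0, 0.0), 0
path = [(4.0, 0.0), (4.0, 4.0)]
for _ in range(100):                      # called once per frame in a real game
    pos, i = step(pos, path, i, speed=0.5)
```

Since the path is fixed, no search runs at all: each frame costs a couple of arithmetic operations per enemy.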

If you have interactive maps, you have to implement pathfinding algorithms (usually A* or Dijkstra). I will not explain in detail how Dijkstra and A* work, but if you want to read more about them you can read my previous post : About Pathfinding in AI

A* :

A* can quickly find the best path for your monsters. A* will not visit every tile of your map but only the most relevant ones (based on a heuristic) and will stop as soon as the goal is found. This algorithm is perfect if all monsters spawn from the same point and head to the same end point on an interactive map (e.g. with obstacles that the player can add to the map).


Dijkstra :

Dijkstra will also find the best path but will visit more nodes than A*. However, if you have a lot of enemies spawning from different points but heading to the same end point on an interactive map, it may actually be cheaper (since A* needs more operations per node) to run this algorithm backwards, from the goal.


A second alternative is to run A* backwards and modify the ending condition so that it can find more than one end point.
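
Run backwards from the goal, Dijkstra produces a distance field that every enemy can share, whatever its spawn point. A minimal sketch (hypothetical grid encoding: 0 = walkable, 1 = blocked):

```python
import heapq

# Dijkstra run *backwards* from the single goal: every walkable tile gets
# its distance to the goal, so any number of enemies, from any spawn,
# can simply step to the cheapest neighbouring tile.
def distance_field(grid, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
field = distance_field(grid, goal=(0, 0))
# An enemy at (2, 0) just moves to whichever neighbour has the smallest distance.
```

The field only needs recomputing when the player places or removes an obstacle, not per enemy.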

a)    With the game idea for the final assignment in mind, create a list of all the different types of AI that are needed to realise this. Consider what techniques you want to use for decision making, pathfinding etc. if required.

Enemy AI

Since good pathfinding is the only requirement for a basic Tower Defense, enemy AI in this genre is usually rather limited. However, you can still make many different monsters that do not behave the same way (monsters that speed up other enemies, healers…).

The following pictures show the ideas I have for my game :

 1. Standard monster


This monster will be the most basic one. It will just find the shortest path from its starting point to the end and follow the path.

A* will be used for the pathfinding.

2. Half-Blind monster


This monster will not see obstacles until a certain moment but will still move from its starting point to the end point.

This monster’s path was inspired by the previous lab: it is just a small variation of A* where the algorithm first tries the straight path to the goal before considering obstacles.

3. Flying monsters


This monster will fly from its starting point to the end point.

There are two possibilities for this monster: we can either move it straight towards its goal from its spawn point (no algorithm needed) or apply the A* algorithm and add some obstacles in the sky, like trees or tall buildings; for this, a simple boolean flag can be added to mark flyable tiles.
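
The flyable-tile boolean could look like this (a sketch; the tile dictionaries and the `neighbours` helper are illustrative):

```python
# Per-unit traversability sketch: each tile stores `walkable` and `flyable`
# flags, and neighbour expansion checks whichever one applies to the unit.
def neighbours(tiles, r, c, flying):
    rows, cols = len(tiles), len(tiles[0])
    out = []
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < rows and 0 <= nc < cols:
            tile = tiles[nr][nc]
            passable = tile["flyable"] if flying else tile["walkable"]
            if passable:
                out.append((nr, nc))
    return out

# A wall blocks ground units but not flyers; a tall tree blocks both.
wall = {"walkable": False, "flyable": True}
tree = {"walkable": False, "flyable": False}
road = {"walkable": True,  "flyable": True}
tiles = [[road, wall],
         [tree, road]]
```

The same A* code then works for both monster types; only the neighbour function changes.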

We can also add enemies like birds that avoid the tiles around a scarecrow, or amphibians that swim faster in ponds and rivers; many variants can be designed.

4. Turrets destroyers


This monster has more health than the others and will destroy your turrets.

For this enemy, I am thinking of adding 2 or 3 spawn points to the map so that enemies can randomly spawn from different locations. This monster will head for the closest turret using A*, and I may add a group behaviour: if three monsters are already trying to destroy one turret, the next monster will attack another one.

5. The sneaky thief


This thief will try to steal your resources and disassemble your turrets.

For this enemy, the AI will be a little more complex. He will try to steal resources from your camp (especially money) when the player is not nearby; otherwise he will focus on your best turrets (the most valuable ones, or those with the best fire rate). He will also flee if he sees the player. A* will be used for his movements, and for his behaviour I will use Unity’s Animator tool, which is based on the state machine model.

Depending on the player’s experience, I may not add all of these AI, or I may change the behaviour of some monsters, but among them numbers 1, 3, 4 and 5 will definitely be implemented in the game.


Once I have the basic enemies of a classical Tower Defense, I may want to add survival elements. For example, the player will be able to hit some enemies and will need to eat and drink; food can also be dropped when monsters are killed, and can be eaten or sold. Therefore, I thought about some more AI, as follows :

6. Wolves


Wolves will eat meat on the ground but attack you if you are too close to them.

Wolves will simply eat the meat close to them. They usually move in groups and will approach and attack the player if he gets too close, then chase him. If the player attacks a wolf next to another, the other one will also attack the player. A* can be used for their pathfinding, but so can random wandering, where a wolf just eats food within a certain range.

7. Little robot


This nice robot will help you to gather food dropped by dead monsters.

For this AI, the problem is a bit more complex. The first thing to think about is where the food will be dropped (can two pieces of meat share the same tile, or only one piece per tile?). If food is dropped at random positions on the map, then we cannot use the map graph to run our algorithms, which is not convenient for now because I would have to create another graph. So, assuming food can only be dropped on a node of the graph, I will have several pieces of food spread across several nodes. The problem is the following: the robot must leave the camp, gather each piece of food and then return to the camp. The second thing to think about is the robot’s knowledge: will it be aware of every piece of food and its position in the world, or not?

  • Considering the robot doesn’t know the position of each piece of food :

The first approach is to predefine a tour and then apply A* to each piece of food within the robot’s range.


The robot will simply follow the path, and if there is food within its scanner range it will take it (by applying A*) or leave it (if there is no path).

However, this approach is not ideal because the predefined tour does not take the map’s obstacles into account. A possible fix is a semi-random wander around the map, or visiting every walkable tile on the map, but that is far from optimal.

  • Considering the robot does know the position of each piece of food :

This problem can be assimilated to the travelling salesman problem, which is the following: a salesman has a list of cities he wants to visit, and we want to find the shortest tour that visits each city only once (with one difference: the robot can actually return to a tile it has already visited). There is currently no fast exact solution to this problem; brute force runs in O(n!), but the Held–Karp algorithm solves it in O(n^2 * 2^n) time.

To make the robot more realistic, we can add a flying scanner that scans all the pieces of food on the map beforehand.

The easiest (heuristic) way to solve this is to repeatedly choose the nearest neighbour, using A* to reach each piece of food until all pieces have been collected. This is not the best optimisation, but on average this method is only about 25% longer than the shortest possible tour [see The Traveling Salesman Problem: A Case Study in Local Optimization].
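
The nearest-neighbour heuristic can be sketched as follows (straight-line distance stands in for the A* path cost here; the names are illustrative):

```python
import math

# Nearest-neighbour tour sketch: repeatedly visit the closest remaining
# piece of food.  In the game, math.dist would be replaced by the A* path
# cost between the two tiles.
def nearest_neighbour_tour(start, targets):
    tour, current, remaining = [start], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(current, t))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

tour = nearest_neighbour_tour((0, 0), [(5, 5), (1, 0), (1, 1)])
```

Each step is greedy and local, which is what keeps the method cheap compared with an exact TSP solver.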

On top of that algorithm, we can apply the 2-opt algorithm, which consists of reconnecting edges in a (Hamiltonian) cycle whenever the new path is shorter. Nevertheless, the complexity is still high (the worst case is O(n^3)).
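
A 2-opt pass along those lines might look like this (a sketch applied to an open tour rather than a full Hamiltonian cycle; `two_opt` and the sample points are illustrative):

```python
import math

def tour_length(tour):
    # Total straight-line length of an open tour.
    return sum(math.dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

# 2-opt sketch: keep reversing a segment of the tour whenever doing so
# shortens it, until no improving swap remains.
def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate) < tour_length(tour):
                    tour, improved = candidate, True
    return tour

# A deliberately crossed tour that 2-opt untangles.
bad = [(0, 0), (1, 1), (1, 0), (0, 1)]
better = two_opt(bad)
```

The crossed tour above shrinks from about 3.83 to 3.0 after one segment reversal.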


From my point of view, it is not necessary to apply the 2-opt algorithm, since the player will be able to gather food himself (before he can afford the robot), and if some nodes disappear the robot will have to re-run the algorithm (which consumes a lot of time and memory).

The nearest-neighbour method seems to be a good solution to this problem. To improve on it, we can also run the A* pathfinding (in Unity) on a separate thread from the game code.

Knowledge representation

First, what is knowledge representation for AI?

Knowledge representation is the amount and type of data that an AI agent holds. Depending on it, the agent can plan, remember and/or predict things. For example, we can simulate the AI’s sight by drawing a range around its head. Knowledge representation is especially useful for designing emotions (confusion, surprise, fear…) and a particular behaviour for each emotion.

Knowledge representation can be assimilated to memory storage where data (knowledge) is kept; it can take several forms (working memory, short-term memory, long-term memory or episodic memory), and the AI may be able to prioritise some data over the rest (chasing one type of enemy in priority).

However, knowledge representation can be a disadvantage for a game, since it adds complexity and can be inefficient or simply irrelevant to the game.

In Tower Defense games, there is usually not much knowledge representation. Enemies are already aware of the path they will take towards the goal (to keep the game interesting).

However, some of my enemies might have some knowledge : wolves will be sensitive to the player’s threat, the robot will have to learn the food positions (via the flying scanner) before it can build a path, and the sneaky thief will be able to detect the player from a certain distance.

References :

Dijkstra’s algorithm (wikipedia)

A* search algorithm (wikipedia)

Travelling salesman problem (wikipedia)

2-opt (local search algorithm wikipedia)

Paper by Mahsa Sadat Emami Taba (2009) Solving Traveling Salesman Problem With a Non complete Graph

Paper from Johnson, D. S.; McGeoch, L. A. (1997). The Traveling Salesman Problem: A Case Study in Local Optimization

Course by Martin Burtsche High speed 2 opt tsp solver

Book by Ray Barrera, Aung Sithu Kyaw, Clifford Peters, Thet Naing Swe — Unity AI Game Programming (second edition)

Simple decision making examples

Decision making task

Today, we will answer a simple decision-making exercise :

a) Draw a decision tree for an AI bot that must exhibit the following behaviours:
i)      If there is no enemy in sight, random wander
ii)     If an enemy is in sight and has more health than the bot, then flee
iii)    If an enemy is in sight and has less health than the bot, then attack
iv)     If the bot is at less than 50% health, take healing
b) Draw a state machine for a guard with the following behaviours:
i)      Patrol
ii)     If an intruder is heard, search for a few minutes then resume patrol
iii)    If an intruder is seen, alert other guards and then attack
iv)     If the intruder is captured or killed, resume patrol
c) With the previous state machine, adapt it to be a hierarchical state machine:
i)      Expand the search state to move closer to the sound, then if nothing is
        seen, to randomly select a direction to search. If a sound is heard again 
        move in that direction. Maintain this pattern for 5 minutes before 
        resuming patrol state.
ii)     Expand the alert/attack state to use missile weapons if the target is 
        beyond a certain range otherwise select a melee weapon and close to 
        melee range.

a) Decision tree example for an AI bot


In the diagram, actions are framed while conditions are circled.

To make this decision tree work, each action must end by re-evaluating the tree, usually inside a loop where the bot checks the decision tree every tick so it does not get stuck in one action. For example, here, if the enemy has less health, the bot will attack it, but once the bot’s health drops below the enemy’s it will flee, and it will then heal itself when the enemy is no longer in sight. Decision trees are a good representation for a simple AI, and they are organised hierarchically.
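
The tree from a) can be sketched as nested conditions re-evaluated every tick (a Python sketch; the `bot` object, its fields and the priority given to healing are my assumptions, since the exercise does not fix an ordering):

```python
from types import SimpleNamespace as NS

# Decision tree from a), as nested conditions checked every tick.
def decide(bot):
    if bot.health < bot.max_health * 0.5 and bot.has_healing:
        return "heal"                       # rule iv (assumed highest priority)
    if bot.enemy_in_sight is None:
        return "wander"                     # rule i
    if bot.enemy_in_sight.health > bot.health:
        return "flee"                       # rule ii
    return "attack"                         # rule iii

bot = NS(health=100, max_health=100, has_healing=True,
         enemy_in_sight=NS(health=40))
```

Calling `decide` once per frame is exactly the "repeat the sequence of choices" loop described above.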

b) State machine example for a guard

Same here: we have to repeat actions until something happens for this state machine to work. The difference with the previous decision-making pattern is that the AI carries on doing the same thing until something triggers another state. Here, if the guard sees an intruder while patrolling, he will keep alerting other guards until one guard (or more?) is alerted. State machines can be more realistic than decision trees because the bot memorises that a certain event happened and performs a certain action for a while. Indeed, if you provoke a bull, it stays angry for a while even after you stop provoking it.
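
The guard’s states and transitions can be sketched as a lookup table (a Python stand-in for Unity’s Animator transitions; the state and event names are illustrative):

```python
# Guard state machine sketch: (state, event) -> next state.
TRANSITIONS = {
    ("patrol", "heard_intruder"): "search",
    ("search", "timeout"):        "patrol",
    ("search", "saw_intruder"):   "alert",
    ("patrol", "saw_intruder"):   "alert",
    ("alert",  "guards_alerted"): "attack",
    ("attack", "intruder_down"):  "patrol",
}

def next_state(state, event):
    # Stay in the current state if the event doesn't trigger a transition;
    # this is the "keep doing the same thing" property described above.
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["heard_intruder", "saw_intruder", "guards_alerted", "intruder_down"]:
    state = next_state(state, event)
```

The hierarchical version of c) would nest another table of this shape inside the search and alert/attack states.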

c) Hierarchical state machine example for a guard


And this is how state machines become hierarchical state machines, so that several actions can live within the same state.

The pros of state machines are that they are simple, easy to design and can still make your AI expressive; the cons are that the number of states and transitions can grow really fast, and there are no shades of grey in transitions or actions (is the noise I’m hearing far away? I want to attack the nearest enemy).

For more complex AI, methods like fuzzy logic or neural networks can be used.

References :

State Machine article

Fuzzy logic (wikipedia)

what is fuzzy logic

About Pathfinding in AI

Implementing Pathfinding

I finally want to make a first-person-view Tower Defense, so I began generating the map and took elements from the previous tutorials to spawn enemies; the main issue, however, was deciding how to implement the pathfinding. Indeed, pathfinding is a crucial element of enemy AI in many games, where monsters or enemies follow the player or head to a particular point while avoiding obstacles.

Obstacles can differ between units. For example, an air force unit might be able to pass over a mountain, while ground or artillery units need to find a way around it.

Implementing Enemy Pathfinding

After doing some research, it seems that A* and Dijkstra are the most commonly used pathfinding algorithms in game AI, but there are differences between them.

Dijkstra is a greedy algorithm which explores all possibilities (nodes) to find the best possible path from one point to another.

A* is considered a “best-first search” because it greedily chooses which vertex to explore next. It finds the best path according to the value of f(v) = h(v) + g(v), where h is the heuristic (estimated distance to the goal) and g is the cost so far.

A* is both complete (it finds a path if one exists) and optimal (it always finds the shortest path) if you use an admissible heuristic function. If the function is not admissible, it is not guaranteed to provide the optimal path.

A*’s complexity depends on the chosen heuristic, but it is usually much lower than Dijkstra’s, which runs in O(|E| + |V| log |V|) (E is the number of edges and V the number of nodes). Indeed, A* is faster than Dijkstra: it requires more operations, and therefore more memory, per node, but it explores far fewer nodes, and the gain is usually worth it.

To conclude, the choice between those 2 algorithms depends on how you will use the algorithm in your game, but usually A* is mostly used because of its performance and accuracy.

If you need the best possible path, you can code either Dijkstra or A*, but if the graph gets huge, you had better choose A* (since Dijkstra will visit all the nodes of the graph). Dijkstra can in fact be considered a “special case of A*” where the heuristic is 0 (no estimation of the distance from the current node to the goal).
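
A minimal grid A* along these lines (a sketch with a Manhattan heuristic; passing `h=lambda a, b: 0` turns it into Dijkstra, matching the “special case” remark):

```python
import heapq

# A* on a 4-connected grid (0 = walkable, 1 = blocked).  With h = 0 this
# degenerates into Dijkstra, since f(v) = g(v).
def astar(grid, start, goal,
          h=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])):
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}                         # cost so far
    came = {}                              # parent links for path reconstruction
    pq = [(h(start, goal), start)]         # open list ordered by f = g + h
    while pq:
        _, node = heapq.heappop(pq)
        if node == goal:                   # rebuild the path goal -> start
            path = [node]
            while node in came:
                node = came[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb], came[nb] = ng, node
                    heapq.heappush(pq, (ng + h(nb, goal), nb))
    return None                            # no path exists

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 1, 0]]
path = astar(grid, (2, 0), (2, 2))
```

Because the Manhattan heuristic never overestimates on a 4-connected unit-cost grid, it is admissible and the returned path is shortest.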

But then when would I choose Dijkstra ?

You may use Dijkstra if you need to explore almost all the nodes in your graph. Since Dijkstra provides the shortest path from the start point to every other node, in certain cases it can be worth it.


source :

For example, if you have a fixed environment and several characters moving from the same point towards different end points. In the example above, the shortest path from A to H is A – C – D – E – G – F – H, but you also get the shortest path from A to B: A – C – D – B.

I decided to code the A* algorithm, since in my future game enemies will only have to move from one point to another, but I still keep Dijkstra in mind.

First path-finding try

First, I needed to extract the nodes from the map, since what I actually have are tiles.


Each tile has the same dimensions (x, y, z) : (4, 1, 4)


The map I generated is a 10-by-10 tile map, so here is the actual graph that I should get :


So each tile has a maximum of 8 neighbours, and if a neighbour is walkable, an edge exists between the tile and that neighbour.

Here is my representation of my nodes :


So I tried to code the A* algorithm myself from pseudocode :


I mainly used these websites to code my algorithm:

Then, for the heuristic part, I used a simple diagonal heuristic found on YouTube :


Source :
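
A common form of the diagonal (octile) distance for 8-neighbour grids is the following (a sketch; I cannot reproduce the exact formula from the video, so this is the standard version with straight moves costing 1 and diagonal moves costing √2):

```python
import math

# Octile distance: move diagonally while both axes have distance left,
# then straight for the remainder.  Standard diagonal heuristic for
# 8-neighbour grids with costs 1 (straight) and sqrt(2) (diagonal).
def diagonal_heuristic(a, b):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)
```

On a purely diagonal move like (0, 0) to (3, 3), it returns 3√2, and on a straight move like (0, 0) to (3, 0), it reduces to plain distance 3.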

So here is the result:

The yellow tile is the starting point, the black one is the end, the blue tiles show the path and the red tiles are the obstacles.


I realised that my algorithm was not the most efficient one: either I hadn’t chosen the best heuristic, or there was a mistake in my algorithm.

So I did more research and found a really good article on heuristics here:

It is said “If h(n) is sometimes greater than the cost of moving from n to the goal, then A* is not guaranteed to find a shortest path, but it can run faster.”

So I tried several heuristics, but nothing changed.

And after some more research, I figured out that my algorithm wasn’t providing the optimal path but was faster than the basic A* :

Source : A* search algorithm (wikipedia)

Indeed, it was not exploring as many nodes as the original A* algorithm does. My algorithm was actually a best-first search, which prioritises the closest nodes using the heuristic alone.

And that’s because, instead of ordering the open list by f = g + h, I was always choosing the node closest to the end point (lowest h) and visiting its neighbours first.
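
The whole difference boils down to the priority used when popping the open list (a sketch; the function names are illustrative):

```python
# Priority functions for popping the open list.
def astar_priority(g_cost, h_cost):
    return g_cost + h_cost   # A*: optimal with an admissible heuristic

def greedy_priority(g_cost, h_cost):
    return h_cost            # best-first: faster, but can miss the shortest path
```

My buggy version was effectively using `greedy_priority`, which explains both the speed-up and the suboptimal paths.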


And then my algorithm worked fine :


Here is my complete code if you want :