# A User Guide to *Beyond GOAP* [DRAFT]

## What is GOAP?

GOAP (Goal-Oriented Action Planning) refers to a family of planning AIs inspired by [Jeff Orkin's GOAP](http://alumni.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf).

In GOAP, an agent is assigned a goal (escape a room, take cover, knock a target down, heal, ...) and uses actions whose *preconditions*, *cost* and *effects* are known, in order to fulfill the goal condition.

A search algorithm (such as A*) resolves the action sequence which constitutes a path to the goal.

A *heuristic*, estimating the remaining cost to reach the goal, is often provided.
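
As a rough illustration (hypothetical types, not this library's API), a single GOAP action bundles a precondition, a cost and an effect over some world state:

```cs
// Illustrative only - hypothetical shapes, not part of Beyond GOAP
delegate bool Precondition<S>(S state);  // may this action run here?
delegate S    Effect<S>(S state);        // world state after the action

record GoapAction<S>(Precondition<S> Pre, float Cost, Effect<S> Apply);

// e.g. 'open door': requires a key, costs 1, and opens the door
// new GoapAction<World>(s => s.hasKey, 1f, s => s with { doorOpen = true });
```

The planner searches over chains of such actions, using the heuristic to prioritize promising branches.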

## While you GOAP

While reading, also check the [Sentinel demo](https://youtu.be/mbLNALyt5So) and associated [project files](https://github.com/active-logic/xgoap-demos).

## Planning Agent/Model

This library provides a solver and APIs to help you implement your own planning AIs.

The critical part of your AI is the model, aka 'agent'; the model represents the AI's knowledge of the environment it operates in, including itself.

- Your model is a class implementing `Agent` or `Mapped` (to express planning actions, aka 'options').

The solver needs to generate and organize copies of the model object; therefore cloning, hashing and comparing for equality are common operations, applied many times over.

- Minimally, tag your model [*Serializable*](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/serialization/) or (much better) implement `Clonable`.
- Override `Equals()` and `GetHashCode()` (sloppy hash codes decrease performance); see the sketch below.
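
For instance, a minimal model meeting the above requirements might look like this (hypothetical fields; assumes `System.HashCode` is available - otherwise, combine fields manually):

```cs
[System.Serializable]
public class RoomModel{

    public bool hasKey, doorOpen;
    public int x, z;

    override public bool Equals(object other)
        => other is RoomModel m && m.hasKey == hasKey
                                && m.doorOpen == doorOpen
                                && m.x == x && m.z == z;

    // Combine all state; weak hash codes cause collisions in the solver
    override public int GetHashCode()
        => System.HashCode.Combine(hasKey, doorOpen, x, z);

}
```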

### Planning actions

Specify planning actions (options) via `Agent`, `Mapped`, or both.

Usually, `Agent` suffices - it is easier to implement, and (currently) faster.

```cs
Option[] Agent.Options() => new Option[]{ JumpForward, Shoot, ... };

public Cost JumpForward{ ... }
```

Use `Mapped` when you need to parameterize planning actions:

```cs
(Option, Action)[] Mapped.Options(){
    var n = inventory.Length;
    var opt = new (Option, Action)[n];
    for(int i = 0; i < n; i++){
        var j = i; // don't capture the loop variable!
        opt[i] = ( () => Use(j),
                   () => client.Use(inventory[j]) );
    }
    return opt;
}
```

In the above example:
- An inventory is considered
- An option is generated for each item in the inventory
- Options are mapped to game actions via `(Option, System.Action)` tuples

NOTE: *In the above we are careful* not *to use the loop variable `i` inside the lambdas; C# lambdas capture variables, not values, so every invocation would otherwise see the final value of `i` (that is, `n` - an out-of-range index).*

`Mapped` options are flexible and type safe, giving you complete control over how a planning action maps to a game action; `Agent`, on the other hand, is much faster to implement.

### The Clonable interface

Implement `Allocate()` to create a model object instance. The purpose of this function is to perform all memory allocations upfront, not to determine state.

Implement `Clone(T storage)` to copy model state into `storage`. This function **must** assign all fields (to avoid leaking dirty state).

```cs
class MyModel : Clonable<MyModel>{

    T byRef;     // Assuming T : Clonable<T>; not required but handy
    int byValue;

    MyModel(){
        byRef = new T(); // allocate everything...
        // byValue = 5;  // ...but let's not do extra work here
    }

    public MyModel Allocate() => new MyModel();

    public MyModel Clone(MyModel storage){
        byRef.Clone(storage.byRef);  // don't shallow copy references
        storage.byValue = byValue;   // assign ALL fields
        return storage;
    }

}
```

Designed for instance reuse, this API enables optimizations such as pooling.

Note: *The `Allocate` API is required because newing a `T : class, new()` object resolves to an `AlloceSlow` variant (the name says it all).*

### Test your model

Beyond GOAP cleanly separates your planning model from the game engine and/or actor (the object implementing actual game actions). This allows putting your model under test even before integrating an actual game actor.
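
For instance, planning actions can be exercised directly in a plain unit test - a sketch assuming NUnit and the hypothetical `RoomModel` from earlier, here assumed to expose `PickKey` and `OpenDoor` planning actions:

```cs
using NUnit.Framework;

public class RoomModelTests{

    [Test] public void PickKey_then_OpenDoor_reaches_goal(){
        var model = new RoomModel();   // hypothetical model
        model.PickKey();               // planning actions run directly...
        model.OpenDoor();              // ...no game engine involved
        Assert.IsTrue(model.doorOpen); // the goal condition holds
    }

}
```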

## Integration

With a working model handy, you want to plug this into your game/simulation. The library provides a simple integration, mainly intended for (but not tied to) Unity3D.

The integration implements a two step *(planning -> action)* cycle:

1 - A plan is generated
2 - The *first* action in the plan is applied
(Rinse and repeat until the goal is attained)

We might plan once and step through every action in sequence; however, since world state changes dynamically, re-planning at each step keeps our agents on track.
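
In outline, the cycle amounts to the following (a sketch - `solver.Solve` and `Apply` are stand-ins, not the actual integration API):

```cs
// Illustrative planning/action cycle - not the actual GameAI internals
while( !goal.IsMet(Model()) ){
    var plan = solver.Solve(Model(), goal); // 1 - plan from current state
    Apply(plan.First());                    // 2 - apply the first action only
    // ...wait for the action to complete, then re-plan from fresh state
}
```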

NOTE: *In the future the integration will give you more control over how often replanning is applied.*

To use the integration, subclass `GameAI`, as explained below.

### Subclassing `GameAI`

A `Goal` consists of a function, which verifies whether an instance of the model `T` satisfies the goal, and a heuristic function `h`, which estimates the distance/cost between a model state designated as 'current' and the goal.
Sometimes you don't have a heuristic, or can't come up with anything just yet. That's okay (still, a heuristic dramatically speeds up planning).

[`GameAI`](../Runtime/GameAI.cs) specifies a handful of functions that you need to implement in order to get your game actors going:

- Supply a goal for the agent to strive towards.
- Link your planning model.
- (Optionally) Implement an `Idle()` mode.
- Implement `IsActing()` to indicate when the actor is busy performing an action (the planner runs while this returns false).

The `Goal()` method (assume `x` of type `T`):

```cs
override public Goal<T> Goal() => (
    x => cond,      // such as `x.someValue == true`
    x => heuristic  // such as `x.DistTo(x.target)`, or null if unavailable
);
```

Your implementation of `T Model()` should return a model instance which represents the current state of the agent and its environment, for example:

```cs
// Model definition
class MyModel{
    float x, z;
    public MyModel(float x, float z){ this.x = x; this.z = z; }
}

// inside MyAI : GameAI<MyModel>
override public MyModel Model(){
    return new MyModel(transform.position.x, transform.position.z);
}
```

While `IsActing()` returns false, the planner will be running, evaluating the next action. How you implement this (whether event based, or testing the state of the game actor, ...) is entirely up to you; likewise the `Idle()` function.

```cs
override public bool IsActing() => SomeCondition() && SomeOther();
```
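
A simple, flag-based alternative might look like this (hypothetical `busy` flag, toggled by your game actor as actions start and complete):

```cs
bool busy;  // set when a game action starts, cleared upon completion

override public bool IsActing() => busy;
```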

### Providing counterparts for planning options

Since planning actions aren't 'real' game actions, your `GameAI` implementation must supply their game counterparts.

- With `Agent`, all planning actions must have same-name, no-arg counterparts in `GameAI`.
- With `Mapped`, one approach consists in defining an interface which specifies methods to be implemented both as planning actions, and as game actions; see the sketch below. The [Baker](../Tests/Models/Baker.cs) example illustrates this approach.
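
In outline, the interface approach looks like this (hypothetical names; bodies elided - see Baker for the real thing):

```cs
// Methods required both of the planning model and of the game actor
public interface BakerActions{
    void GetFlour();
    void BakeBread();
}

// The model plans using lightweight effects; the actor does the real work
class BakerModel : BakerActions /* + Agent or Mapped */ { /* ... */ }
class BakerActor : MonoBehaviour, BakerActions         { /* ... */ }
```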

## Running your AI (Unity 3D only)

Once you have implemented your `GameAI` subclass, it can be added to any game object (in Unity, `GameAI` derives from `MonoBehaviour`).

Additionally, tweaks are available...

- *Verbose* - gives you basic information (in the console) about which actions are applied to the game AI.

Then, under 'solver params':

- *Frame budget* - max number of planning actions per game frame.
- *Max nodes* - max number of states that may exist within the planner at any given time.
- *Max iter* - the max number of iterations allowed to find a solution, after which the planner just bails out.
- *Tolerance* - how closely the heuristic should be followed. For example, if you don't care about a $10 difference (where 'cost' represents money) or a 0.5 second delta (where time is the cost), set this to $10 or 0.5 seconds.
Leaving this at zero forces a full ordering, which significantly slows down the planner; setting it too high weakens the heuristic (which is also slower!), so there is no point in cranking it up.
- *Safe* - if your actions are cleanly implemented, a failing action won't mutate model state; in that case, uncheck this and get a small performance bonus. If unsure, leave checked.