In previous posts, I've talked about how a Behavior Tree is made up of Behavior nodes, assembled with the connective tissue of sequence and selector nodes. This is a great start, but it can be useful to provide a means for suspending traversal through the tree, doing part of the work now and coming back to finish later, especially if some behaviors take a while.
We used Unity on PC for BattleTech, which gave us a certain structure to work with. You can set up "Game Objects" that get periodic update calls, so that you can do AI or other internal logic for each object. You have to finish what you're doing pretty quickly so that other things, like rendering, or other stuff running on the computer outside the game, can continue to operate. There are certainly approaches you can take to make this a little less intrusive, so that you can pretend you have the whole computer to yourself, but even these have costs.
The way I connected the AI system to the rest of the BattleTech game was that the game would inform a unit when it was time for that unit to move. In BattleTech, a human player has some freedom to move their units in any order, or to defer a movement. We thought a little bit about this, but I felt that the value of allowing the AI to choose its order would be small, and if one unit moved at the end of turn 1 and then again at the beginning of turn 2, that might feel like the unit was moving too fast, even though the human player has the same option.
So, the game tells us that it's time for one of the AI mechs to move, and the AI begins going through the behavior tree. A lot of the nodes are "internal", having no effect on the outside world and no visuals. Examples of this are the observational nodes, which return success or failure for some question about the outside world or the mech itself. These usually execute very quickly. However, there are things that take longer, like pathfinding.
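To make that distinction concrete, here's a minimal sketch of a behavior node interface and an observational node. This is Python with invented names (the actual game was C#/Unity, and none of these classes are the shipped code):

```python
from enum import Enum

class Result(Enum):
    SUCCESS = 1
    FAILURE = 2

class BehaviorNode:
    """Base class: every node answers a tick with a Result."""
    def tick(self, unit):
        raise NotImplementedError

class IsEnemyVisibleNode(BehaviorNode):
    """An 'observational' node: it asks a yes/no question about the world
    and has no visible effect on the game."""
    def tick(self, unit):
        # unit.visible_enemies() is a stand-in for whatever sensor query
        # the game layer actually provides.
        return Result.SUCCESS if unit.visible_enemies() else Result.FAILURE
```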
Our game was broken up into missions, some of which were a simple arrangement of a few different spawn points on the map, with the human player's mech starting at one spawn point, the AI mechs spawning elsewhere, and the last player standing winning.
We also had other missions that were more carefully scripted. In these missions, the level designer would provide "orders" to give the AI mechs things to do. Often, the AI units would start at one location and go to a second location. Sometimes it was left up to the AI to figure out a good path to the destination; sometimes a path was provided.
In other games, even waiting for movement to complete is an important constraint. It's not so bad for BattleTech, because the Behavior Tree code generates orders, which it hands to the underlying game code to act on, much the same way that human player input is acted on. Once the AI decides on its orders, it submits them, and the AI goes away until the next time it needs to move a unit.
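As a sketch of that split between deciding and acting (invented names again, and ignoring for the moment that the deciding itself gets timesliced, which is what the rest of this post is about):

```python
from dataclasses import dataclass

@dataclass
class MoveOrder:
    """The AI's decision, expressed the same way a player's move would be."""
    unit_id: int
    destination: tuple          # e.g. an (x, y) position to move to
    sprint: bool = False

def ai_move_unit(game, unit, tree):
    """Called by the game when it's this AI unit's turn to move."""
    orders = tree.decide_orders(unit)   # run the behavior tree to get orders
    game.submit_orders(orders)          # same pathway that player orders go down
    # That's it -- the AI goes away until the game asks about another unit.
```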
Even so, generating these orders might take up to a second, and we want to keep drawing the screen, playing music, and doing other things, so we broke up that processing by "timeslicing", which just means doing a little bit of work, letting the computer do what it needs to do, then doing a little bit more, and so on, until completion.
This timeslicing approach meant that we would be working on traversing our tree, executing behavior nodes, and then the timer would tell us to pause our work. If you have ever played "Red Light/Green Light" or "Granny's Footsteps", it's similar.
The way I got this to work was to check the clock when I started working. I would do as much execution as I could, up to the point where the clock told me that I was out of time, and then I would pass a special return code, RUNNING, up the tree instead of success or failure. This value would continue up the tree to the root, which would return RUNNING, and I would know that the AI was still in progress.
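In sketch form (extending the Python above, with a made-up per-update budget, and with the tick signature growing a runner argument so nodes can check the clock), the Result enum gains a third value, the runner records a deadline when it starts, and expensive nodes check that deadline as they work:

```python
import time
from enum import Enum

class Result(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3     # "out of time, still working" -- neither success nor failure

class TreeRunner:
    """Drives one behavior tree a slice at a time."""
    BUDGET_SECONDS = 0.005      # arbitrary budget per update for this sketch

    def __init__(self, root):
        self.root = root
        self.deadline = 0.0

    def update(self, unit):
        """Called once per game update; returns RUNNING until the tree finishes."""
        self.deadline = time.monotonic() + self.BUDGET_SECONDS
        return self.root.tick(unit, self)

    def out_of_time(self):
        return time.monotonic() >= self.deadline

class FindPathNode(BehaviorNode):
    """An expensive node: does a chunk of pathfinding per call, and reports
    RUNNING when the runner says the timeslice is used up."""
    def tick(self, unit, runner):
        while not unit.path_search_done():      # hypothetical pathfinder API
            unit.step_path_search()
            if runner.out_of_time():
                return Result.RUNNING           # pause here, resume next update
        return Result.SUCCESS if unit.path_found() else Result.FAILURE
```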
Then, on my next update, I would follow the chain of RUNNING nodes down to the bottommost one, let it continue as before, and pass RUNNING up the next time we ran out of time. Eventually, we'd have orders to submit, and we wouldn't need to do any more evaluation.
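Most of that resuming falls out of the composite nodes remembering where they were. A sequence node along these lines (again, a simplified sketch rather than the shipped code) walks right back down to the RUNNING child on the next update:

```python
class SequenceNode(BehaviorNode):
    """Ticks children in order; a RUNNING child is remembered so the next
    update resumes at that child instead of starting over."""
    def __init__(self, children):
        self.children = children
        self.current = 0          # index of the child we're currently on

    def tick(self, unit, runner):
        while self.current < len(self.children):
            result = self.children[self.current].tick(unit, runner)
            if result == Result.RUNNING:
                return Result.RUNNING      # keep self.current; resume here next time
            if result == Result.FAILURE:
                self.current = 0           # reset for the next full evaluation
                return Result.FAILURE
            self.current += 1              # SUCCESS: move on to the next child
        self.current = 0
        return Result.SUCCESS
```

With every composite node doing this, "following the chain of RUNNING nodes down to the bottommost one" is just each parent re-ticking the child it remembered.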
In a different game, where the behavior trees were being evaluated more frequently, or in a more ongoing fashion, you could imagine a "walk to <location>" behavior that might take several seconds to complete, depending on how far away the location is and how fast or slow the AI walks. You would return RUNNING as long as the character was walking (don't confuse the RUNNING return state, which refers to the evaluation of the Behavior Tree, with the physical character's movement animation).
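Such a behavior might look something like this, in a hypothetical real-time game (start_walking_to and has_arrived are made-up movement APIs, not anything from BattleTech):

```python
class WalkToNode(BehaviorNode):
    """A long-running action node: issue the move once, then report RUNNING
    every tick until the character actually arrives."""
    def __init__(self, location):
        self.location = location
        self.started = False

    def tick(self, character, runner):
        if not self.started:
            character.start_walking_to(self.location)   # kick off the movement
            self.started = True
        if character.has_arrived(self.location):
            self.started = False                        # reset for the next use
            return Result.SUCCESS
        # The tree is "running"; the walk animation is the game's business.
        return Result.RUNNING
```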
If your game has this kind of frequent or ongoing AI processing, you would want to validate that the various sensors you used to decide on the ongoing action are still consistent with doing it. In the grocery shopping example, if you suddenly ran out of gas on your way to the grocery store, you'd want the AI to be aware of this right away and select a different behavior for getting food.
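One way to get that, continuing the sketch: keep the observational nodes in front of the long-running action in a sequence that restarts from its first child every tick, under a selector that can fall back to another behavior. (The fuel check, the drive node, and the destination here are all made up for the grocery example.)

```python
class SelectorNode(BehaviorNode):
    """Tries children in order and returns the first result that isn't FAILURE."""
    def __init__(self, children):
        self.children = children

    def tick(self, character, runner):
        for child in self.children:
            result = child.tick(character, runner)
            if result != Result.FAILURE:
                return result
        return Result.FAILURE

class HaveGasNode(BehaviorNode):
    """Observational node, re-checked every tick while the drive is in progress."""
    def tick(self, character, runner):
        return Result.SUCCESS if character.car_fuel > 0 else Result.FAILURE

class RecheckedSequence(SequenceNode):
    """A sequence that restarts from its first child every tick, so its
    observational children keep being re-validated while a later child is RUNNING."""
    def tick(self, character, runner):
        self.current = 0
        return super().tick(character, runner)

class DriveToNode(WalkToNode):
    """For this sketch, driving works just like walking; only the game-side
    movement call would differ."""
    pass

GROCERY_STORE = (120, 45)   # placeholder destination

# If HaveGasNode fails partway through the drive, the sequence fails and the
# selector falls through to walking to the store instead.
get_food = SelectorNode([
    RecheckedSequence([HaveGasNode(), DriveToNode(GROCERY_STORE)]),
    WalkToNode(GROCERY_STORE),
])
```

A real implementation would also want to cancel the in-progress drive when the sequence is abandoned, which is part of why this gets structurally messier.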
How far back do you go? Seems like you need to re-run the observational behaviors to figure out the context to be acting in, but that breaks observational behaviors away from behaviors with physical effects, which makes the structure a little different. For more discussion on that, you'll have to go elsewhere, as we had the luxury of essentially one unit moving at a time.
Next up: Influence Maps for Spatial Awareness