Sunday, January 30, 2011

Counting the Sheep



I had little time today, so I implemented off-mesh link handling for DetourCrowd. Yet another layer of state handling. The logic to detect and change state is not too complicated; I was mainly trying to see how to fit the link animation processing into the system. Not too much on how it looks for now.
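A rough sketch of the kind of state switch I have in mind, with made-up names and a purely parametric placeholder in place of a real link animation (illustrative only, not the actual DetourCrowd code):

#include "DetourCommon.h"  // dtVcopy, dtVdist2D

// Illustrative agent states; the real code will likely name and organize
// these differently.
enum AgentState
{
    AGENT_STATE_WALKING,  // normal steering along the navmesh
    AGENT_STATE_OFFMESH,  // traversing an off-mesh link (jump, climb, ...)
};

struct OffMeshAnim
{
    float startPos[3];  // where the traversal starts
    float endPos[3];    // where the agent lands back on the navmesh
    float t, duration;  // simple parametric playback for now
};

// Called per agent per update. Switches to off-mesh traversal when the next
// corner on the path is the start of an off-mesh link and the agent is close
// enough, then switches back once the placeholder animation has finished.
void updateAgentState(AgentState& state, OffMeshAnim& anim,
                      const float* pos, const float* linkStart, const float* linkEnd,
                      bool nextCornerIsLink, float dt)
{
    if (state == AGENT_STATE_WALKING)
    {
        if (nextCornerIsLink && dtVdist2D(pos, linkStart) < 0.25f)
        {
            dtVcopy(anim.startPos, linkStart);
            dtVcopy(anim.endPos, linkEnd);
            anim.t = 0;
            anim.duration = 1.0f;  // placeholder until real animations exist
            state = AGENT_STATE_OFFMESH;
        }
    }
    else if (state == AGENT_STATE_OFFMESH)
    {
        anim.t += dt;
        if (anim.t >= anim.duration)
            state = AGENT_STATE_WALKING;  // traversal done, resume steering
    }
}

The real version would store the state and animation data per agent and drive an actual jump or climb animation instead of the parametric placeholder.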

It looks like I might be able to make the system so that all custom handlers for steering and off-mesh connection handling will be just one (virtual) function call which handles all the agents. Something like:

void updateSteering(dtCrowdAgent** agents, const int nagents, const float dt);
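A sketch of how such a handler might be plugged in; only the updateSteering() signature above is from the design, the class and everything around it is made up to illustrate the idea:

struct dtCrowdAgent;  // defined in DetourCrowd

// Hypothetical handler base class; DetourCrowd would call updateSteering()
// once per simulation step, at the right point in its update pipeline, with
// all active agents.
class CustomSteering
{
public:
    virtual ~CustomSteering() {}

    virtual void updateSteering(dtCrowdAgent** agents, const int nagents, const float dt)
    {
        (void)dt;  // unused in this empty sketch
        for (int i = 0; i < nagents; ++i)
        {
            dtCrowdAgent* ag = agents[i];
            // Compute a desired velocity for 'ag' here (seek the next path
            // corner, flock with neighbours, follow a formation, ...) and
            // write it back to the agent.
            (void)ag;
        }
    }
};

The same shape would work for the off-mesh connection handler: one call with all the agents currently on a link, and the game code decides how to animate them across.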

DetourCrowd will just make sure the call happens in the right order, and the user can then do the update in any way he feels necessary. Which brings me to a question for the readers of this blog:
How do you mix and match locomotion and other navigation animations?
For example, an agent is running, then jumps over a fence, and then keeps on running. I'm especially interested in what kind of logic might be used to trigger the change from running to jumping, whether there is some kind of transition phase, etc. I know from past experience that this is one nasty topic, and I could really use some feedback on it. Good implementations, failed attempts, something that barely works, I want to hear it all :)

Another hard topic is how to manage potential queueing at an off-mesh link that becomes a choke point, or how to evenly distribute agents across multiple jump-links. Should a link be locked so that only one NPC can use it at a time, or should there be invisible colliders at each end of the jump-link so that other agents avoid the location even more?
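For the locking idea, the bookkeeping itself is simple; a minimal sketch (names made up) might look like this:

#include <vector>

// Hypothetical per-link reservation: only one agent may traverse a given
// off-mesh link at a time; others either queue at the entrance or reroute.
class LinkReservations
{
    std::vector<int> m_owner;  // agent index holding each link, -1 = free
public:
    explicit LinkReservations(int linkCount) : m_owner(linkCount, -1) {}

    // Returns true if the agent now owns the link and may start its traversal.
    bool tryClaim(int linkIndex, int agentIndex)
    {
        int& owner = m_owner[linkIndex];
        if (owner == -1 || owner == agentIndex)
        {
            owner = agentIndex;
            return true;
        }
        return false;  // link busy: wait at the entrance or pick another link
    }

    // Called once the agent has landed on the far side of the link.
    void release(int linkIndex, int agentIndex)
    {
        if (m_owner[linkIndex] == agentIndex)
            m_owner[linkIndex] = -1;
    }
};

The invisible-collider variant could piggyback on the same data: while a link is claimed, add a temporary obstacle at its end points so that obstacle avoidance keeps other agents clear.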

Some of those problems were touched on in this paper: Situation Agents: Agent-based Externalized Steering Logic. I hope to have some time to try some of the ideas out.

5 comments:

  1. Mikko, there was a session recently with Alex from one of the guys who did animation selection for navigating a world (can't remember the game, but pretty much the kind of stuff in Assassin's Creed), worth watching. Short version is that you transform the animation into the space of the thing being animated against (i.e. the thing the agent will cross over, use, vault, etc.). Which means you have specific points in space and must warp the agent to get there before it starts its use animation. So there's definitely a phase where you move from the normal cycle to the warp-to-begin animation, and the same for coming out of it.

  2. Thanks Phil! You mean that parkour session? I have to check it out!

  3. I seem to recall one about something similar (warping to start animations for specifically animated leaps, slides, vaults, etc) for the game "Brink". Given the parkour nature of the gameplay, I am assuming this is the one Mikko is referring to.

    That said, the presentation I watched was more about predicting the player's desire for navigating an environment than about automated navigation. If there is an automated navigation speech/presentation/paper from Brink's dev team - I would REALLY love to know where it is! :)

  4. The masterclass I'm referring to is this:

    CINEMATIC PARKOUR ANIMATION... BUT INTERACTIVE!, Laurent Ancessi
    http://aigamedev.com/premium/masterclass/interactive-parkour-animation/

    The class refers to Uncharted quite a few times; it looks like a similar method was used in that game too. If there is a Brink presentation somewhere, I'd be interested as well :)

  5. As I mentioned, the Brink presentation I saw was more about the USER features and requirements of their S.M.A.R.T. (Smart Movement Across Random Terrain) system. That said, if they provide a decent single-player / co-op game, they must have something in place for the AI to use the same movement capabilities.

    I honestly can't recall the location of the video / presentation I watched, but hopefully the above gives you some Google-fodder.
