Monday, October 26, 2009

Not Bumping Into Things

Usually when I do research to solve a problem, I grow fond of some particular method, try it out, and get disappointed because it does not quite cover all my cases. Then I try to track back towards the root of the idea, figure out the original motivation behind the solution, and find a better-suited method from there.

My relationship with velocity obstacles has reached the point where I have gotten a bit disappointed with the solution. I have a feeling it gets certain things right, but it does not quite deliver.

It looks like the basic idea has been reinvented in the literature over and over again. For example, one of my Google searches led me back to an old Craig Reynolds (Mr. Boids) paper from SIGGRAPH 88 called Not Bumping Into Things.

I must have read that paper numerous times, but it did not strike me until today that it is actually describing velocity obstacles. The left side of the picture at the top of the post is from Reynolds' paper, and the scribble to the right of it is my drawing from another angle.
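For concreteness, here is a minimal sketch (in Python, with made-up names) of the basic VO membership test for two circular agents: a candidate velocity for agent A lies inside the velocity obstacle induced by B when the relative-velocity ray hits the disc of combined radius around B. This is the standard construction from the literature, not code from this post.

```python
import math

def in_velocity_obstacle(p_a, p_b, r_a, r_b, v_cand, v_b):
    """True if candidate velocity v_cand for agent A lies inside the
    velocity obstacle induced by agent B (both modelled as circles)."""
    # Relative position of B as seen from A, and the combined radius.
    px, py = p_b[0] - p_a[0], p_b[1] - p_a[1]
    rad = r_a + r_b
    # Relative velocity of A with respect to B.
    vx, vy = v_cand[0] - v_b[0], v_cand[1] - v_b[1]
    # Ray-disc test: does the ray t*(vx, vy), t >= 0, hit the disc of
    # radius rad centred at (px, py)?
    a = vx * vx + vy * vy
    c = px * px + py * py - rad * rad
    if a < 1e-12:
        return c < 0.0  # zero relative velocity: unsafe only if overlapping
    b = px * vx + py * vy
    disc = b * b - a * c
    if disc < 0.0:
        return False    # the ray misses the disc entirely
    return (b + math.sqrt(disc)) >= 0.0  # intersection at some t >= 0
```

Sweeping v_cand over the agent's reachable velocities traces out exactly the cone in Reynolds' figure.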

I also found Fiorini's dissertation, Robot Motion Planning in Dynamic Environments, which goes a bit deeper into certain details and includes the list of references that is missing from the version of the paper usually available on the net. I wonder if his website, along with a complete list of publications, is available somewhere? The one at NASA does not respond anymore.

One reason I revisited Fiorini's work is that, unlike many more recent papers, he tries to build meaning into the different safe velocity areas. One thing I noticed while testing VOs is that it is really easy to get a lot of noise in the input data (that is, the cones jump around quite a bit), and that seems to be the greatest cause of any oscillation problems. Shit in, shit out. Surprise, surprise!

Just take a look at this YouTube video. The flashing red rectangle in the top right corner is the velocity obstacle of the agent; I have witnessed similar behavior in my tests. Also, because of the feedback nature of the system, once you get noise it will amplify. One way to get rid of it (and, as a side effect, inject a horde of other problems) is to lowpass filter either the velocity that is passed to the VO calculation, or the change in velocity. RVO kinda-sorta does that by adjusting the velocity towards the safe velocity only by a fraction.
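A sketch of the first option: a one-pole lowpass (exponential moving average) on the velocity that gets fed to the VO calculation. The smoothing factor here is just an illustrative guess; smaller values smooth more but also lag more, which is exactly the "horde of other problems" trade-off.

```python
def lowpass(prev, measured, alpha=0.25):
    """Exponentially smoothed 2D velocity.  Feed the smoothed value,
    not the raw one, into the VO calculation to damp the frame-to-frame
    jitter in the cones.  alpha in (0, 1]: smaller smooths (and lags) more."""
    return (prev[0] + alpha * (measured[0] - prev[0]),
            prev[1] + alpha * (measured[1] - prev[1]))
```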

So my gut feeling is that it is possible to build a better method for choosing a good safe velocity by analyzing the arrangement of the VO structure. I also think it is possible to have better temporal behavior using tricks similar to those in Coherent Stylized Silhouettes. Better temporal behavior should fix the usual pitfalls of VOs, such as oscillation and choosing which side to pass on, while still keeping the system responsive.
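One hedged sketch of what such temporal coherence could look like: when scoring candidate safe velocities, add a penalty for straying from last frame's choice, so two nearly tied candidates on opposite sides of a cone do not make the agent flip-flop. The function name and the coherence weight are hypothetical tuning knobs, not anything from the papers above.

```python
import math

def pick_coherent_velocity(candidates, v_pref, v_prev, coherence=0.4):
    """Choose among (already collision-free) candidate velocities,
    biased towards the velocity chosen on the previous frame."""
    def score(v):
        d_pref = math.hypot(v[0] - v_pref[0], v[1] - v_pref[1])
        d_prev = math.hypot(v[0] - v_prev[0], v[1] - v_prev[1])
        return d_pref + coherence * d_prev  # smaller is better
    return min(candidates, key=score)
```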


  1. So the question is: given that the relative velocities of the objects are stable, how come the velocity cones jump so much?

  2. Oh, btw. The other side of this velocity obstacle issue I don't quite like is that the whole thing treats people like vehicles. If you look at the ClearPath example, the humans don't really interact, whereas social norms come into crowd movement quite a lot (although I've not got that on film as evidence yet).

  3. In crowded scenes the difference between several good velocities outside the VOs is usually quite small. So if you choose a good velocity this frame, and so does everyone else, the cones are at slightly different positions on the next iteration and you may end up picking another really good velocity, but in quite a different direction. This is usually called the reciprocal dance in the VO literature.

    Limiting the maximum deviation (RAV) helps a bit, but makes the system more blind; enlarging the cone on one side might help, but you will still have cases where there are two equally good choices; adjusting only by a fraction helps (RVO), but it still does not solve all the problems.

    So the bottom line is that the velocities are usually not stable, and even slight jitter can, and usually will, accumulate because of feedback.
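The RVO trick mentioned above, sketched under the usual assumption: each agent takes only half of the correction towards its chosen safe velocity, trusting the other agent to take the matching half, which also halves the per-frame jitter fed back into the cones.

```python
def rvo_step(v_current, v_safe, fraction=0.5):
    """Reciprocal-style update: take only a fraction of the correction
    towards the collision-free velocity.  fraction = 0.5 assumes the
    other agent takes the other half of the avoidance effort."""
    return (v_current[0] + fraction * (v_safe[0] - v_current[0]),
            v_current[1] + fraction * (v_safe[1] - v_current[1]))
```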

  4. You're right about the social norms. Some simulations which use VOs do seem to create the usual crowd behaviors such as lane formation, etc.

    My goal is to create a robust local navigation model to complement my global pathfinding work. I think it is important to get that working robustly first, and I hope that some other minds may build on top of it to get more interesting results :)

  5. Although you don't seem to be completely fond of the work of "Mr. Boids", his stuff has already been distilled into the OpenSteer library:

    Wouldn't it still be a somewhat neat idea to try out how well that interfaces with your work? Somebody might already use the OpenSteer lib anyway, making it easier to adapt your pathfinding.

    Just one more angle to think about... :)

  6. I'm familiar with steering behaviors; many of my games (both prototypes and some which have seen daylight) have used them. Actually my point was that Reynolds' reasoning about avoiding obstacles is very similar to VOs. I really like Reynolds' practical solutions to steering behaviors, and that just heightens my belief that there is something worth considering in VOs (even if I'm not very happy about all the aspects yet).

    I think steering behaviors are awesome for getting something up and running quickly, but once you start to have problems they are really hard to control. Usually the first problem is that the characters get stuck in some weird corner case which looks like it should be trivial to solve (usually it is not). Then you start to dig deeper, and next you notice that you are neck deep in ever more complicated ways to solve the problem of how to combine multiple forces.

    I think the great misconception there is that everything that can be expressed as a force is equal and can be combined in some fashion. Programmers are fond of this kind of thinking.

    Even if you can walk and chew bubblegum at the same time, it does not mean that you should use one system to process both of them. Or else... you get that bubblegum stuck on the bottom of your shoe! (Wonderful deduction! :)

    Now if you split the problem into several steps, where each feeds the next, you might get much further. For example, choosing a direction to move is one step. It may be that you want to move towards the next waypoint (defined by a higher level system which does global planning) or just avoid a wall. This defines the goal of the agent and pretty much also defines its overall behavior.

    Next should be local planning, which makes sure that you get as close to your chosen direction as possible without colliding with other characters. This is a separate problem and should be handled with a different type of system. It can be solved using something like VOs or nearness diagrams or what not.

    A simple hierarchy of things that pass directions and requests to the system below. Easy to implement, easy to debug, easy to hack, and easy to keep clean and lean.

    That is what I do not like about steering behaviors: mixing everything together. If I order a vodka martini, it is shaken, not stirred, olive on the side, and definitely not all mixed up together in a blender ;)

    (enough of these silly analogies...)
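The hierarchy described above could be sketched roughly like this; `is_admissible` stands in for whatever collision test the local planner uses (VOs, nearness diagrams, ...), and the names are made up for illustration:

```python
def local_plan(v_desired, candidates, is_admissible):
    """Local planning stage: from sampled candidate velocities, pick the
    admissible one closest to the desired velocity handed down by the
    higher-level, direction-choosing stage."""
    best, best_d2 = None, float("inf")
    for v in candidates:
        if not is_admissible(v):
            continue
        d2 = (v[0] - v_desired[0]) ** 2 + (v[1] - v_desired[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = v, d2
    return best  # None means: no safe velocity, so slow down or replan
```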

  7. Hey, thanks for the extra insight. I haven't done much of this kind of thing, so it is quite interesting reading. Now that you've explained it, it surely does sound like the local planning should still be a level higher than the low level steering...

  8. Huh! It's "old Craig Reynolds (Mr. Boids)". I was reminded of this paper (informal notes for a tutorial/course at SIGGRAPH 1988) because someone asked for a copy of it at ResearchGate. Then out of curiosity I did a web search to see where else it might have been mentioned. Thanks to Mikko Mononen (Mr. Duck) for this. We corresponded in 2013 about Recast and Unity, then met in person at SIGGRAPH (in Vancouver?).