Tuesday, March 23, 2010

Geometric vs. Sampling

Inspired by the relative success of the sampled HRVO thing, I thought I'd dust off my old ClearPath proto and see how it compares with the sampling-based method.

I have to admit that the recent ORCA explanation video added some fuel too. I think I finally got the ORCA thing after seeing that video.

Without further ado, here's the video.


There are a couple of tricks I added on top of ClearPath. I use a side bias similar to HRVO: based on the velocities and locations of the agents, one side of the VO is expanded. This helps the agent commit to one side and reduces the side-selection flickering.
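As a sketch of what I mean by the side bias (illustrative code, not the actual proto; the `side_bias` parameter is a made-up name): the VO seen from agent A is a cone toward B with half-angle asin(r/d), and the bias simply rotates one leg outward a bit.

```python
import math

def vo_cone_legs(p_rel, r_combined, side_bias=0.0):
    # p_rel: position of B relative to A; r_combined: sum of agent radii.
    # side_bias: extra angle (radians) added to one leg only, mimicking
    # the HRVO-style one-sided expansion of the cone.
    d = math.hypot(p_rel[0], p_rel[1])
    assert d > r_combined, "agents overlap, cone apex undefined"
    center = math.atan2(p_rel[1], p_rel[0])
    half = math.asin(r_combined / d)
    left = center + half + side_bias   # the expanded side
    right = center - half
    return left, right
```

With zero bias this is the plain symmetric VO cone; a small positive bias makes one side look "wider", so the nearest admissible velocity tends to fall consistently on the other side.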

Another trick I added is that I clamp the VO cone to a spike instead of the butt end they use in the ClearPath paper. This makes the avoidance much smoother, as can be seen in the head-to-head examples. It adds the much needed prediction Clodéric mentioned in the comments of my last post.

I used different time horizons for static and moving obstacles, which allows the agents to move close to the walls while still avoiding each other. This is quite important for good behavior in crowded locations.
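The time horizon enters as a truncation of the VO: a velocity is forbidden only if it leads to a collision within tau seconds, and using a smaller tau for walls lets agents shave closer to them. A minimal ray-vs-disc version of that test (a sketch, not the proto code):

```python
import math

def collides_within(v_rel, p_rel, r_combined, tau):
    # True if the relative velocity v_rel leads to a collision within tau
    # seconds: does the ray v_rel * t hit the disc of radius r_combined
    # centered at p_rel for some 0 < t <= tau?
    px, py = p_rel
    vx, vy = v_rel
    a = vx * vx + vy * vy
    b = -2.0 * (px * vx + py * vy)
    c = px * px + py * py - r_combined * r_combined
    if c <= 0.0:
        return True            # already overlapping
    if a == 0.0:
        return False           # not moving relative to each other
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False           # ray misses the disc entirely
    t_hit = (-b - math.sqrt(disc)) / (2.0 * a)  # first intersection time
    return 0.0 < t_hit <= tau
```

Running the same test with a large tau for agents and a small tau for static geometry reproduces the "hug the wall but dodge the people" behavior.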

The sudden movements are caused by thin sliver sections being opened and closed as the neighbours move.

Geometric vs. Sampled

In many cases the geometric methods are much smoother than any sampling method. I think their biggest drawback is that the result is binary: a velocity is either in or out. That is also their strength, since it makes the results accurate. But the binary result means it is possible to get into situations where there is no admissible velocity at all.

Another con of the geometric method is velocity selection. A common way is to simply move the velocity to the nearest point on the border of the velocity obstacle. In crowded cases this results in dead-locks, as can be seen in the video too. I'm pretty sure that if the floating-point results were perfectly accurate, the dead-locks would never resolve.
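For clarity, the nearest-point clamp I'm referring to is basically a projection of the desired velocity onto the closest boundary of the VO; for a cone leg it looks like this sketch, where the leg is a ray from the cone apex (names illustrative):

```python
def nearest_on_leg(v, apex, leg_dir):
    # Project velocity v onto the ray apex + t * leg_dir (leg_dir is unit
    # length, t >= 0): the nearest admissible velocity on that boundary.
    dx, dy = v[0] - apex[0], v[1] - apex[1]
    t = max(0.0, dx * leg_dir[0] + dy * leg_dir[1])
    return (apex[0] + t * leg_dir[0], apex[1] + t * leg_dir[1])
```

The dead-lock happens when two agents keep projecting onto boundaries that push them symmetrically against each other, so neither ever gains ground.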

You can see similar clogging to what's in my video in the ClearPath video too (at the point where they turn the speed up to 10x).

The sampled VO looks smoother too, especially around the tip of the cone. In practice this means that you are less likely to run into situations where there are no admissible velocities. Velocities that would collide with another agent get a high penalty, but sampling allows you to take the risk.

Sampling also allows you to combine multiple different weights when calculating the best velocity, which enables more interesting methods for smoothing out the velocity selection.
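For example, the selection can be a weighted sum of a desired-velocity term and a collision-penalty term; the weights and the `penalty_fn` here are made-up placeholders, just to show the shape of the idea:

```python
import math

def pick_velocity(candidates, desired, penalty_fn, w_des=1.0, w_pen=2.0):
    # Score each candidate velocity: distance from the desired velocity
    # plus a weighted collision penalty; the lowest score wins.
    def score(v):
        dist = math.hypot(v[0] - desired[0], v[1] - desired[1])
        return w_des * dist + w_pen * penalty_fn(v)
    return min(candidates, key=score)
```

Extra terms (smoothness versus the previous velocity, side preference, etc.) just become more weighted addends in the score, which is exactly the flexibility the binary geometric test lacks.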

I'd love to see an academic paper that compares the pros and cons of these two methods and draws an educated conclusion. Too bad most papers present their new method as the best thing since sliced bread and fail to address the cons properly.


The sampling method looks really good and smooth when I have 129x129 samples (like in the example picture above), but that many samples is totally impractical. I think sample counts around 200 start to become acceptable, and I'd love to have fewer. I will try to see if I can cook up some wacky stable adaptive version to reduce the sample count.

I think the geometric method could be improved quite a bit if I could figure out a better way to clamp the velocity to the boundary of the VO. The nearest point is evidently not good enough.

I do like both methods. I will continue trying to fix the cons of both as time permits. And I have a funny feeling that one of these methods might be good enough for triple-A quality agent movement. Well, that would still lack the animation bit, but that is another story.


  1. Mikko: The more I think about it, the more I think this is the wrong way to do agent locomotion. My problem is that individuals simply don't treat the world as a set of moving vehicles. They often negotiate a contract with other agents if they see that moving forward would actually cause both of them to stop moving. Your previous video actually showed some of that behavior (the funnel case at the end where some of the agents simply move out of the way).

    I think adding some super-simple personality modelling on top of these methods would solve some of the crazy edge cases where agents start clumping in a group. It would allow individual agents to essentially adapt themselves out of the steering problems. For instance imagine in the case where you have agents pathing to the other side of a circle, instead of the current "everyone circles round to the right" case, some of them simply stay on the periphery and walk on the outside of the group, while others make a direct line.

    This feels like the kind of problem you could actually try sampling the behaviour of real humans (by finding a local university and getting them to walk to the same kind of points and videoing the result). Maybe someday I'll try doing that with a group of students and film it for you!


  2. My thought process is: what can I take away to fix it, rather than what needs to be added to make it better ;)

    I hear your point, though, and I believe it is well justified too. There is really cool emerging research both at Gamma [1] and at IRISA [2] that is trying to find new solutions to crowd simulation based on data recorded from humans. I'm keeping a keen eye on this research.

    This actually led to quite an interesting insight the other day. The big problem with velocity selection is that we can easily find the velocities that are just super bad and physically impossible (i.e. with any derivative of the VOs), but selecting a velocity from among the admissible velocities is actually defining the behavior of your agent!

    That makes, for example, comparison between different methods impossible, since the results are subjective. There is always an example of someone stupid or drunk enough to compare against so that you can describe the motion as "human like".

    That is the reason I like the above mentioned research: they pitch their algorithms against plain cold data, and they put a lot of effort into making that data as good as possible.

    So that is sort of the stuff I'm aiming at on the distant horizon. In the meanwhile, I want to enable the community with better tools than currently exist.

    The currently accepted standard is "let's trace the polygon edge midpoints and add some forces to avoid the others". This is just unacceptable. I want to first come up with a solution that is robust and grounded in something we are already accustomed to, then pursue the greater goal.

    If the solution is too drastic and requires a lot of work to implement and integrate into existing production pipelines, no one will follow it and the sorry state of agent movement will remain.

    The problem I see with waiting (without any other good means of avoidance) is that obstacle avoidance then becomes a scheduling problem, which is not really easy either. Choosing when to stop and when to move again can become as complicated as joining the merry-go-round with the others. I did some experiments at some point with just stopping and waiting, and while it worked in some cases, most of the time it resulted in dead-locks.

    I'd love to see some research and a prototype on that topic, though!

    One thing that is visible in the Geometric Methods of Multi-agent Collision Avoidance video [3], and which supports your hypothesis, is that they actually use random desired movement speeds to solve some of the shortcomings of their avoidance method.

    I could go as far as using behavior to alter the speed in different kinds of situations. Desired speed, and changes in desired speed, already express quite a bit of behavior.

    One thing I really hate is how different disciplines have named this problem differently. Some call it multi-agent navigation, some call it virtual walkers, and what not. It makes Googling quite a frustrating job. I think "virtual walkers" might be most relevant to your work.

    [1] http://gamma.cs.unc.edu/RCAP/
    [2] http://www.irisa.fr/bunraku/Julien.Pettre/dataset1.html
    [3] http://www.youtube.com/watch?v=oleR0ovxgN8

  3. I smiled at your comment about finally getting ORCA. It took me some noodling as well, but once I got it and implemented it, I have been quite happy with it. I am using it for quadrupeds and am pleased with the results.

    You made a comment several weeks ago which is quite true: at best, collision techniques tell you impermissible velocities rather than good velocities, so CA needs to work in conjunction with some pathing/nav approach.


  4. Bruce, nice to hear that it works well.

    How did you implement the linear programming? I've never implemented one, and I could not find any simple pointers on it either. I will probably just clip a rectangle with the ORCA planes if I cannot figure out anything smarter.

    On a side note, I did some profiling of my sampled vs. geometric method, and the geometric method is over 2x faster than sampling with 150 samples. The way I imagine implementing ORCA, it will be at least another 2x faster, if not more.

    That should get me to my target, which is 20+ agents in less than 0.5ms.

  5. I used the LP algorithm on page 76 of Computational Geometry by de Berg et al. The one they use in the paper is actually later in the same chapter, but since I was doing a proof of concept I stuck with the simplest algorithm. The algorithm is pretty intuitive, once you get the intuition. The trick in the implementation, I think, is to have a really robust Boundary class that can tell you:

    1. Given a point and a boundary, which side of the boundary the point is on, and its distance.

    2. The intersection point between two boundaries.
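    My guess at those two primitives, assuming a boundary stored as a point plus a unit direction (a sketch, not Bruce's actual class):

```python
def side_distance(pt, bp, bdir):
    # Signed distance of pt from the boundary line through bp with unit
    # direction bdir; positive means pt is on the left of bdir.
    return bdir[0] * (pt[1] - bp[1]) - bdir[1] * (pt[0] - bp[0])

def intersect(p1, d1, p2, d2):
    # Intersection of the lines p1 + t*d1 and p2 + s*d2, or None if
    # (nearly) parallel.
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

    The robustness Bruce mentions mostly comes down to handling the near-parallel case and using a consistent sign convention for the side test.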

    One practical issue with ORCA is handling the case when two entities are inside the safety zone, i.e. |B - A| < radius_AB. I wound up clamping |B - A| to radius_AB + epsilon and that seemed to work OK. I also did the real boundary calculation rather than the ClearPath approximation.
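    That clamp could look something like this (my reading of Bruce's fix, with illustrative names):

```python
import math

def clamp_relative_position(p_rel, r_combined, eps=1e-3):
    # If the agents already overlap (|p_rel| < combined radius), push the
    # relative position out to r_combined + eps so the VO cone and the
    # ORCA plane stay well defined.
    d = math.hypot(p_rel[0], p_rel[1])
    if d >= r_combined:
        return p_rel
    if d < 1e-9:  # coincident centers: pick an arbitrary direction
        return (r_combined + eps, 0.0)
    s = (r_combined + eps) / d
    return (p_rel[0] * s, p_rel[1] * s)
```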

    On a different (and speculative) note, years ago (1995) I did a synthetic-vision-based collision avoidance scheme that worked surprisingly well. I had an autonomous animated dog that could "navigate" the DOOM environment (he could move around a level, go through openings, avoid walls, etc.). The idea was really simple: I rendered the scene from the dog's head and calculated the motion energy between successive images at each pixel (I used a 64x64 or 32x32 array, as I remember). The motion energy is just how much the color changed. To stay in the middle of a corridor, you have a simple control law that balances the motion energy in the right and left halves of the image (this is actually how bees do it in nature). I had special cases for handling approaching a wall head-on, and a few other special cases as I remember. It did require that surfaces be textured. This was all before GPUs etc.
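    That balancing control law could be sketched like this (a toy version with made-up names; positive output means turn right, away from the higher-energy side):

```python
def steer_from_motion_energy(prev_frame, curr_frame, gain=1.0):
    # Motion energy per pixel = absolute change between successive frames.
    # Turn away from the image half with more motion energy (the side
    # looming closer); frames are 2D lists of gray values.
    h, w = len(curr_frame), len(curr_frame[0])
    half = w // 2
    left = sum(abs(curr_frame[y][x] - prev_frame[y][x])
               for y in range(h) for x in range(half))
    right = sum(abs(curr_frame[y][x] - prev_frame[y][x])
                for y in range(h) for x in range(half, w))
    return gain * (left - right) / max(left + right, 1e-9)
```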

    I am not proposing this as a method necessarily, but rather to suggest that approaching CA as a vision problem might be a profitable direction to explore. I think this might be especially true when you are dealing with something like a pack or group, as opposed to a large crowd, where the fine details of the motion actually matter and where the analytical approximations of the characters' shapes start having visible artifacts. Just speculation.


  6. Thanks, Bruce! Looks like a book worth having. I did a quick hack yesterday where I just start with a rectangle and chip off each ORCA plane from that.
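    The chip-off step is basically clipping a convex polygon against one half-plane at a time, Sutherland-Hodgman style. A sketch, assuming each ORCA plane is stored as a point p and an outward normal n, with the admissible side being dot(x - p, n) <= 0 (that representation is my assumption):

```python
def clip_halfplane(poly, p, n):
    # Keep the part of convex polygon poly (list of (x, y) vertices) that
    # lies on the admissible side: dot(x - p, n) <= 0.
    out = []
    m = len(poly)
    for i in range(m):
        a, b = poly[i], poly[(i + 1) % m]
        da = (a[0] - p[0]) * n[0] + (a[1] - p[1]) * n[1]
        db = (b[0] - p[0]) * n[0] + (b[1] - p[1]) * n[1]
        if da <= 0.0:
            out.append(a)
        if (da <= 0.0) != (db <= 0.0):  # edge crosses the plane
            t = da / (da - db)
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out
```

    Starting from a big rectangle of reachable velocities and applying this once per ORCA plane leaves the admissible polygon (possibly empty, which is exactly the null-set case below).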

    One thing I noticed was that it becomes really important how you choose the planes. At first I just tried to use the polygon I created for the ClearPath proto, and while it worked with a couple of agents, I got really bad results when there were many agents. The planes started to flicker around and often created a null admissible set. I will try the smooth boundary next too.

    How do you handle the case when the admissible set is null? Or do you use 3D linear programming?

    I was thinking about testing a version that uses sampling together with the ORCA constraints. Sort of like rendering gradients using max blending mode, and then finding the local minima after that. This might allow combining other weights too.

    As for your side note, I think if you squint your eyes, my previous TOI sampling can be thought of as belonging to that category. Basically I was rendering a 1px-high "depth map" around the agent and steering based on that.