I've been updating my pathfinding articles with interactive diagrams. While doing so, I realized I don't like the way I've presented the information. Instead of just updating the pages in place, I'm writing new tutorials and will eventually figure out how to stitch everything together.

A big one I've been working on is the introduction to A*. I started writing these notes in 1997 and have updated them over the years. The past few months I've been working on a replacement for that page. The new page compares Breadth First Search, Dijkstra's Algorithm, Greedy Best-First Search, and A*. Differences from the old page:

  • All the diagrams are interactive (of course)
  • I mention non-grids a little more (still not enough)
  • I use contour lines to compare the algorithms (not sure if this will make sense to people)
  • I show working Python code for each of the algorithms
  • I have supplemental material that includes the helper functions and classes needed for the search algorithms
  • I give some guidance on which algorithm to use

There are lots of little things (smoother animations, touch screen support, better code highlighting, better contour line visualization) that remain, but I decided to publish it 90% complete instead of delaying it to polish all those little details. Take a look at the new introduction and let me know what you think. Is it understandable? Do the contour lines help? Is it enough to help you implement A* yourself?

I'm working on a tutorial showing how Breadth First Search, Dijkstra's Algorithm, Greedy Best-First Search, and A* are related. I'm focusing on keeping the code simple and easy to understand.

While tracking down a bug in my Dijkstra's Algorithm code, I realized I had forgotten to implement the "reprioritize" case. That led me to wonder why my code seemed to work without handling that case. It turns out that case is never triggered by the graphs I'm searching over. Let me explore this …
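To make that concrete, here's a minimal sketch of Dijkstra's Algorithm in Python (not the tutorial's actual code; the graph object with its neighbors() and cost() methods is a hypothetical stand-in). Python's heapq has no decrease-key operation, so this sketch "reprioritizes" by pushing a duplicate heap entry and skipping stale entries when they come off the heap:

    import heapq

    def dijkstra(graph, start, goal):
        frontier = [(0, start)]          # heap of (cost, node); nodes must be
        came_from = {start: None}        # comparable to break cost ties
        cost_so_far = {start: 0}
        while frontier:
            cost, current = heapq.heappop(frontier)
            if cost > cost_so_far[current]:
                continue                 # stale entry; node was reprioritized
            if current == goal:
                break
            for next_node in graph.neighbors(current):
                new_cost = cost_so_far[current] + graph.cost(current, next_node)
                # The second test below is the "reprioritize" case:
                # a node we already reached is found via a cheaper path.
                if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                    cost_so_far[next_node] = new_cost
                    came_from[next_node] = current
                    heapq.heappush(frontier, (new_cost, next_node))
        return came_from, cost_so_far

On some graphs that second test never fires: with uniform edge costs, for example, the first path to reach a node is already the cheapest one, so forgetting the reprioritize case doesn't change the results.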

(Note: I'm going to try writing more unpolished things on this blog and leave the polished articles to redblobgames.com.)

The utility functions I learned in economics are history-agnostic. They look at the current state of the world and calculate a “utility”. For example, you might say the utility of money is the logarithm of the amount of money you have.
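A minimal sketch of what "history-agnostic" means, in Python:

    import math

    def utility(wealth):
        # Depends only on the current state, not on how you got there.
        return math.log(wealth)

    # $150 is worth the same whether you fell to it from $200
    # or rose to it from $100:
    print(utility(150))   # ~5.01 either way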

Prospect Theory says that this view of the world does not match how people actually behave. Instead, the history of how you got to a certain point matters in how you value it. There's an asymmetry around a “reference point”:

[Figure: prospect theory value function, showing the asymmetry around the reference point (credit: Wikipedia, Prospect Theory page)]
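The usual way to write down that curve is a value function applied to gains and losses relative to the reference point. Here's a sketch using the parameter estimates from Tversky and Kahneman's 1992 paper (alpha ≈ 0.88, lambda ≈ 2.25); treat the exact numbers as illustrative:

    def value(x, alpha=0.88, lam=2.25):
        # x is the change relative to the reference point:
        # concave for gains, convex and steeper (loss aversion) for losses.
        if x >= 0:
            return x ** alpha
        return -lam * (-x) ** alpha

    print(value(100))    # ~57.5
    print(value(-100))   # ~-129.5: the loss hurts ~2.25x as much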

Consider these scenarios:

  • You get $200 and have a 90% chance of losing $100 of it.
  • You get $100 and have a 10% chance of gaining an additional $100.

These are mathematically equivalent: both end with $100 with 90% probability and $200 with 10% probability. However, they are not equivalent to humans, because humans consider not only the final result but how it was reached. Having $200 and then losing $100 feels different from having just the $100 in the first place. Even though the outcomes are the same, the reference point is different, so it feels different. Prospect theory takes this into account, at least somewhat.
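For illustration, here's how the two scenarios come out under that value function, evaluated from each scenario's own reference point (this ignores prospect theory's probability weighting, which would shift the numbers but not the asymmetry):

    def value(x, alpha=0.88, lam=2.25):
        # Same value function as in the previous sketch.
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    # Scenario 1: reference point $200; 90% chance of a -$100 change.
    print(0.9 * value(-100) + 0.1 * value(0))   # ~-116.5

    # Scenario 2: reference point $100; 10% chance of a +$100 change.
    print(0.9 * value(0) + 0.1 * value(100))    # ~+5.8

The first framing evaluates as a likely loss and the second as a possible gain, even though the final outcomes are identical.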

In game AI, I've only used regular utility functions. However, it seems reasonable to try using prospect theory in some way. Even prospect theory isn't complete; there are more human behaviors in decision making and valuation that it doesn't account for. Maybe fuzzy-trace theory (FTT) or something else would cover those. But sometimes you want to balance simplicity against comprehensiveness. In any case, it's something I'll want to ponder the next time I'm writing AI evaluation functions for NPCs.
