A little while ago, as an exercise in procedural content generation, I made a stupid little maze generation script, like this.
A buddy of mine asked how configurable the algorithm was, and I admitted that it didn't really have any knobs to adjust.
My buddy was looking for what I'd call a "labyrinth" generator - a single path that makes a complete tour of every location in the space, which is to say a Hamiltonian path. Also, "Hamiltonian path" sounds cool, because everybody's crazy about Hamilton these days, right?
In particular, he wanted a complete tour of all of the squares on the surface of a 6x6x6 cube.
I adjusted my script a little bit and got it randomly generating a path around the cube - one that didn't visit all the locations:
which is a step in the right direction - getting the edge crossings working correctly is an accomplishment in itself. I considered making modifications to a partial solution, like the rewriting that goes on in L-systems, but I wasn't convinced that would work, so I went for the more direct approach of building up a path and validating certain constraints as I went.
In particular, I found Frank Rubin's paper, "A Search Procedure for Hamilton Paths and Circuits," which describes a bunch of constraints on potential passageways (edges), and ways to determine whether the solution so far is already inconsistent. For example, if there's any location that can't be reached from the start or from the end point, you've painted yourself into a corner. (Or maybe the opposite - painted yourself out of reach of a corner?)
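That reachability check can be sketched as a pair of breadth-first searches, one from each end of the partial path. This is a minimal illustration, not the actual script - the function names and the adjacency-dict representation (`adj` maps each location to the locations still connected by candidate passageways) are my own assumptions here:

```python
from collections import deque

def reachable_from(adj, source):
    # Plain BFS over the remaining candidate passageways.
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def painted_into_corner(adj, start, end, unvisited):
    # The partial solution is inconsistent if some unvisited location
    # can't be reached from the start or from the end of the path so far.
    from_start = reachable_from(adj, start)
    from_end = reachable_from(adj, end)
    return any(v not in from_start or v not in from_end for v in unvisited)
```

Detecting this early lets the search abandon a doomed branch instead of exhausting every extension of it.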
It also identifies other kill conditions: a location with only one passageway available is a dead end, and no good.
Passageways can also be "required" - if a node is down to exactly two passageways incident to it, both of them have to be used.
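Those two local rules can be applied together in a single pass over the locations. Again, this is a hypothetical sketch in my own notation, and it glosses over the special case of the path's two endpoints, which only need one passageway each:

```python
def classify_passageways(adj, path_nodes):
    # Rubin-style local rules, applied to every location not yet on the path:
    #   fewer than 2 available passageways -> dead end, prune this branch
    #   exactly 2 available passageways   -> both passageways are required
    # Returns (consistent, set_of_required_edges).
    required = set()
    for node, nbrs in adj.items():
        if node in path_nodes:
            continue
        if len(nbrs) < 2:
            return False, set()
        if len(nbrs) == 2:
            for nbr in nbrs:
                required.add(frozenset((node, nbr)))
    return True, required
```

Required edges are useful in both directions: they must appear in any solution, and if a node ever accumulates three required passageways, that's another inconsistency to prune on.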
This led to some solutions, and I presented the first of them to my friend yesterday morning, along with the script, so he could generate more on his own.
My friend discovered that the algorithm would find solutions either quickly or very slowly - say, 25% of the time a solution would be found within one minute, but 75% of the time it'd take longer than three minutes. I haven't done any real data collection, but there seems to be an optimization problem to be had: pick a timeout value t that yields a solution in some fraction 1/n(t) of attempts, and restart until you get a solution.
I implemented a variation on that, with exponential backoff - I first try a small timeout, and if I fail to get a solution in that time, I multiply the timeout by a constant value and start over.
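The restart loop is small enough to show in full. This is a sketch of the idea rather than my actual implementation - `attempt(deadline)` stands in for one randomized run of the search, returning a solution or `None` if the deadline passes:

```python
import time

def solve_with_backoff(attempt, initial_timeout=1.0, factor=2.0, max_timeout=600.0):
    # Try the randomized search with a small timeout; on failure,
    # multiply the timeout by a constant and start over from scratch.
    timeout = initial_timeout
    while timeout <= max_timeout:
        deadline = time.monotonic() + timeout
        solution = attempt(deadline)
        if solution is not None:
            return solution
        timeout *= factor
    return None  # gave up entirely
```

With heavy-tailed search times like the ones described above, restarting a fresh random attempt is often cheaper than waiting out an unlucky one.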
I've got extra optimizations I could put in, but really, the point of the project has been accomplished, my "customer" is happy, and I should get to work on other projects.