Sunday, April 30, 2017

Lulu + ReportLab = unembedded fonts, by default - FIXED

I use ReportLab's PDFGen Python library to generate PDFs. I use it for a lot of different stuff, including my line of Kakuro books. It's not tricky to get it to do what I want, most of the time. When I generated the PDFs for those Kakuro books, I submitted them to lulu.com and immediately got an error complaining that my books didn't properly embed the Helvetica font.

I did vague digging around and some sort of hacking (so many years ago, I don't recall the details), and got something gross that didn't have a reference to Helvetica.

I also shelled out money to Adobe for their commercial PDF tool, and massaged the output.

Neither of these solutions was really satisfying, but they each seemed to work at various times.

Fast forward to today, and I wanted to make a notebook of hex grid paper:


I uploaded the PDF of my graph paper, and got that same error again. To be clear, the entire book was graph paper - no fonts were being used. I poked around in the PDFGen documentation (there's a user's tutorial, not so much a reference manual), and found some information that was useful for importing TTF files and embedding them, but nothing for an unused Helvetica font.

I poked around inside the source again, and discovered the initialFontName and initialFontSize arguments to the Canvas constructor, so now my canvas creation looks like this:

import os

from reportlab.lib.pagesizes import letter
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.pdfgen import canvas

pageSize = letter  # my actual script computes this elsewhere

ttfFile = os.path.join('.', 'UniversalisADFStd-Regular.ttf')
pdfmetrics.registerFont(TTFont("Universalis", ttfFile))
c = canvas.Canvas("hex_book.pdf", initialFontName='Universalis',
                  initialFontSize=24, pagesize=pageSize)
c.setFont('Universalis', 24)


And there's no dangling use of Helvetica, and Lulu's happy, and I'm happy. Maybe this is useful to you, maybe it'll be useful to me, next time I run into this particular weirdness.

Saturday, April 8, 2017

Hamiltonian Cubes

A little while ago, as an exercise in procedural content generation, I made a stupid little maze generation script, like this.

A buddy of mine asked how configurable the algorithm was, and I admitted that it didn't really have any knobs to adjust.

My buddy was looking for what I'd call a "labyrinth" generator - a complete tour of every location in the space. Also, "Hamiltonian path" sounds cool, because everybody's crazy about Hamilton these days, right?

In particular, he wanted a complete tour of all of the squares on the surface of a 6x6x6 cube.

I adjusted my script a little bit, and came up with a way to randomly generate a path around the cube - one that didn't visit all the locations:
which is a step in the right direction - getting the edge crossings working correctly is an accomplishment on its own. I considered making modifications to a partial solution, like the rewriting that goes on in L-systems, but I wasn't convinced that would work, so I went for the more direct approach of creating a path and validating certain constraints as I went.

In particular, I found Frank Rubin's paper, "A Search Procedure for Hamilton Paths and Circuits", which described a bunch of constraints on potential passageways (edges), and ways to determine if the solution so far is already inconsistent. For example, if there's any location that can't be reached from either end of the path so far, you've painted yourself into a corner. (Or maybe the opposite - painted yourself out of reach of a corner?)

It also identifies other kill conditions, like a location with only one passageway available - that's a dead end, and no good.

Also, passageways can be "required": when a node is down to two available passageways, both of them have to be used.
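In case it helps to see this concretely, here's roughly the shape of the consistency check - a simplified sketch (the adjacency-dict representation and the `reachable`/`consistent` names are mine, not Rubin's), covering the reachability test and the dead-end test:

```python
from collections import deque

def reachable(adj, start, banned):
    """Breadth-first search over the graph, skipping banned (already-used) nodes."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in banned and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def consistent(adj, path, goal):
    """Return False if the partial path has already painted itself into a corner."""
    visited = set(path)
    interior = visited - {path[-1]}          # nodes we can no longer pass through
    remaining = set(adj) - visited
    if not remaining:
        return True
    # Every remaining location must still be reachable from the path's head.
    if remaining - reachable(adj, path[-1], interior):
        return False
    # A remaining node with only one open passageway is a dead end,
    # unless it's the goal (the path is allowed to terminate there).
    for node in remaining:
        open_edges = sum(1 for nbr in adj[node] if nbr not in interior)
        if open_edges < 2 and node != goal:
            return False
    return True
```

The "required passageway" rule would be a third pass over the same counts: any remaining node down to exactly two open edges forces both of them.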

This led to some solutions, and I presented the first of them to my friend yesterday morning, along with the script, so he could generate more on his own.


My friend discovered that the algorithm would find solutions either quickly or very slowly - like 25% of the time, a solution would be found within one minute, but 75% of the time, it'd take longer than three minutes. I haven't done any real data collection, but there seems to be an optimization problem to be had: pick a timeout value t, which will yield a solution in 1/n(t) of the cases, and then iterate until you get a solution.

I implemented a variation on that, with exponential backoff - I first try a small timeout, and if I fail to get a solution in that time, I multiply the timeout by a constant value and start over.
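A minimal sketch of that restart loop (the `attempt` callback and its deadline argument are inventions for this sketch, not exactly what's in my script):

```python
import time

def solve_with_restarts(attempt, initial_timeout=1.0, multiplier=2.0):
    """Run a randomized solver repeatedly, growing the time budget after each failure.

    `attempt(deadline)` is assumed to return a solution, or None if it ran
    out of time before finding one.
    """
    timeout = initial_timeout
    while True:
        solution = attempt(time.monotonic() + timeout)
        if solution is not None:
            return solution
        timeout *= multiplier   # give the next attempt a longer leash
```

The nice property is that the total time wasted on failed attempts is bounded by a constant factor of the final successful attempt's budget.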

I've got extra optimizations that I could put in, but really, the point of the project has been accomplished, my "customer" is happy, and I should really go to work on other projects.

Friday, March 3, 2017

GDC 2017 Part I : AI Summit (plus and minus)

It's been just over a year now since I leapt back into being a game developer for money (after a six year hiatus where I was letting non-game stuff pay the bills, but still doing games for fun). That lines up nicely with the Game Developers Conference (GDC), which has historically been a huge source of inspiration for me. Networking is good, but I'm bad at it; seeing San Francisco and family is nice, but I can do that at other times. Friends and colleagues suggest that watching the presentations after the fact is just as good, but I don't agree.

This was the first year I attended the AI summit, an extra two days focused on AI topics. As it happens, there was a bunch of other stuff going on for those two days, and my pass allowed me to browse, so I did.

This is a quick rundown of what I saw and did up to the main conference, which will be Part II. Maybe I'll even make that a link.

Day -1: Saturday

Flew down at crazy-early-o-clock, probably leaving the house later than I should have. Met up with my sister and brother in law, saw some Monet, got chastised for standing closer than 18 inches from the Monet. Went to the exploratorium. Science is weird.

Day 0: Sunday

Had Nepalese food with said sister and brother in law. Went to the San Francisco Museum of Modern Art.

Went to an AI Programmers' mixer thing. Bumped into Kate Compton, who gave me one of her Tracery zine/manuals, a library for generative grammars, e.g. for writing a TwitterBot. This stuck in my brain for the remainder of GDC. (spoiler)

Day 1: Monday

Ok, so if you're skimming past the preliminary bits, this is where we get to actual sessions.

Crowd AI in Watch Dogs 2

I would have titled this "Bystander AI". WD2 is an open world game, where stuff happens on the streets of San Francisco. They implemented an architecture where events get posted (playing guitar, posing for a photo), and other NPCs trigger on those events, which can cascade into emergent crazy street scenes. An example was shown of an NPC proposing to another NPC, and then they posed for a selfie, which got photobombed, which led to a furious fight.
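Here's a toy sketch of what I understood the architecture to be - definitely not Ubisoft's actual code, just a little event board where NPC reactions can post follow-up events and cascade (the event names echo the example from the talk):

```python
class EventBoard:
    """Toy crowd-event system: NPCs post events, other NPCs react to them."""
    def __init__(self):
        self.reactions = {}   # event kind -> list of handler callbacks
        self.log = []

    def on(self, kind, handler):
        self.reactions.setdefault(kind, []).append(handler)

    def post(self, kind, actor):
        self.log.append((kind, actor))
        for handler in self.reactions.get(kind, []):
            handler(self, actor)   # handlers may post follow-up events, cascading

board = EventBoard()
# A proposal draws a selfie, which draws a photobomb, like the street scene shown.
board.on("proposal", lambda b, who: b.post("selfie", who))
board.on("selfie", lambda b, who: b.post("photobomb", "passerby"))
board.post("proposal", "npc_1")
```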

Behavior Tree Arborist

A year ago, I was trying to decide the AI architecture necessary for BattleTech, the game I'm working on at the day job. Since then, I've built a behavior tree / influence map / blackboard system, which is probably not terribly different from other systems out there. I was looking forward to this session (three mini-sessions) to see if there were best practices that I should have adopted a year ago. Turns out, not a whole lot of bad decisions, so that's cool.

The first mini-session by Mika Vehkala talked about flipping around the idea of decorator nodes and making node decorations - things that hang off of nodes to make better composition of reusable nodes. That's an interesting idea, and I could imagine it condensing the presentation of my behavior trees. I'm not sure if it would make things any more reusable, but more concise expression is good.

There were a few decorations presented that would make more advanced control flows than I currently use. Again, not sure if that would be useful for my current project.

Also mentioned was splitting behavior trees, which just makes sense. My current one is getting a little bulky. I'm the only person writing the code, so there isn't any contention for locking the file for writing, but it'd be worth considering for later. The practice suggested was to author the trees with references, but at compile/load time, merge them into a single tree. That's interesting, but I think that one might get some value out of dynamic tree references. Extra cost, complexity? Sure.
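A tiny sketch of the merge-at-load-time idea, with trees as nested dicts and `{'ref': name}` placeholder nodes - this representation is made up for illustration, not anything from the talk:

```python
def merge_refs(node, library):
    """Replace {'ref': name} placeholder nodes with the named subtrees,
    producing one flattened tree at compile/load time."""
    if isinstance(node, dict) and "ref" in node:
        return merge_refs(library[node["ref"]], library)
    if isinstance(node, dict):
        return {key: merge_refs(value, library) for key, value in node.items()}
    if isinstance(node, list):
        return [merge_refs(child, library) for child in node]
    return node

library = {"attack": {"sequence": ["aim", "fire"]}}
tree = {"selector": [{"ref": "attack"}, "flee"]}
flattened = merge_refs(tree, library)
```

The dynamic version would keep the `ref` node in place and resolve it each tick instead, which is where the extra cost and complexity come in.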

Also mentioned was dynamic behavior tree references, slotting in specific behaviors at runtime. That seems promising, but again, nothing for my current project.


The second mini-session by Bobby Anguelov was a stern instruction to know the relative merits of Behavior Trees and Finite State Machines. If you're having to jump out of your tree to replan, you might be doing it wrong.

I don't know if it was this mini-session, but along the way, I began to think that my Behavior Tree work might actually be accomplished as well or better by a "Decision Tree", which I think is closer to what I actually use.

The final sub-talk was by Ben Weber, who talked about a few additional node types that he found useful (spawn goal, working memory modifier, success test (wait until conditions are true)) and offered a few patterns he found useful (daemon processes, managers, message passing, behavior locking, and unit subtasks). I vaguely recall these being interesting, but not immediately relevant - I'd like to go back and review the presentation to see if it sinks in better.

The Simplest AI Trick in the Book

Another collection of mini-talks, this one gave small techniques to get great effects.

Steve Rabin advocated for making the AI believable, "You must sell the AI". This is largely a design issue, but the AI engineer is probably the best person to advocate for things that make the AI look less like a robot.

David Churchill gave a quick little stack guard implementation that caught a buffer overrun in some of his code.

Mike Lewis suggested adding a button to freeze output for cases where the debug spew is flying too fast. If you've ever hit Ctrl-S to pause a stdio program, that's the idea.

Xavier (I missed his last name) talked about adding a dynamic proxy object for events that happen quickly and your AI should react to. I might call this a "bread crumb", but I think he had another name for it.

Brian Schwab suggested sitting quietly behind a tester and see how they play the game.


Predictable Projectiles

Again, I misunderstood the point of this talk - I thought it was going to be about stable math for projectile simulation, so that you could network your games without having to adjust the simulations. Instead, it was a talk by Chris Stark about "Orcs Must Die!", and how they used linear and ballistic projectiles to shoot at the player character, leading the player in both cases. Lots of math was presented. I've wanted to write some code for leading the target for linear projectiles before, but now I also want to write a ballistic solver.
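Since I've wanted to write the linear case for a while, here's a sketch of the standard intercept math (textbook stuff, not lifted from the talk's slides): solve |P + Vt| = st for the earliest positive t, where P is the target's offset from the shooter, V its velocity, and s the projectile speed.

```python
import math

def lead_target(shooter, target_pos, target_vel, projectile_speed):
    """Aim point for a constant-speed projectile against a linearly moving target.

    Expands |P + V*t| = s*t into a quadratic in t and takes the earliest
    positive root. Returns None when no intercept exists.
    """
    px, py = target_pos[0] - shooter[0], target_pos[1] - shooter[1]
    vx, vy = target_vel
    # (vx^2 + vy^2 - s^2) t^2 + 2 (px vx + py vy) t + (px^2 + py^2) = 0
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-9:                      # projectile speed matches target speed
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                    # target outruns the projectile
        root = math.sqrt(disc)
        times = [t for t in ((-b - root) / (2 * a), (-b + root) / (2 * a)) if t > 0]
        if not times:
            return None
        t = min(times)
    if t <= 0:
        return None
    return (target_pos[0] + vx * t, target_pos[1] + vy * t)
```

The ballistic version adds gravity to the left-hand side, which bumps the quadratic up to a quartic in t.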


ELO, TrueSkill, or write your own

ELO is a ranking system, designed to give chess players a score. I write it as ELO because it feels like it wants to be an acronym; it's actually named after a guy, Arpad Elo. I had read the Wikipedia article on ELO a while ago, and this talk didn't give me a whole lot of new information. One thing that I did get was that Mario Izquierdo had used ELO (or some variant) to score user-generated levels, which seems relevant to some of my PCG projects.
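For reference, the standard Elo update is tiny - this is the textbook formula, not anything specific from the talk. Scoring user-generated levels would presumably treat "player beats level" as a match between the player's rating and the level's rating:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """One Elo update. score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

An upset against a much higher-rated opponent moves both ratings a lot; a predictable result barely moves them.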

TrueSkill is Microsoft's proprietary version of this, which extends to supporting team play. Glicko is an open source system that's similar, but still only 1v1.


Kate Compton's PCG Talk

Didn't manage to make it to this, but she's really good about posting resources, so I've managed to collect a lot of what was presented, and do my own homework. I'll probably come back to edit in some of those links.


Deep Learning Math

Not learning how to do math using a neural network, just the calculations going on inside the latest iteration on neural nets.

A phrase Alex Champandard presented was "differentiable computing", which places this stuff in a good context - we're trying to compute a value, the process for doing that is built from continuous functions, and we can adjust those functions to get a better approximation of the function we're trying to represent.

A surprising bit of trivia for me is that I had always thought of neural networks as simple fully connected layer affairs - a few input nodes, and then those connecting to layer 1, which fully connected to layer 2, which fully connected to layer 3, until you got to the output nodes. Turns out, these days, nodes are clustered, with one node on layer 0 having a link to a node on layer 5, or whatever - it's still a directed acyclic graph (right?), but it's not as uniform as I was taught so many years ago.


Harmonic Functions and Mean Value Coordinates

I like trying to fit math ideas into my head. This talk didn't try to give immediate practical tools, but present some tools that could be used for some mesh analysis tasks. I'm not entirely sure what those tasks are, which made the talk even more abstract. But if I was writing a mesh unwrapper for assigning texture coordinates, I think I'd really want to know about this stuff.

And hey, I won't use the phrase "discretize the mesh laplacian" elsewhere in this writeup, so this is my one opportunity.


B-Rep for Triangle Meshes

I remember Gino van den Bergen as having given a talk many years ago, I think about collision detection and measuring penetration depth for a variety of interesting geometric primitives. In this talk, he presented a pretty optimized representation of the familiar half-edge/winged-edge representation, optimized for triangle meshes, especially for dynamically cut meshes for "Farming Simulator". I won't get into the code here, but it's some pretty tight stuff, and if I needed to dynamically modify meshes, I should revisit his stuff.


Also

I spent some time between sessions using PyTracery to generate "LifePaths" for characters in a post-apocalyptic road warrior setting. I want to adjust the LifePath sim logic to have different probabilities for different events, and probably do an event-based sim, rather than a simple grammar generator. I'd also like to try to drive things backwards, so I could say "I need one crime boss and 10 street thugs - go!", but maybe the best way to go about that is to generate NPCs ahead of time and store them in a database for later retrieval.
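To give a flavor of the generative-grammar approach, here's a minimal Tracery-ish expander - my own sketch, not PyTracery's actual API, and the grammar contents are made up for illustration:

```python
import random
import re

def expand(grammar, symbol="origin", rng=random):
    """Expand #symbol# placeholders by picking random alternatives, Tracery-style."""
    text = rng.choice(grammar[symbol])
    while True:
        match = re.search(r"#(\w+)#", text)
        if match is None:
            return text
        text = text[:match.start()] + expand(grammar, match.group(1), rng) + text[match.end():]

lifepath = {
    "origin": ["Born in #place#, became a #job#."],
    "place": ["the rust flats", "a convoy camp", "the glass crater"],
    "job": ["scavenger", "road warden", "water broker"],
}
print(expand(lifepath))
```

The event-based sim I'm imagining would replace the uniform `rng.choice` with weighted picks that depend on the character's state so far.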

Day 2 - Tuesday

Narrative Innovation Showcase

Several small talks (again!) about games not yet released.

Francisco Gonzalez presented "Lamplight City", a detective game where you're not railroaded into being a super-detective; you can be bad at your job, and the game will react, but not stop you.

Cara Ellison presented "Where the Water Tastes Like Wine", a roadtrip interactive fiction story, which tackled having different characters sound different by having different writers for different characters.

Greg Heffernan showed "The Norwood Suite", a graphic adventure game with a strong music theme. When characters speak, their speech bubbles get populated with words, and when each word arrives, there's a note from a distinctive instrument. Like taking the trombone wah wah sounds from the Charlie Brown specials and passing them through "Peter and the Wolf". Grampa is an oboe.

Emily Short showed an "interrogation demo", where you're grilling a robot about a murder it witnessed (hello, Susan Calvin). There was a lot of interesting data in the knowledge graph, and the conversation was a means to explore that graph. The graph also had information about "narrative beats", so over time, the robot guided the conversation.

Navid Khonsari presented "Blindfold", a VR verite experience that used nodding and shaking of the head as the only user input while putting the player in a room with a member of a violent (terrorist?) faction.

Procedural Content Shotgun

Mitu Khandaker-Kokoris talked about the spectrum between agent driven stories and story driven agents.

Tanya Short talked about "maximizing the impact of generated personalities", which was recently on Gamasutra.

Tarn Adams of "Dwarf Fortress" talked about using personality traits to generate content. I didn't fully follow what he was saying, but it seemed really interesting. I want to take another run at it. There was some discussion of NPCs creating artifacts (statues, books) based on events in the game world, then those artifacts affecting NPCs' later activities. Also, something about history being an allegory, which can provide structure for the NPCs. A lot to unpack.

Zach Aikman talked about using Cellular Automata and Hilbert Curves to create tiles and mazes for "Galak-Z". This was a very short version of a longer talk given at Unite, previously.

[somebody] talked about contextual barks, based on Elan Ruskin's "Left 4 Dead" 2012 GDC presentation, which I should watch.

Luiz Kruel talked about a procedurally generated FPS

Tyler Coleman talked about things to do with your random seeds, maybe seeding some stuff off of the player ID, so it would never change for that player. I still don't buy that that's really interesting, but maybe in a social game.
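What I took the suggestion to mean, as a sketch (the function and feature names here are hypothetical):

```python
import hashlib
import random

def player_rng(player_id, feature):
    """Deterministic RNG: the same player always sees the same roll for a feature."""
    digest = hashlib.sha256(f"{player_id}:{feature}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

# The same player always gets the same shop layout; other players get their own.
layout = player_rng("player_42", "shop_layout").sample(range(100), 3)
```

Hashing the player ID together with a feature name keeps the per-feature streams independent of each other.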


Crackpot AI Dev Talks

The premise of this session was off-the-wall ideas for techniques that might just work, or be interesting enough to pursue anyway.

Zach Aikman talked about synesthesia and generating music based on a color palette

Tyler Coleman proposed a layered AI memory, including a socially shared layer for the longest term memories

Mitu Khandaker-Kokoris demoed an AI bot that assisted a Massively Multiplayer game player who was being harassed in-game.

Luiz Kruel talked about a procedurally generated FPS

Rez talked about longform improv as a model for collaborative storytelling. An example of something not quite working, was playing Oblivion as a thief and ignoring the scripted story, just playing the systemic game. What if the game saw that, and served scripted story based on what the user did?



Stopping AI fires before they start

Andrea Schiel talked about a few cognitive biases that lead to antipatterns in AI development:
- because it's there: using the wrong engine because it's what you have
- "beware the goalie playing out": using idioms from the last system inappropriate to the current system
- "too many heroes": overlapping roles, duplicated code doing similar functionality (maybe simultaneously)


Can you see me now?

Eric Martel talked about sensor construction for AI NPCs.
One big takeaway is to put sensor locations on non-rendered bones, which allows the animators to place the eye location deliberately, rather than just putting it on the head bone, which is noisy and might not be synced the way you want. (If the sensor follows the animation, players get the ability to duck out of its way.)


Bringing Hell to life with full-body animations in Doom

Lots of straightforward techniques, from delta correction to get jumps to land at the right spot, to focus tracking, which uses IK to get the NPC to look at a target.


Indie Soapbox

Yet another rapid-fire session of microsessions

Brandon Sheffield advocated embracing your own sense of taste. Make a game that works for you, and go with it. "People identify with what they like more than what they do, from foodies to film buffs". His game is "Oh, Deer", a pseudo-3d driving game.

Tanya Short told us that self-care is important - don't work all the time; it's not as productive, and you burn out.

Jarryd Huntley talked about indie rock bands, and how they're just like us.

Sadia Bashir talked about the importance of having a good process when making games

Marben Exposito told us that "People fuckin' love surprises", and showed a little bit of "Showering With Your Dad Simulator 2015".

Gemma Thompson talked about "owning your space", pushing for a broader notion of what an indie game developer is (not just Jonathan Blow in a coffee shop on his laptop).

Jerry Belich called for more industry people to work with academic people to share knowledge with the new crop of kids

Brie Code talked about techniques for public speaking

Colm Larkin talked about sharing the game early, including a tweet of the one-sentence game design (elevator pitch) for "Guild of Dungeoneering" when it was just a game jam idea

Jane Ng talked about thinking about your game as a product, and sometimes thinking of the potential player of the game, not just the player of the game. Product design is a huge thing.



To Be Continued

That's a lot of really short snippets, I'm frankly exhausted writing even that much. The next post [TODO link] will have 3 days of sessions, some of which will be as spartan as the above, maybe some will be fleshed out more.