Tuesday, October 27, 2009

Loot and New Content

Back in the Olden Days, when we had to walk to school in the snow, uphill both ways, the only way to get awesome loot in WoW was to run Molten Core. If you didn't have 36+ friends that were all well-geared, attentive, and competent, your only chance at purples was world drops and auction housing. You could run UBRS or Strat or Scholo if you wanted to, but there wasn't really much reason; there were some nice pieces in there but gear in MC was far better. Yet you couldn't progress in MC but once a week, and you needed to be well-geared, attentive, and competent yourself.

Nowadays, loot is cake. All you need is 9 friends, and there's no trash to fight through so you don't even need a multi-hour commitment.

Better

Is the current system better? Well, what does 'better' mean? It's easier to get loot. You don't need the social structure now that was needed then. 10-mans are easier to organize than 40-mans, it's harder for a player to go AFK, it's easier to get into a guild that has 10 people that can raid on the same night at the same time, and more. The barrier to loot is lower, in that more people will be able to get this group together. Is that better?

You don't need to progress through MC then BWL then AQ20 to get to AQ40; now, run some heroics, build up some purples, then jump into a 10-man ToC group. Or, heck, some heroic 5-mans drop competitive purples. A new 10-man guild can move on to 10-man ToC fairly quickly, and then find a 25-man PUG. The time between hitting end-level and raiding the final dungeon is much lower. Is that better?

One of the reasons that initiation rituals remain in fraternities is that they make admission to the group that much harder and more stressful. We value that which was difficult to obtain. Downing Ragnaros was a serious effin task, especially before BWL was released. It was a badge of honor.

Where is that badge now? Is that relevant? Compared to 2005, nowadays many more people are seeing more content and improving their gear, without getting frustrated by organizational hurdles. Because more people get there, it's less exclusive.

Who Cares?

It doesn't seem to matter. If more people are getting more phat lewt, they're happier and having more fun. I might complain about more people reaching the "elite" end-game ranks -- but there's still a time factor. What distinguishes the top elite from the next group is when they achieved the rank, not if they got to that final boss.

I've talked to players doing 10-man normal ToC and they consider themselves up in the elite. They're very happy with their progression. They know they're not doing hard-mode, much less the 25-man version, but that doesn't seem to be a big deal. At least they're not stuck!

There were tons of players back in the first year of WoW that wanted to do MC, but couldn't, because they weren't in "the right guild." Even in that guild some players got left on the sidelines because their gear wasn't good enough. Now, those guys have somewhere to go. They're not stuck pugging MC and wiping on the first giant; they can do 5-man and 10-man content that continues to give them better loot.

Bias

I want to sit and bitch about how easy kids have it these days, but that just biases me against the current model. What makes a game fun is perceived mastery, and WoW has that.

The WoW end-game is loot acquisition, and as long as players are getting better loot, they are mastering the game.

Lessons

Progression is important. Really, that's it. Players like challenge but not because of the cost of failure. They want to succeed, and look back, and say "I overcame that." As long as players continue to progress, they'll have fun, be happy, and continue to pay.

Wednesday, October 14, 2009

Creating an Engine from Scratch

I've been coding since I was ten. That's nearly thirty years now. In that time I've worked on games ranging from simple Apple II fare to big-budget PC and console titles. I've also written a ton of applications and tools, on the PC and Mac and that old Apple II, too.

So I can create an engine from scratch if I want to. Do I want to?

Right now, the 2D RPG I'm working on is an XNA title. I'm not using an engine, tho I am using the XNA and .NET libraries. Coding up my own engine is not too bad, but there's still tons of little cruft that I have to code. I wrote a line-wrapping routine last night: the code that breaks up a string of text across multiple lines. It's perhaps the tenth time I've written such a routine.

It's the sort of thing that makes me wish I had used an engine, something that would give me those sorts of routines. I know how to write them and I've got old code laying about that I can crib from, but I still need to modify the code for the language (C#) and platform (XNA) du jour.
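Since I keep rewriting it, here's roughly the shape this latest version takes. This is a bare-bones sketch rather than my actual shipping code: it assumes an XNA SpriteFont for measurement, wraps on spaces only, and the method and parameter names are just my own.

// Needs System.Collections.Generic and Microsoft.Xna.Framework.Graphics.
// Greedy word-wrap: keep appending words until the line would be too wide.
public static List<string> WrapText(SpriteFont font, string text, float maxLineWidth)
{
    List<string> lines = new List<string>();
    string current = "";

    foreach (string word in text.Split(' '))
    {
        string candidate = (current.Length == 0) ? word : current + " " + word;
        if (current.Length > 0 && font.MeasureString(candidate).X > maxLineWidth)
        {
            // Too wide: commit the current line and start a new one with this word.
            lines.Add(current);
            current = word;
        }
        else
        {
            current = candidate;
        }
    }
    if (current.Length > 0)
        lines.Add(current);
    return lines;
}

A single word wider than maxLineWidth still gets its own too-long line, and explicit newlines in the text aren't handled; those details are where the other nine versions of this routine spent their lines.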

The downside to using an engine is learning it. I remember old Mac documentation, back in the pre-OS X era. It was awesome. It took a long time to read (compared to today's documentation), but it was fucking stellar. Not just explanations of parameter values, but meaningful sample code (including error-checking) for nearly every use that a particular function had.

Today, most of the documentation and comments I see are like:
int key; // the key
wtf? That comment is noise. It's a waste of space and the time it takes me to look at that and then realize it contributes NOTHING.

Engines are also often buggy; if you don't have source, you're just screwed. Wait six months for the next update and hope it fixes your problem. And being generic engines, they have a strange mix of too-much and too-little functionality. They don't do exactly what you want, but they can do a dozen things that are similar but still inappropriate for your specific needs. So it takes you days to figure out what it does, and then days trying to make it dance the way you need it to -- only to find out it doesn't do that. Then days more to figure out how to change your game spec so that your requirements can be met by the subset of functionality that the engine actually provides.

Using an engine is a great idea if it would take you too long to write it yourself, and if you can amortize the learning time across multiple projects (ie reusing the engine). Finding a great, easy-to-use, powerful, flexible, well-documented, and robust engine is a pain in the ass. Plus, generally impossible. There just aren't enough engines out there (and they're not profitable enough to develop) for them to be stellar products.

The big engines these days are cross-platform: PC, Xbox, PS3, and Wii. They can meet some of those important feature points, but (back when I was seriously looking at them, at game-industry day jobs) they weren't 100%. There are also some smaller engines that have a lot of promise, but they're very niche, and I don't have high hopes for their lifetimes.

Niche is good. If it does just what you need it to, a niche engine can be great. You don't waste time and money on features you won't use, and it's more likely that the engine does exactly what you need, does it well, and can document its small feature set well. Here, I'm thinking about engines for the iPhone, or 3D-shooter-specific engines, etc.

Which brings me back to that first point: if you lack the experience and talent to quickly develop an engine yourself, chances are, using a cumbersome, weak, inflexible, poorly-documented, and buggy engine will be your best choice. This is mostly cuz of the arcana associated with a new platform and the toolchain that goes with it. An engine isn't just line-wrapping code; it's a sound system, 3D rendering and scene graphs, advanced shaders, exporters for Max and Maya, input abstractions, UI widgets, plus tons more. Maybe even a scripting language and enough of a game shell that most of what you need to do is plug in some art assets, do a bit of scripting, and ship your product. If I was developing an XBLA or a PS3 or even a AAA PC title, I'd be using an engine.

There's a time balance between learning someone else's engine and writing the code you need yourself. The smaller the requirements you have for the engine, the more it makes sense to do it yourself. That's one reason why I'm writing my own engine. Plus, I get to reuse this engine for the next title I do. Plus, I'm not stuck with broken code. Plus, the engine does exactly what I need it to do.

It's still annoying to have to write line-wrapping code. Again.

Tuesday, September 22, 2009

Fun RPG Combat

see also: fun rpg combat part 2

I'm working on a retro, 2D RPG, so RPG mechanics are on my mind. I'll go into my plan and the game a bit more at some other time, but today I wanted to explore combat mechanics, and what makes for a fun RPG. But first, I want to talk about fun in general.

Fun

What makes for a fun game? What is fun? This is a big issue that game designers love talking about, but I'm not really sure why -- I think the issues are fairly straightforward. I'll lay out my thoughts and you can be the judge. :)

Let's look at boring first. Boring is when you've got nothing to do, nothing to think about. Mindless repetition is boring; the 'repetition' means you've figured out how to do a task and you're just mindlessly repeating it. Boring is mowing the grass, stuffing envelopes, fighting the same random creature encounter for the hundredth time. There's nothing about the task that's challenging. Maybe the first time, but now that you've figured it out you're just going through the motions. There's no mental or emotional commitment to the process.

Fun is the opposite of boring. But first let's ask: are there activities that aren't boring, but aren't fun either? Sure: activities that require some problem-solving and/or careful attention aren't boring, yet plenty of them still aren't fun. Fun implies a positive emotional response; busy-work activities don't have that. You don't have time to let your mind wander and get bored, but there's something missing. Busy-work activities include writing uninteresting code, building uninteresting 3D models, doing your taxes. Not boring (you're too engaged to get bored), but definitely not fun.

This is pushing us towards fun, obviously. We know what sort of activities aren't fun, and in describing them it's obvious what they're missing. Intellectual interest, or emotional drive. Curiosity, achievement, happiness, social connection, fear, horror, thrill, suspense.... Horror movies and games push the fear and horror buttons; adventures like Indiana Jones go for thrill; mysteries and dramas often push suspense and curiosity. These movies, and the games like them, are fun.

Interactivity

Games are different from other media in that they're interactive. Movies can't give viewers a sense of achievement, and (except for the camaraderie felt around the water cooler when you're talking about how much you love [insert favorite cult movie here]) can't give a sense of social connection either. Movies can pique curiosity, but they don't give you the tools to resolve it yourself. Games can try to hit the big emotional buttons that movies go after (like fear and thrill) or pique intellectual curiosity, but they can do more: they can provide challenges and reward achievement, let players build social bonds to achieve common goals, and let players explore play spaces and puzzles in their own time and way.

Games are also different because they're typically much longer than movies, and usually longer than books. RPGs, especially. There are short RPGs out there and very long books, but in general games provide far more hours of entertainment. This is a bit of a pickle, because games have to figure out how to be fun for longer than 90 minutes. It's hard enough to make a good movie; how do you make 15 hours of fun on a budget a tenth the size?

There are ways of filling time, of course. Grinding is a bad way. But what's the difference between grinding (boring!) and fun? Yeah, well, asking the question makes the answer obvious. We want games that have fun ways of filling time -- or at least, not boring ways.

Games often fluff themselves out with skill challenges. Some games are primarily skill challenges, like shooters and racing games. Others, such as platformers, focus on exploration or figuring out how to get somewhere or kill a boss mob, but also contain (possibly extensive) skill challenges. Starcraft is an RTS that's packed with skill challenges at the competitive multiplayer level.

I've mentioned exploration, too, and this is a great way to extend a game. Even in territory you've already covered, you might explore a different aspect of the world -- in Left 4 Dead, you check nooks and crannies for hiding zombies. Once you've played through the campaigns a few times you know the rooms and the architecture, but you don't know where the enemy is. Some games provide rich mechanics that the player is constantly exploring, such as RTS games where players learn how different units behave and interact.

Puzzle games and sims both fill gameplay time with puzzles. It's obvious with puzzle games, but I lump sims in here because, to me, most sims are long series of specific puzzles. Where do I put the next building, what troops should I train, where do I put my resource fields? I view Transport Tycoon, one of my favorite sims, as a series of four puzzles: where do I put the station? How should I build the line around these terrain features? What consist should I run between these two towns? And finally, how do I optimize traffic? The player is constantly shuttling between one puzzle and the next.

High Points and Engagement

I think there are two things that make a game fun: emotional high points, and near-constant engagement. Basically: add big, cool moments, and avoid breaking the player out of play.

If the game gets boring, tedious, or punishing, it can break suspension of disbelief or add enough of a punishment that the player disengages from concern for his on-screen avatar. Failure itself isn't necessarily a bad thing; some games are built around constant failure, such as roguelikes and shooters. Counter Strike doesn't suck even though you 'fail' (get killed) once every few minutes. That 'failure' frames the game and defines the challenge. The player isn't concerned about totally avoiding that failure as much as he's interested in maximizing the experience between those moments. (It helps that Counter Strike provides a social experience for dead players.)

Games without disengaging moments can keep a player at the controls, but an interesting game without emotional high points is equally unfulfilling. It can serve as a distraction but isn't at the same level of "fun" as a game that provides those high points.

High points are the peaks of emotional engagement. Gaining a level in an RPG is a single moment that collects all of the emotional buildup of previous play and hands it to the player at one, big, emotionally-charged moment. In level-based games (ie map levels, like Doom or Starcraft or Mario), finishing a level is that big moment. In adventures, there's often a big puzzle that's solved in each step of the game. Game designers know all about these emotional high points; they make the effort to provide rewards to players at them. Because of that emotional weight, these are also often the moments that players remember most.

Fun games keep players engaged and provide periodic emotional high points. Players enjoy play, and fondly recall the "peak experiences" of games past.

Recommendations

Those, then, are my two recommendations to game designers: provide engagement without boredom, and put more oomph into your game's high points.

In fun rpg combat part 2, I talk about applying this problem specifically to RPGs.

Saturday, September 19, 2009

Game Tools with WinForms

I'm working on my RPG fairly actively this week. I've got maybe another week on the engine, and about that on the content, so it's close to being ready. As I implement more bits in the engine, I'm going back and changing the editor, too, so I figured I'd comment a bit on that here.

The RPG is retro, 2D, turn-based tactical combat, single-player, and single-character. Old school. I'm trying to make it not suck, but I'm using simple technology. It's called BlackThrone and it's up on the web so check it out.

The Editor

The editor was my very first WinForms app. I've been coding UIs and tools forever, and C++ since college, and Windows since the OS/2 days -- but C# for only a few years, and barely any of it professionally until last year. So I was new to WinForms, and at the time was coming off of using C++ heavily for a couple years.

My point is, my god this app sucks. I didn't know about User Controls, so a half-dozen tabs, packed with dozens of controls each, are all in the main Form. The main form .cs file is HUGE. Blech. I didn't have any common methods of dealing with resources and graphics, so lots of the stuff was ad-hoc. It's interesting coming back to it now after having left it half-developed for a year.

There's a few things I wish I'd known and done then, and that's the point of this post.

UserControl

UserControl is your friend. It's basically a collection of other controls: checkboxes, lists, text entry fields, buttons, etc. It's great for taking a chunk of your UI (like the controls that would be on one tab of a control panel) and encapsulating them.

In my editor, each resource type is edited on a different tab of a TabControl. The main form contains a TabControl, and each of its tab pages holds a single user control. This means that there's very little code in the main form.

At least, there's less now. I'm slowly refactoring the application, pulling each of the tabs out of the main form and sticking them into user controls. This makes it much easier to ensure that I've got everything I need and haven't forgotten something; debugging is easier, too.
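To give you an idea of the shape: each tab's contents become one UserControl subclass, and the main form just drops it onto a TabPage. This is a stripped-down sketch, not my actual editor code; the class name and child controls are made up, and in the real thing the child controls come from the designer.

using System.Drawing;
using System.Windows.Forms;

// One editor tab, wrapped up in a UserControl.
public class ItemEditorControl : UserControl
{
    private ListBox _itemList = new ListBox();
    private TextBox _itemName = new TextBox();

    public ItemEditorControl()
    {
        _itemList.Dock = DockStyle.Left;
        _itemName.Location = new Point(160, 16);
        Controls.Add(_itemList);
        Controls.Add(_itemName);
    }
}

// ...and in the main form, each tab just hosts one of these:
// TabPage page = new TabPage("Items");
// ItemEditorControl editor = new ItemEditorControl();
// editor.Dock = DockStyle.Fill;
// page.Controls.Add(editor);
// _resourceTabs.TabPages.Add(page);   // _resourceTabs being the form's TabControl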

Document Model

The editor actually started as just a map editor; the extra resources got shoved in. What I should have done (earlier, like when I added an editor for the second resource type) is added a document-type class.

The editor works on a set of files. The "document" is really a directory; each different type of resource is stored in a different file. One file for the world map, one file for all the towns and cities, one for dungeons, one for conversations, one for items, etc.

The main reason to pull all of these into one document class is the interaction between resources. For example, cities can contain treasure chests which can contain items. Hence, the city editor wants access to the list of items so that it can present that to the user. I started out hacking into the main form (which is where everything was stored) to get the item list, but even while writing that code I knew that was a fragile, ugly way to do it. I'm slowly refactoring each resource type out of that main form into the Document class.
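The shape I'm refactoring toward looks something like this. It's a sketch, not the real class: MapData, CityData, and ItemData stand in for the editor's actual resource classes, and the load/save bodies are elided.

using System.Collections.Generic;

// One class owns every resource list, so the per-resource editor controls ask
// the document instead of digging through the main form.
public class EditorDocument
{
    public string Directory { get; private set; }

    public List<MapData> Maps { get; private set; }
    public List<CityData> Cities { get; private set; }
    public List<ItemData> Items { get; private set; }

    public EditorDocument(string directory)
    {
        Directory = directory;
        Maps = new List<MapData>();
        Cities = new List<CityData>();
        Items = new List<ItemData>();
    }

    // Each resource type still lives in its own file under the document directory.
    public void LoadAll()
    {
        // e.g. Maps = MapData.LoadFile(Path.Combine(Directory, "maps.dat")); and so on
    }

    public void SaveAll()
    {
        // mirror of LoadAll
    }
}

With that in place, the city editor's user control gets handed the EditorDocument and reads its Items list, instead of reaching into the main form.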

Small Parts

This is really a generic Agile practice. Classes shouldn't be big.

One of the common things I do is grab a tile (a 16x16 block of pixels) from my sprite sheet (which is a 256x256 image), create a Bitmap from that, and set it as the Image in a PictureBox. This gives "instant feedback" that makes it easier to see which object I'm dealing with. Bad coding practice is to copy and paste these few lines of code from here to there.

My refactor was to create a TileSheet class, and add a method to that to pull a Bitmap out -- and another method to draw a tile into the current Graphics object (eg in an OnPaint event). The TileSheet itself is small -- it's a small part.
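Roughly, the class looks like this. It's a sketch with my own names, using GDI+ since this is the WinForms editor rather than the XNA game.

using System.Drawing;

// Wraps the 256x256 sprite sheet and hands out 16x16 tiles by index.
// A small part: it knows about tile geometry and nothing else.
public class TileSheet
{
    private const int TileSize = 16;
    private readonly Bitmap _sheet;
    private readonly int _tilesPerRow;

    public TileSheet(Bitmap sheet)
    {
        _sheet = sheet;
        _tilesPerRow = sheet.Width / TileSize;   // 16 for a 256-wide sheet
    }

    private Rectangle TileRect(int tileIndex)
    {
        int x = (tileIndex % _tilesPerRow) * TileSize;
        int y = (tileIndex / _tilesPerRow) * TileSize;
        return new Rectangle(x, y, TileSize, TileSize);
    }

    // For the "instant feedback" case: pictureBox.Image = sheet.GetTileBitmap(i);
    public Bitmap GetTileBitmap(int tileIndex)
    {
        return _sheet.Clone(TileRect(tileIndex), _sheet.PixelFormat);
    }

    // For OnPaint handlers: draw a tile straight into the supplied Graphics.
    public void DrawTile(Graphics g, int tileIndex, int destX, int destY)
    {
        g.DrawImage(_sheet, new Rectangle(destX, destY, TileSize, TileSize),
                    TileRect(tileIndex), GraphicsUnit.Pixel);
    }
}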

Recommendations

If you're building an indie game, I really recommend using WinForms and C# to build data editors. If you're just starting out with WinForms, I recommend reading a book first -- having an idea of the things you can do makes it easier to choose the right thing to do. I started coding first, hacking together sample code from the net. The book I eventually bought, and one I really liked, is _Pro .Net 2.0 Windows Forms and Custom Controls in C#_ by MacDonald, on Apress.

And when you do start coding, think about putting together a document model. In my day job, having a good doc model is critical for good, clean architecture, and it's the same for my home projects.

Wednesday, September 16, 2009

Loading XML data from a config file using XmlDocument

Part 1: where to save application config data under Vista
Part 2: how to save data in an XML file using XmlDocument
Part 3: this part, how to load (parse) an XML file using XmlDocument

All code samples in C#, cuz it's my drug of choice.

In the previous two parts, I covered where to save data to, and provided some sample code for creating an XmlDocument that can then be written to disk. The idea here is loading and saving application config data. For games, stuff like the user's preferred screen resolution -- which is the specific purpose I had when I dug up this code.

So let me just get straight to the code. That's why you're here, right?
const string kIntId = "ints";
const string kStrId = "strs";
const string kConfigFile = "config.xml";

private void LoadData()
{
    string myAppFile = GetConfigPath() + "/" + kConfigFile;
    if (!File.Exists(myAppFile))
        return;

    try
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(myAppFile);
        XmlNode root = doc.DocumentElement;

        XmlNode intsNode = root.SelectSingleNode(kIntId);
        foreach (XmlNode child in intsNode.ChildNodes)
        {
            string key = child.LocalName;
            int value = Convert.ToInt32(child.Attributes["value"].Value);
            _intList[key] = value;
        }

        XmlNode strsNode = root.SelectSingleNode(kStrId);
        foreach (XmlNode child in strsNode.ChildNodes)
        {
            string key = child.LocalName;
            string value = child.Attributes["value"].Value;
            _stringList[key] = value;
        }
    }
    catch
    {
        // feh
    }
}
The try/catch is there cuz you should be worried about your users being dumbasses and manually editing your config files. And/or you being a dumbass and screwing it up. Cuz that's what I did. Plus, when I changed formats, some of this stopped working.

Note that I'm using a couple constants to specify the names of the groups that I'm looking for. I do that because of Once and Only Once: mostly to keep myself from mistyping data.

I covered a way to obtain the name of the directory in which to store application data back in part 1, but here's the relevant snippet:
private static string GetConfigPath()
{
    string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    string myAppData = appData + "/MyAppName";
    return myAppData;
}
This is for per-user app config data, as opposed to shared (common) config data -- both described back there in part 1.

How to save config data into an XML file

Part 1: where to save application config data
Part 2: this part, how to save data in an XML file (using XmlDocument)
Part 3: how to load (parse) an XML file

There are many different ways of working with XML files under .NET. You can hand-roll your own code, use XmlReader, use XmlDocument, use someone's third-party library, and who knows what else. I find using XmlDocument the most sensible approach -- why write my own code? I'll let someone else worry about that.

I'll just dump the code here:
public void SaveData()
{
    XmlDocument doc = CreateSaveDoc();
    string myAppPath = GetConfigPath();
    string myAppFile = myAppPath + "/" + kConfigFile;
    if (!File.Exists(myAppFile))
    {
        Directory.CreateDirectory(myAppPath);
    }
    doc.Save(myAppFile);
}

private XmlDocument CreateSaveDoc()
{
    XmlDocument doc = new XmlDocument();
    XmlElement root = doc.CreateElement("root");

    XmlElement ints = doc.CreateElement(kIntId);
    int idNum = 0;
    foreach (KeyValuePair<string, int> kvp in _intList)
    {
        string id = "int" + idNum.ToString();
        ++idNum;
        XmlElement node = doc.CreateElement(id);
        node.SetAttribute("key", kvp.Key);
        node.SetAttribute("value", kvp.Value.ToString());
        ints.AppendChild(node);
    }
    root.AppendChild(ints);

    XmlElement strs = doc.CreateElement(kStrId);
    int strNum = 0;
    foreach (KeyValuePair<string, string> kvp in _stringList)
    {
        string id = "str" + strNum.ToString();
        ++strNum;
        XmlElement node = doc.CreateElement(id);
        node.SetAttribute("key", kvp.Key);
        node.SetAttribute("value", kvp.Value);
        strs.AppendChild(node);
    }
    root.AppendChild(strs);

    doc.AppendChild(root);
    return doc;
}
I used attributes to store info, and ignored the tag names for the contained data. I think the whole wrap-every-single-value-in-its-own-pair-of-tags part of XML is poopy. Yes, I said it, poopy. Attributes work better for me, cuz then you get XML that looks like:

    <int0 key="ScreenWidth" value="1024" />

This could be shortened to:

    <ScreenWidth value="1024" />

but, well, I got it working and I stopped caring. If you implement this yourself, feel free to take that extra step.

Hmm, I say that now... ok, I went back and changed my code. This complicates loading a bit, because I now care about tags, but you'll see that in the next section. For completeness, those inner loops are now:
{
    string id = kvp.Key;
    XmlElement node = doc.CreateElement(id);
    node.SetAttribute("value", kvp.Value);
    strs.AppendChild(node);
}
Much cleaner!

Saving application config data under Vista

Part 1: this part, where to save application config data
Part 2: how to save config data in an XML file using XmlDocument
Part 3: how to load (parse) XML-based config data using XmlDocument

I was a bit frustrated at finding this info on the net. It required a bunch of searches to pull it all together. I finally got what I needed, but, you know, bitch/moan/whine and all that. So here's everything in one place!

I'm assuming you're coding in C# (or at least can read it), and using .NET. And, like, Windows? Yeah.

So, part 1 of a 3-part series: where do I put my config data?

In the olden days (ie, under XP), you could just create a "config.ini" file in the current directory, ie with no path info, and it would end up in the same folder as your application's exe. Under Vista and User Account Control (UAC), applications by default do not have write access to the Program Files folder. Plus, that folder might not be on the C: drive, and it might not be called "Program Files". So where do you save stuff now?

The correct location is in the AppData folder for the current user -- or, if you don't want to store user-specific data, in the common AppData folder. You can get these paths using the following code:
string userAppData = Environment.GetFolderPath(
    Environment.SpecialFolder.ApplicationData);
string commonAppData = Environment.GetFolderPath(
    Environment.SpecialFolder.CommonApplicationData);
You can also hunt for the environment variable %AppData% if you want. The above is .NET friendly, so it's what I used.

I encapsulated the above into a function, which I use a couple places in my config save/load code:
private static string GetConfigPath()
{
    string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    string myAppData = appData + "/MyAppName";
    return myAppData;
}
Loading it requires the following:
string myAppFile = GetConfigPath() + "/" + kConfigFile;
if (!File.Exists(myAppFile))
    return;

XmlDocument doc = new XmlDocument();
doc.Load(myAppFile);
// insert parsing here, see part 3
and saving it requires just a bit more work:
public void SaveData()
{
    XmlDocument doc = CreateSaveDoc();
    string myAppPath = GetConfigPath();
    string myAppFile = myAppPath + "/" + kConfigFile;
    if (!File.Exists(myAppFile))
    {
        Directory.CreateDirectory(myAppPath);
    }
    doc.Save(myAppFile);
}
In part 2, I cover storing data in an XmlDocument, and in part 3 I cover parsing data back out of an XmlDocument and into some local format.

(Apologies for the poor code formatting. Me and Blogger don't get along.)

Monday, September 14, 2009

DRM, Piracy, and Indie Games

"OMG! teh pirates stoled my gaem!!!1!"

DRM tends to suck. Whether it's the inability to transfer Kindle books (or the fact they cost the same) or the registration hassles with PC software, I think users have a dim view of it. I'm with you there. That's not my point.

On the producer side, complaints about piracy are rampant. "We lost $3 gajillion in sales last quarter due to pirates!" I hate those kinds of complaints. There might be a billion copies of your product out there, and if one does the math that comes out to $3 gajillion, but there's no way those people would have spent that much money on your product if they hadn't found a way to pirate it.

But there is an opportunity cost involved. If pirates couldn't get their warez for free, they'd wind up buying some of those products. They'd spend less on beer, pizza, and auto parts. Minors would spend less on ... what do kids do with spare cash these days? iPods and iTunes? Big Macs and funny t-shirts?

I think piracy has killed off single-player PC games. What remains is the big boys -- The Sims, perhaps. Games that require online registration get hacked, but I think the low price point of Steam titles reduces it. Diablo, although often played single-player, is still played online; that's how you get to compare your epeen to the next guy's.

What's gone are middle-market titles. Everyone and their brother bought Doom 3, but who buys those other shooters? Not the mass-market; just the diehards. The guys more likely to have a bittorrent running in the background 24/7.

That means that successful titles are online (with all the expense that brings), or so popular that they reach out beyond those comfortable with bittorrents, which again means expensive. Niche titles fight for a living and are pirated like crazy. Consoles are grossly expensive to develop for, too.

Are games getting less innovative? Yes. The market sucks. A game designer, a programmer, and an artist can't save up some spare cash and develop a game in their spare time, throw it out on the PC, and profit from it. There's not enough of a market willing to pay for something they could steal instead. Buyers save their money for the console and online titles that they have to pay for. Have a cool idea? You need to convince someone to give you $15M to develop it. Otherwise, you're SOL. Or, just building it in your spare time.

Pros aren't often hobbyists on the side. Once you've done a few years of the crazy game dev rat race, 100-hour weeks and all, you think: screw that noise. I'm working 9-5 and then going home to spend time with my family and friends.

My point is that if it were possible to make money as an indie, more people would do it.

And it is possible, just ... grossly constrained. Next time, I explore revenue possibilities.

Sunday, August 30, 2009

The Rule of Seven

Andrew Doull, on his blog Ascii Dreams, suggests that game designers should follow a moral code -- mostly by not creating punishing or boring game mechanics. One of his rules he calls "The Rule of Seven":
A player should be at most presented with seven options at any one time.
The reason for the number seven here is a reference to the magic number seven, plus or minus two, described in a 1956 paper by psychologist George Miller. The core of the concept is that humans can remember about seven different things at a time; that we can distinguish between seven different qualities or quantities before our capacity to comprehend is compromised.

Miller's Argument

Miller notes that we can get around this: we can count way past the number seven itself, as well as use thousands of words written with 26 different letters, because experience and tricks (such as using arabic numerals in a base-10 system) let us expand the range of qualities that we can express and remember. The point is really that, when faced with a new, unfamiliar group of items, we are at first stuck dividing them into at most seven categories. Never studied trees before? Then you'd probably be able to identify or describe seven types. Never studied breeds of dogs, or types of land animals, or crops, or minerals, or types of architecture, or music? Our natural ability lets us stick them into seven groups. Maybe only five, maybe sometimes nine, depending on the person's intelligence and experience. Until we start studying the subject, of course -- and then we learn all sorts of attributes that let us learn about more types.

And so when you design a game that has new creature types, or treasure types, or places to go -- your players will only be able to distinguish between about seven of them.

Until they learn your game. And there's the rub: how long are they going to play your game? For a short game, one that you finish in ten to fifteen hours, your players are unlikely to learn a lot about your game world, or want to spend time and effort building an efficient mental model to distinguish between all the types of Frobozzes and Gromixes that you've invented, or even to remember if Mithril or Truesilver or Adamantite is the better armor -- even if they've seen those names before.

Roguelike games start with the player in town. They've got ten buildings to choose from, plus stairs to go down and villagers to talk to. The player doesn't group this into "fourteen things" -- that's exactly the kind of chunking Miller's paper was about. The player will see this as three choices: enter a random building, talk to a random person, or head down the stairs. Because there are ten buildings, you've broken the rule of seven: it will take some time for the player to learn what those ten buildings are. If there were only seven, they'd do it quickly. The time difference between learning three buildings, five buildings, or seven buildings is tiny; trying to distinguish between ten takes exponentially longer.

Game Design Ethics

So how does this impact the game designer? It'll stress your players, and possibly frustrate them, if you constantly tax their memory. Running out of time and having to pick the right one of those ten shops? Users will fail because they couldn't remember correctly. If you had given them seven choices, players would be a lot less frustrated.

Andrew Doull's basic point about clicklets is that forcing the user to do something boring, repetitive, and mindless, without choice or consequence, or in a taxing way is cruel. And the main reason to avoid cruelty is to make a fun game; something that players want to come back to.

Some games frustrate me needlessly. It really turns me off of the game. If I sit down to play a game and am then bombarded with dumbass, frustrating rules and mindless clicking to get what I want, then I feel like a product I purchased for the purpose of entertainment has lied to me, and is subjecting me to pain and frustration. I find that unethical.

An analogy for a moment: Is it unethical to kill someone if you didn't know that what you were doing would cause their death? Out in the real world, we call that manslaughter, and it might be involuntary, but it'll still get you convicted and thrown in jail.

A game designer that builds a frustrating system is still guilty of frustrating his users. It doesn't matter if you knew ahead of time or not. Not knowing is negligence; it indicates a lack of forethought, of consideration. It's inconsiderate.

The idea of the ethics of game design is this: a game designer shouldn't build systems that frustrate, bore, or needlessly confuse their users. (I don't mean all confusion; some jokes and puzzles rely centrally on confusion. Work with me, here.) A designer shouldn't build such systems whether they know they'll have that effect or not; a designer is responsible for building a good product, and for learning more about his art so that he avoids such sinful mechanics.

Thursday, July 23, 2009

Efficient Ellipse Drawing - Part 2

[images coming later]

In Part 1, I discussed drawing lines. Drawing an ellipse, pixel-by-pixel, has a lot in common with line drawing, so I covered that there.

In this part, I discuss drawing a circle.

In the next part, I'll discuss complications and provide a more complete circle-drawing algorithm. The final part(s) will cover ellipses.

Symmetry

Circles are handy because they are symmetric. Since we're rendering a circle onto a square grid, the symmetry that's useful to us here is horizontal symmetry and vertical symmetry. We'll also make use of the fact that you can rotate a circle 90º and still have a circle. Combine all the symmetries together, and we really only need to draw one-eighth of the circle.
[insert image here]

Hence, most circle-drawing algorithms will only draw an eighth of the circle, and use this symmetry to plot the other 7/8ths. Which eighth you draw is up to you. I'll be drawing the eighth from straight up (like 12 o'clock) clockwise to 45º (1:30 on an analog clock).
void PlotEightPoints(int x, int y)
{
    PlotPixel(x,y);
    PlotPixel(y,x);
    PlotPixel(-x,y);
    PlotPixel(-y,x);
    PlotPixel(x,-y);
    PlotPixel(y,-x);
    PlotPixel(-x,-y);
    PlotPixel(-y,-x);
}
This code can be easily tweaked to draw points centered anywhere in the screen; just add a constant offset to x and y in each case. Something like:
void PlotEightPoints(int x, int y, int xCenter, int yCenter)
{
    PlotPixel(x+xCenter,y+yCenter);
    etc...
or, if this code is part of a class:
void PlotEightPoints(int x, int y)
{
    PlotPixel(x+this.xCenter,y+this.yCenter);
    etc...
Borders

A perfect circle on a square grid can either be centered on a pixel, or centered on the border between two pixels.
[insert image here]

The first case is easy to handle. Say our circle is centered at the pixel 0,0, and has a radius of 10. We can conclude that the pixels (10,0), (0,10), (-10,0), and (0,-10) are all points that we want to draw. There are no fractions here, and the logic is fairly simple.

The second case -- when our circle is centered on the border between two pixels -- is a bit more complex, but much of the logic (below) is the same. I'll cover this case in the next post.

Tricks

The trick to circle drawing is to note that, over this eighth of the circle that we are going to draw, we're only going to draw one pixel per column. Furthermore, as we move from pixel to pixel, we'll either move straight to the right (dy=0), or diagonally down one pixel (dy=-1). Hence: we just need to figure out which of those two choices is closer to our circle!
[insert image here]

The formula for a circle (centered at 0,0) is
x² + y² = r²
where 'r' is our radius.

Let's say we plot our first point at (0,10). Should our next point be (1,9) or (1,10)? We can calculate the radius at those two points easily:
r = sqrt(x² + y²)
Here's another trick: we don't really need to do the square root. We can just pick the point such that (x²+y²) is closest to r², or 100 in our sample case. For (1,9) that value is (1*1 + 9*9), or 82, and for (1,10) the value is (1*1+10*10), or 101. Our goal is 100, so obviously 101 is closer.
[insert image here]

And now for the code:
void DrawCircle( int r )
{
    int x = 0;
    int y = r;
    while (y >= x)
    {
        PlotEightPoints(x,y);
        x++; // always move over one column
        int rAcross = x*x + y*y;
        int rDown = x*x + (y-1) * (y-1);
        int acrossDelta = r*r - rAcross;
        int downDelta = r*r - rDown;
        int absoluteAcrossDelta = Math.Abs(acrossDelta);
        int absoluteDownDelta = Math.Abs(downDelta);
        if (absoluteDownDelta < absoluteAcrossDelta)
            y--; // sometimes move down one row
    }
}
This will draw your circle. In the next post in this series, I'll cover drawing a circle that isn't centered on a pixel, provide a code snippet for drawing a circle anywhere on screen, and handle cases where part of our circle is off-screen.

Friday, June 19, 2009

Cargo Cult Engineering

Process-oriented development achieves its effectiveness through skillful planning, use of carefully defined processes, efficient use of available time, and skillful application of software engineering best practices. - Steve McConnell
I'll come back to that quote eventually, but today's post is on cargo cults.

In my experience, engineering teams succeed because there's one or two engineers on the project that are smart, hard-working self-starters, but most importantly follow sound software engineering principles and are capable of stepping back and getting the big picture.

Smart, Hard-Working, Self Starters

There's ways of assessing this stuff. Personally, I think gradations of these attributes are mostly worthless. Programmers of average intelligence won't be able to tackle huge problems, or get a lot of features done, but having a smart but "unwise" (using that word as a catchall for what I'll describe in the sections below) engineer is worse than having an average-intellect but wise engineer.

Hard-working is good. But I think it's easy to assess. And if they don't work hard once they're in place, you need to fire them. If you can't fire them, because, say, you're in France, then that sucks. Your next job will be to get them to quit. Try transferring them to the Siberian Office.

Self Starters are handy, but again I don't think it's at the top of the list. A semi-smart, hard-working, self-starting programmer that insists on overengineering everything, following a fancy & convoluted development methodology, and is unable to assess the importance (ie context) of the parts that he is working on will build lots of great code -- that won't help your product ship or keep customers happy.

What's your goal? Are you capable of assessing context? As a manager, you want programmers that help your bottom line. That means quality code, but it also means code that you need, and code that makes your clients happy. Happy clients are more important than pretty code.

The Big Picture

Whether it's figuring out how one method fits into a class, one class into a module, one module into a project, one control into a web form, a folder or set of files into a hierarchy, one product feature into the next release, themselves into the company, their company into the industry, the product into the market, etc etc -- good engineers are capable of taking a step back and assessing context.

Bad engineers do cargo cult programming. They see the artifacts of good engineers, but they don't understand the principles behind it.

Sound Engineering Principles

Lists are popular. "7 Habits of Highly Effective People," "Top 10 Ways to Ship Better Software," the lists of core rules in methodologies, and the very frequent "Five Ways to Fit into Your Swim Suit for Summer" type stuff.

Lists are easy to make. Just observe for a little while. Pretty much anyone can make lists.

But lists aren't principles. Principles are difficult to apply. They're easy to state, but the whole trick with principles is that they must take context into consideration. Principles must also exist in a hierarchy; for each principle, there must be an antecedent principle that sets boundaries. The antecedent says why a principle is important, gives a guideline for the boundaries of the principle (where it makes sense and where it doesn't), and establishes a benchmark by which to judge the execution of a principle.

Take choosing good names for local variables. This isn't just one floating bullet point out of hundreds of practices that make for good software engineering. The list-maker will take this point and stick it into his six-page bulleted list of "Best Practices."

As a principle, one chooses good names because it aids in human parsing of code. Let's chase the antecedent principles here. Human parsing of code is important because it makes maintenance (extension and debugging) easier. Maintenance happens -- so why is it important to make it easier? Because it increases the quality of software and reduces the cost of development. Why are those things important? Why are they important on this product? The answers for quality and cost vary from project to project, and I could answer them in the abstract, but how you answer this question is what settles the boundaries of the principle.

Cargo Cults

Cargo cults mimicked the habits that they saw American military men executing. They thought that the motions themselves were what caused the airplanes to land, and the cargo to show up on the beach. Likewise, the actions that McConnell outlines in that quote above -- skillful planning, use of carefully defined processes, efficient use of available time, and skillful application of software engineering best practices -- are habits. These are good habits, but I don't think they capture the important traits at all.

Specifically, "use of carefully defined processes" implies that browsing to some Six Sigma website and then handing down the printouts to your engineering team is sufficient for project success. That's just mimicking the habits of successful developers; it's not good engineering.

Thursday, June 18, 2009

Builder Pattern vs Factory Pattern

Versus

The builder pattern is appropriate when object creation is more complex than just calling a constructor. The factory pattern is appropriate when you have a hierarchy of created objects and you want to abstract the mapping of creation parameters to a subclass.

These patterns are often used together. Many abstract factories that I've written use builder functions. Sometimes I'll put the builder function into a base class -- which means that I have a builder function that is actually an abstract factory, which might itself use builder functions.

Now for more detail:

Builder

A Builder encapsulates complex creation into a single method (or class). If creating an object is more complex than just calling the constructor, then all of the work that goes into creating the object can be moved into one method, and that method is the Builder. 'Builder' implies only one type of created object, but that is not necessarily so. Builder really just means encapsulating complex construction!

Say that you want a Widget object, and that creating one means making a DB query or loading something from disk, constructing the object (passing in the query results), then making a few more calls to set up the object before it can be used.

Instead of copying and pasting that creation code -- query, construction, setup -- every time you need to create a Widget, you move all of that crap into a single function. The Widget constructor probably takes a whole bunch of parameters; maybe those come from the database query. Maybe the Widget uses multi-phase construction. The Builder pattern helps hide all that.

If the setup and configuration isn't really part of the created class, ie if it doesn't make sense for that class to know about all the other crap that needs to be done, the builder function might go somewhere else. I think 9 times out of 10 my builder functions are static methods in the created class itself. Instead of calling the constructor I call the builder function, and probably make the constructor private.
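In C# that usually ends up looking something like this. It's a sketch: Widget, WidgetRecord, and WidgetDatabase are stand-ins, not real types from anywhere.

// Builder-as-static-method: the constructor is private, so the only way to get
// a Widget is through Create(), which hides the query and the multi-phase setup.
public class Widget
{
    private readonly int _id;
    private readonly string _name;

    private Widget(int id, string name)
    {
        // plain construction only; no I/O in here
        _id = id;
        _name = name;
    }

    public static Widget Create(int widgetId)
    {
        // 1. gather whatever the constructor needs (DB query, file load, ...)
        WidgetRecord record = WidgetDatabase.Load(widgetId);

        // 2. construct
        Widget widget = new Widget(record.Id, record.Name);

        // 3. the extra setup calls that otherwise get copy-pasted at every call site
        widget.AttachResources();
        widget.Validate();
        return widget;
    }

    private void AttachResources() { /* ... */ }
    private void Validate() { /* ... */ }
}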

I usually use builder functions, not builder classes. The separate functions in a builder class might each return the same object just with different configurations, or each function might return different subclasses.

In languages where multi-phase construction is the norm (instantiate an object then make a bunch of function calls to fill it out), refactoring the construction steps into its own method is an instance of the Builder pattern!

Factory

A factory can create several different types of objects, but it returns its objects via an interface (or base class) reference. Whereas a Builder encapsulates complex construction steps, a Factory encapsulates the decision-making that figures out which specific subclass to instantiate.

Factories are accessed through a single method; that's really the point. You call one function, and it creates either a Subclass1 or a Subclass2, returning it via IBaseClass.

The Gang of Four book (Design Patterns) names both a Factory Method pattern and an Abstract Factory pattern.

In the Factory Method pattern, the factory function is virtual, and different subclasses of the creating class return different subclasses of the created class. That is, you call one class (the factory) to instantiate the second class (the created). The factory class is actually a tree - base class and subclasses. You'll have something like:
virtual ICreated* IBaseFactory::Create(...params...)
So you have two class trees: the factories and the created objects. The two trees might be parallel, ie CFordFactory::CreateSedan returns an instance of CTaurus, and CNissanFactory returns CMaxima, etc etc. Or, the two trees might be disjoint: CFryingPan and CMicrowave return an instance of CFood, while CBlender returns an instance of CDrink (where both presumably derive from IConsumable).

You'll most likely use the Factory Method pattern when you have one hierarchy (the created objects), you're about to instantiate a whole bunch of objects, and you don't want to do a switch or if/else/else trees. So you move the creation into a new class tree, instantiate the factory subclass you want, and then use a method to create your objects.

Alternately, you might have a class that creates a bunch of objects, but that class is part of a hierarchy. Depending on which specific subclass you have, you'd get different created objects. That's the Factory Method pattern.
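Here's a bare-bones C# version of the parallel-trees case, using the car example from above (the class bodies are empty stubs, obviously):

// Factory Method: the factory class is itself a tree, and each factory subclass
// decides which concrete created class to hand back through the base interface.
public interface ISedan { }
public class Taurus : ISedan { }
public class Maxima : ISedan { }

public abstract class CarFactory
{
    public abstract ISedan CreateSedan();
}

public class FordFactory : CarFactory
{
    public override ISedan CreateSedan() { return new Taurus(); }
}

public class NissanFactory : CarFactory
{
    public override ISedan CreateSedan() { return new Maxima(); }
}

// Callers only ever see CarFactory and ISedan:
// CarFactory factory = new FordFactory();
// ISedan car = factory.CreateSedan();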

In the Abstract Factory pattern, you've actually got a set of factory methods. A food factory method, one for drinks, one for dishware, etc. More realistically, you might have a factory method for buttons, one for checkboxes, etc, where different factory subclasses create different appearances. In a game, you might have a barracks that will create an infantry, cavalry, and ranged unit, with different factories for each player race. Instead of disjoint classes for each type of unit, one class (IBarracks) will have three methods to create IInfantry, ICavalry, and IRanged units.

Besides subclassing, your abstract factory could be run off of some other logic. The factory function could do some magic to figure out which subclass to create. It could switch off of a parameter:
ICreated MyAbstractFactory.Create( Enum paramEnum )
or it could use static data or other state to decide:
ICreated MyAbstractFactory.Create()
Abstract Factory is a funky pattern. To use it, you'll want to be creating a matched set of objects. If you find yourself wanting to do that, Abstract Factory is your pattern.
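Sticking with the barracks example, a sketch looks like this. The IBarracks/IInfantry/ICavalry/IRanged names come from the paragraph above; the Human units are placeholder classes I made up for illustration.

// Abstract Factory: one factory interface with several creation methods, and
// each concrete factory produces a matched set of units for one race.
public interface IInfantry { }
public interface ICavalry { }
public interface IRanged { }

public interface IBarracks
{
    IInfantry CreateInfantry();
    ICavalry CreateCavalry();
    IRanged CreateRanged();
}

public class HumanBarracks : IBarracks
{
    public IInfantry CreateInfantry() { return new Footman(); }
    public ICavalry CreateCavalry() { return new Knight(); }
    public IRanged CreateRanged() { return new Archer(); }
}

// Placeholder unit classes; an OrcBarracks would return its own matched set.
public class Footman : IInfantry { }
public class Knight : ICavalry { }
public class Archer : IRanged { }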

see also
Bridge Pattern vs Strategy Pattern
Ownership, Aggregation, and Composition

Wednesday, June 17, 2009

Designing an MMO part 1

Designing a New MMO, Part I: Get Rid of Classes!

Everyone wants to make an MMO. They're fun games, and with WoW pulling in over a billion dollars a year it looks like an insanely lucrative market.

Except, of course, for all those failures. Like Tabula Rasa, which cost a hundred million to make and only brought in a sixth of that in revenue. Unless you've got 84 million dollars you don't mind never seeing again, jumping in should be done with care.

I like thinking about MMO design. I think it's like talking politics. It's not like me and the rest of the crew at the water cooler are going to run for office. Ultimately, the only effect each of us has is one vote -- out of millions. Does it really matter what I think about politics? At most, I'm influencing a dozen people. And I haven't yet converted any of them to the One True and Proper Political Party, so what does it matter?

It doesn't matter. It's fun, though. Likewise, us scrubs talk MMO design. It's an entertaining exercise.

Usually the first topic to come up is something like "classes are lame" or "levels are boring". I think this is a fairly fundamental discussion.

But I'll skip it, because Lum did a much better job than me. Go read that.

(There's no guarantee that I'll ever write a part 2.)

Thursday, June 11, 2009

Reusing End-Game Content

I read a post on end-game content over at Player Vs Developer and was reminded of a suggestion I made to Blizzard long ago.

There are basically three problems that I'll address in this post: players want new content, they want a variety of content, and developers don't want to throw away old effort or see great content go unused.

Quake and CTF

One issue I had with Warsong Gulch (a 10-vs-10 Capture the Flag PvP zone) was that the map got boring. This is one issue with FPS games -- many players like having different maps.

Back when I played Quake, there were a handful of maps that everyone played on, and it was interesting to continue playing on the same map, over and over. I was actually learning new things about the maps after a year of play; specifically, I was learning player behavior. Learning where things are on the map is the first step; then one develops patterns; then one learns what patterns the enemy has; and then a metagame starts where players start trying to deceive their foe about what pattern they are running, etc etc. In Quake, I was learning very specific timing patterns, and how to juke out other players and make them think I was somewhere else. I was counting on my opponent knowing the map so well that I could play against that knowledge.

This is like tennis, or basketball, etc. Everyone plays tennis on the same court; don't they get bored of the same layout game after game? The answer is obviously no; the game isn't about the court, the game is about the other player.

Not so with online games like Quake or Warcraft because most players (especially new or casual players) don't want to become PvP pros. And many other players resent having to learn a map well, and instead just want to win without putting in effort. Don't underestimate your players' arrogance. Many players that suck don't believe it; they blame their losses on bad map balance, or the fact that their opponent knows the map 'too well', or some other lame excuse. Players suck. People suck.

When I played Quake on the LAN at work, I would learn the maps quickly (I'm good like that), or I'd remember the map from online play. I'd grab the rocket launcher and red armor and then start tearing people up -- in part because I also played a lot and was a good player. They'd get frustrated or bored, because to them the interesting bit was the new map, not the game mechanics themselves. They wanted a slot machine, where sometimes they won. They wanted to win; they didn't want to earn the win.

My point is that a great majority of people that will pay for your product want variety, not challenge. Don't force them to play competitive tennis; they want wacky new rules and a roll of the dice.

So am I now just bitching about WSG because I want to see new maps? Not exactly. Quake was played on a handful of maps; WSG only takes place on one map. Every time you want to play Capture-the-Flag (CTF) in WoW, you have to go to WSG. The other PvP maps have different gameplay -- Arathi Basin and Eye of the Storm both have a Battlefield/Team Fortress-like base capture mechanic; Alterac Valley is a back-and-forth push to the enemy's base.

My PvP Suggestion

My suggestion to Blizzard was to make more CTF maps, then change the queue mechanism to be somewhat like Arena, so that when someone queued for "WSG", they'd really be queueing for CTF, and sometimes they'd play in Warsong Gulch, sometimes in Netherstorm Gulch, sometimes in Grizzly Gulch, etc.

The major problem with just adding those as separate queues is that it's hard to find players. Now, even with queues spread across an entire battlegroup, sometimes it's hard to find people to play in Alterac Valley. Imagine if there were three times as many PvP queues -- some of those games would never get started! Hence: group several WSG-type maps into one queue. You get more players funnelling into the same queue, and players get to experience a wider variety in online maps. (This is why most Counter-Strike and Team Fortress servers rotate through maps!)

Players Want More Content.

This issue with finding players is also a problem (now that a new expansion has come out) for old end-game content. Who wants to run Scholomance or Karazhan? Those instances are lame! There's level 80 content to do! As much as players want variety, they don't want to do irrelevant content.

Scholomance is old. It takes too long to do all those quests. Once you hit level 61, the content starts becoming trivial, and the rewards for the grind too small. The problem is the same for level 70 instances -- it was hard then to find a group that wanted to do Mechanar, or Arcatraz, or Botanica. There were too many places to go for there to be many people that want to do one specific instance.

One way to fix this is to rebalance Scholomance so that level 80s can do it. They did that with Naxx; it's a fun challenge for 80s and the rewards are appropriate. Yet if they did this to every 60 and 70 instance, it'd be a pain to find a group to do anything. It'd be the problem with Mechanar but far bigger. Especially with the way itemization works -- one person wants to get his hat from here, the next guy needs a pair of pants that drops off a boss there, and once they got their drops they'd never want to do the instance again. It'd be nearly impossible to find someone to do any one specific instance, just because there'd be so few people that want anything that drops from there!

One way to solve that is the token system that was used at the end of the level-60 lifecycle and became fairly widespread in the Burning Crusade world: kill a boss, get a token that can be redeemed by a handful of classes for a number of different armor slots.

Now imagine if you needed Keepers of Time rep for some level 80 gear that you could only get from the Keepers, but you could earn that rep from any of a half-dozen old instances (rebalanced for level 80), so that it didn't matter which one you did. Now you could say "I want to do one of the Caverns of Time", and anyone that wanted Keeper rep could come along. It used to be that people wanted (say) Durnholde specifically because that's where their item dropped. What if their item dropped from all of those instances instead of just one boss in just one instance?

Now everyone could do Caverns of Time again. The developers could re-use end-game content, and players would have a wider variety of options for where to go. The developers could add in one or two new CoT instances, and maybe redo one of the old ones, and everyone (new and old players alike, ancient characters and brand-new alts) would have a much wider variety of content to choose from. A group of five players could choose the instance they enjoy rather than the instance that itemization forces them to pick. Players would be far more likely to get to play a new instance, instead of feeling forced to do the same instance over again.
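As a sketch of the idea -- the instance names, rep amounts, and token are invented for illustration, not Blizzard's actual data -- the reward table would be keyed to the faction and token rather than to a specific instance:

    using System;
    using System.Collections.Generic;

    // Hypothetical reward table: rep and tokens hang off the faction,
    // not off any one instance, so any Caverns of Time run counts.
    class CotRewards
    {
        private static readonly HashSet<string> CavernsOfTime = new HashSet<string>
        {
            "Old Hillsbrad", "Black Morass", "Culling of Stratholme"
        };

        // Same reward regardless of which instance the group picked.
        public (string faction, int rep, string token) RewardFor(string instance)
        {
            if (!CavernsOfTime.Contains(instance))
                throw new ArgumentException("not a Caverns of Time instance");
            return ("Keepers of Time", 250, "Timeworn Badge");   // token name invented
        }
    }

With rewards keyed like this, "which instance?" stops mattering for loot, which is the whole point.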

The downside to this, of course, is that maybe players are bored of the Caverns -- especially those that were playing before BC came out and have been playing since. I have a hard time believing that Bliz couldn't just redo each of those levels. Seriously. They're making billions of dollars a year on the game. And they could reset KoT rep to Revered, or maybe add something past Exalted, or add a new faction that automatically starts at Revered 0/21000 if they were Exalted before, etc etc, so players would have a reason to go back.

Players want new content. Players want varied content. Developers don't want to develop content, and then effectively throw it away because no-one is doing it any more.

The easiest thing to fix, really, is throwing away content. They removed Old Naxx from the game. They could remove Old Scholomance and who would know or care? Spending the money to develop New Scholomance would be trivial for them; it would be new content even to old players, and (with sufficient itemization, eg through tokens) it would give players a broader set of dungeons to explore, instead of hitting Kara week after week after week after week after zzzz....

Thursday, April 30, 2009

Complexity in Game Design

My Travian game is coming to a close, ie nearing its one-year mark. I've been poking around at other browser games to assess the competition, thinking about switching, and it reminded me of one of the lessons of game design that I picked up long ago.

I remember playing Warcraft 2 and thinking, "you know, this game would be even better if it had even more upgrades and building types and everything."

A few years later, while playing Kohan, I realized that I was very wrong. WC2 was so awesome because it wasn't any more complex. Kohan (another real-time strategy game) had a different, but relatively straightforward, combat model. It didn't add more building types, or more troop types, or more upgrades -- it just configured armies differently than WC2.

Complexity is great for World of Warcraft because people play that game for thousands of hours. Yet at the lower levels, the game starts out very simple. Starcraft, currently enjoying years of professional play in Korea, isn't any more complicated than Warcraft 2. Chess is much simpler than both.

What makes a game fun is the interplay of choices. With a ton of choices, sometimes randomness sets in and dominates play. "Is unit A better than unit Z732? What about Z731, or Q986? Gah there's so many, forget it! Just build unit A!?" It's difficult to figure out a good strategy (or to be happy with the strategy you chose) when combinations start spiraling up.

Warcraft 2 and Kohan and Starcraft all found a balance with a small number of troops and buildings. Even then, they gradually added all their options in over the course of the game. They don't throw new players into the deep end (the full game); they work up to it over 30 hours or more.

It's like getting decent at chess and thinking, "ok, now that I've learned how all these pieces move, what I need now is more pieces! A larger board!" What makes chess interesting isn't those new pieces; instead, the game changes. The focus shifts to strategy and positional play, thinking ahead and mind games, learning the books and the endgames.

Part of Travian's appeal is its simplicity. If the game got too much more complex -- twice the number of buildings, more complex combat, etc -- then it would be a much harder game to get into. Part of its appeal is its chess-like simplicity. Even in that simplicity there is a lot of interplay, since so much of the game works on an exponential curve.

Good designs are simple designs. Let the fun be in the interplay of a handful of archetypes, not in the mindless proliferation of abilities and powers and resources and buildings and technologies...

Monday, April 27, 2009

Tips on Hiring Agile Programmers

We're looking to hire another couple of programmers here, and while I was talking about it with the coding crew, we had some thoughts and a rant or two, so now I'm here.

First, What is Agile Programming?

It's not a buzzword. Agile programming is a methodology, which is just a $10 way of saying it's a set of methods. What brings those methods together is that they make programmers and the code they write more agile. As in flexible. Bendable. Responsive, dextrous, nimble. Agile programmers should be able to adapt the code they write to changing requirements. That's really the whole point. There's a subthread in discussions about "what is agile?" that basically says you won't understand a problem until you try to solve it, and so changing requirements are a natural outcome of exploring the solution domain until you understand it -- but that's not really my point here. We can argue that later.

Using interfaces and patterns is nice, but that's not agile programming. Interfaces and objects are just a part of object-oriented programming, and patterns appear in any programming language (although most well-known patterns are OO patterns).

One part of being agile is avoiding tight coupling. If there's a one-to-one relationship between two class hierarchies, then any time you add a class to one branch, you have to make a similar change to the other branch. This ties those two trees together; they're now tightly bound. One agile approach would be to use smaller bits, like methods (or delegates, in C#) instead. Or to embed the behavior of the second class in the first. Or to get rid of whatever it is that requires you to have two trees with all of the same class types in them.
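Here's a tiny C# sketch of that coupling and one way to loosen it -- the shape/renderer names are purely illustrative:

    using System;

    // Tightly coupled: every new Shape forces a matching Renderer subclass.
    abstract class Shape { }
    class Circle : Shape { }
    class Square : Shape { }

    abstract class ShapeRenderer { public abstract void Render(Shape s); }
    class CircleRenderer : ShapeRenderer { public override void Render(Shape s) { /* draw a circle */ } }
    class SquareRenderer : ShapeRenderer { public override void Render(Shape s) { /* draw a square */ } }

    // One looser alternative: drop the second hierarchy and pass the drawing
    // behavior in as a delegate, so adding a shape no longer means adding
    // a parallel renderer class.
    class DrawableShape
    {
        private readonly Action<DrawableShape> _render;
        public DrawableShape(Action<DrawableShape> render) { _render = render; }
        public void Render() => _render(this);
    }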

Being agile means being responsive to change. Writing all your code through interfaces is nice in theory, but whenever you need a client to pull more information out of a subclass of that interface, you're stuck with a problem -- throw in a using clause, or what? Is the interface providing something specific? If there's a 1:1 mapping between interfaces and implementations, ie one interface class for every implementation class, then you haven't done jack. There's already a way to hide implementation from a client, and it's the fucking private keyword, you moron. If the client is going to break through the interface anyway, then get rid of it, it's not actually hiding anything. You should only write code once; there should only be one class exposed to your client. (A 'using' clause etc winds up exposing two classes.) This is the principle of Once And Only Once. If you've got an interface, there better be a reason for it other than "my teacher told me to." If you're doing something and you don't know why, then you better need to do it to get something to work. If you can skip a step that someone told you was required, and your code works just fine without that step, then you've got smaller, more nimble code. That is what Agile means. (And that your guru is full of it.)
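A small, hypothetical C# example of that 1:1-interface ceremony versus just letting the class hide its own internals:

    // Ceremony: a 1:1 interface that hides nothing the class couldn't hide itself.
    interface IAccount
    {
        decimal Balance { get; }
        void Deposit(decimal amount);
    }

    class Account : IAccount
    {
        public decimal Balance { get; private set; }
        public void Deposit(decimal amount) => Balance += amount;
    }

    // If Account is the only implementation, and clients end up reaching for the
    // concrete class anyway, the interface is dead weight. The class alone
    // already hides its internals:
    class LeanAccount
    {
        private decimal _balance;                  // 'private' is the encapsulation
        public decimal Balance => _balance;
        public void Deposit(decimal amount) => _balance += amount;
    }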

Which gets me to the rant: some programmers do things just because they're supposed to. Like adding interfaces for everything, even if there's no hierarchy there. Or using patterns everywhere. I write patterns all the time, but I don't obsess over it. It just happens. If you have to scan through Gang of Four to figure out what pattern to use, then you're not yet a jedi. That's OK, but it's also not the best way to program. Understanding patterns is better than throwing patterns at a project. That's like throwing bailout money everywhere.

Likewise, you don't need a factory for every object. The constructor works just fine! Just call the constructor! I've seen this problem in programmers that have misunderstood the factory pattern. A builder is a class (or method) that assists with complex construction code; a factory is a class (or method) that can build one of several different objects, and returns them through a common base class (which might be an interface). Again, if a factory only builds one type of object, then why do you have the factory? If a class only has one constructor and construction is simple, then why do you have a builder? Both add to code bloat and complexity, and thereby inhibit the ability of future programmers to add new features or fix bugs. Or even understand what the hell you were doing. And here again is the benefit of agile programming: if your code is small and nimble, you can change it more easily.
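To illustrate with a hypothetical C# example: a single-product factory adds nothing over the constructor, while a factory that picks among several implementations behind one base class actually earns its keep:

    // Pointless: a factory that only ever builds one concrete type.
    class Report { /* ... */ }
    class ReportFactory
    {
        public Report Create() => new Report();    // 'new Report()' already does this
    }

    // A factory earns its keep when it picks among several types behind one base:
    abstract class Export { public abstract void Write(string path); }
    class PdfExport : Export { public override void Write(string path) { /* ... */ } }
    class CsvExport : Export { public override void Write(string path) { /* ... */ } }

    static class ExportFactory
    {
        // Callers only ever see Export; the concrete choice lives in one place.
        public static Export Create(string format) =>
            format == "pdf" ? (Export)new PdfExport() : new CsvExport();
    }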

Wait, I thought you were talking about hiring programmers?

Many academic programmers heed rules that they don't understand. You want the guys that have figured things out for themselves. There are a lot of clues here for telling the first group from the second.

In general, the best way to hire programmers is to get them to do the job they're about to do. Give them a written test before the interview, or stand them in front of a whiteboard and ask them to pop out a design.

Not just a function; a design. The stuff that matters, for agile, is design -- not algorithms. (Algorithms are important, but ultimately agile isn't about algorithms. Test algorithm knowledge, sure, but that's not why you're reading this.) Good programmers have a sense not only of algorithms but also data structures. Good OO programmers can think in module-sized units (as well as class-sized units, method-sized units, or statement-sized units). Ask your candidates to express some designs. If you're interviewing a senior candidate, then he should understand framework-sized units. Ask him to sketch a framework for handling a large, complex data set and a wide variety of operations, probably something related to your personal problem domain. You don't really want a correct answer so much as strong thinking. (Don't judge a candidate by how closely they parrot your personal favorite design, or the one that your office has chosen. That's not what you're looking for here.)

For juniors, start with the simple stuff: persistence and streaming, three-tiered architecture stuff, de novo object creation, parsing. And ask them to get specific; where are the interfaces? What patterns do you use here?

And the money question: why?

The thing to look for is not their answer so much as how they answer. Is the candidate trying to think up a good reason for their answer, or are they just struggling to translate their understanding into words? The faster you can get a candidate talking, the less rationalization goes on. It's ok if they're stumbling over their words or gesturing a lot with their hands, or just drawing circles on a whiteboard and using too many pronouns -- this suggests that they're thinking in objects, not trying to reconstruct some quiz question a prof gave them once.

Object-oriented designs are inherently visual creations. This is why whiteboards are a must in interviews, and why it's very difficult to assess a programmer over the phone.

Getting a candidate to explain what agile means is less important than hiring a candidate that inherently does agile things. And the way to test that is not to get him to talk, but to get him to do.

Friday, April 10, 2009

Noobs and Information

Many games have large noob populations, and they suck. Dealing with noobs is a pain. They don't know what they're doing, they don't know what to ask, and they're asking all the wrong people in the wrong places.

I think the same thing happens in many domains, not just in online games.

The problem is that the noobs have no information. Games with strong documentation and community features reduce their noob load; games where information is spread out among third-party sources and where game mechanics are not explained by the developer have much higher noob loads.

Warcraft has extensive online documentation, but they still have a high noob load. 'Preventing' noobs requires addressing the needs of the noobs, not the needs of the marketing department. Keeping players interested, getting them to come back, giving them something to look forward to -- these are all great. But it's not what noobs need.

Travian has crappy documentation. There's a lot of different info sites, but many of them are paltry. They focus on a few of the major concepts in the game, and although often broad and deep, are broad and deep in the wrong places.

Single-player games tend not to have noobs, because they have to explain themselves to new players or else those players just stop playing. I've played a number of 'innovative' single-player console and PC games that just didn't make sense. Although these games hint at depth and complexity and something fun, they hide it. And if I can't find it, the game gets returned and I don't recommend it. And I expect they get bad reviews, too.

What noobs need is direction. They need to know how the game scores them. They want to know the roles that they are expected to take. They want to know the consequences of their actions, before they take them. They want a place to go for this information, plus a forum lively enough that they can ask obscure questions.

Direction

Single-player games tend to have scores or mission directives that communicate to players what it is that they're trying to accomplish. Warcraft is fairly open, yet there are a few major goals: get to the level cap (80), accumulate gear, and work through the hardest dungeons. A more subtle need is guidance on what to look out for along the way -- which goals are actually worth the effort. Many mid-level players worry about gear, spending hours and days getting just the right piece. Then they out-level it a few days later.

Travian takes pride in its opaqueness, supposedly because it gives players 'freedom' to choose their own path. Yet there are a few paths that are extremely useful to the over-arching goal of winning the game. Winning the game is something that's done by an alliance, not by a solo player, or one player that happens to be in an alliance with some friends. It requires the cooperation of dozens, if not hundreds, of players. There are a few strong roles that players can take, as well as some rules for how to be most effective. Yet the publishers don't communicate any of that; they leave it up to players to discover it all on their own.

The discovery process can be fun, but there are two important concepts that limit where a game designer should put discovery: consequence (see below) and direction. If a player first has to figure out where he's going, then he's not discovering the game world, or developing strategy; he's figuring out what the user manual would look like, if the user manual had something more constructive than a simple list of all the units in the game and their cost.

Roles

Role is related to Direction. Whereas Direction shows the player what they have to do to win (what obstacles they have to overcome), Role is the set of tools that the player has available to do it.

A Priest in World of Warcraft knows about the spells that he can cast, but a good role is a bit more than "cast these three spells over and over." In a raid, a healer can stay focused on one target, heal several targets, look over a whole bunch of people and top them off -- or stick to some of the utility spells that they have.

Over a career levelling a priest, that player might go Holy (and heal in groups), Shadow (and focus on damage and mana generation), or Discipline (for... PvP?). Ignoring the accuracy of those descriptions, these are ways of telling new players: if you choose this class, you will have these roles that you fit into.

In Travian, roles could be Defender, Hammer, or Feeder; one might work solo or in a group. There's an infinite variety of combinations, obviously -- yet there are no general guidelines. Travian noobs wonder what they should be focused on. They try to do all things, without picking a role. They want to be offensive, but don't know how that plays out over a year. It's very frustrating to spend a lot of time on a game building a character (as in WoW) or a bunch of villages (as in Travian), only to find out that you made fundamental mistakes early on, and that your current effectiveness is gimped because of it.

Giving new players guidance on the Role that they'll play can help players get started on the road to contributing during a game, rather than observing. Links to discussions will help them understand how other players feel about that role, letting new players find their sweet spot that much faster.

Score

Score defines direction. Score tells players how they win. It gives them feedback, and it's through feedback that players learn to play better. I don't like mashing buttons; winning for random reasons is not an achievement. Score tells me if I mashed the right buttons, so that I can see patterns in the game, start developing a strategy, start discovering the game world, meeting new people, and then killing them.

Open games sometimes have visible 'score' charts that measure inconsequential things. Statistics can be fun to browse; some people like that. I often do. But if you give everybody a useless metric that's easy to manipulate, many will shoot to maximize that metric, even to the detriment of their play experience. If you put up a scoreboard that only applies to some players, or is completely irrelevant to the rest, you misdirect your players.

Travian shows village population for all the players. This is the major score rank in the game, since it's something that everyone has and it's relatively easy to see. Yet it, ultimately, isn't a strong measure of performance. But that's a problem with team games; how do you measure 'performance' when so much of the contribution one makes is building social networks, establishing trust, etc?

For vague, open games like Travian, maybe the best way to communicate 'score' to players is to give them an overview of previous rounds. Show them the target, and how they measure progress, and then what last round's measuring stick looked like. They might choose a different metric, but at least with this kind of guidance they can make an informed assessment of how well they are proceeding.

Consequences

This was one big problem in many MMORPGs: players were 'free' to destroy their characters, spending hundreds of hours building a character that was sub-par. I remember putting points into Charisma in Dark Age of Camelot. As a Cleric. It did nothing for my character; they were wasted points. The 'freedom' to distribute points as I saw fit wasn't backed with enough information to make a good choice (unless I had already played the game through to the end-game, which didn't even exist when the game first shipped). Further, my choice was hard-locked; it could never be changed. This was a combination of asking players to make choices without sufficient information and then penalizing them, for the rest of their online career, for the wrong choices.