We're looking to hire another couple of programmers here. While I was talking it over with the coding crew, we had some thoughts and a rant or two, so now I'm here.
First: what is agile programming?
It's not a buzzword. Agile programming is a methodology, which is just a $10 way of saying it's a set of methods. What brings those methods together is that they make programmers and the code they write more agile. As in flexible. Bendable. Responsive, dexterous, nimble. Agile programmers should be able to adapt the code they write to changing requirements. That's really the whole point. There's a subthread in discussions about "what is agile?" that basically says you won't understand a problem until you try to solve it, so changing requirements are a natural outcome of exploring the solution domain until you understand it -- but that's not really my point here. We can argue that later.
Using interfaces and patterns is nice, but that's not agile programming. Interfaces and objects are just a part of object-oriented programming, and patterns appear in any programming language (although most well-known patterns are OO patterns).
One part of being agile is avoiding tight coupling. If there's a one-to-one relationship between two class hierarchies, then any time you add a class to one branch, you have to make a similar change to the other branch. This ties those two trees together; they're now tightly bound. One agile approach would be to use smaller bits, like methods (or delegates, in C#), instead. Or to embed the behavior of the second class in the first. Or to get rid of whatever it is that requires you to have two trees with all of the same class types in them.
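Here's a minimal sketch of that "smaller bits" idea -- in Java rather than C#, with a lambda standing in for the delegate. All the names (Shape, serializer) are invented for illustration:

```java
import java.util.function.Function;

// Invented example: instead of a Shape hierarchy mirrored by a parallel
// ShapeSerializer hierarchy (Circle/CircleSerializer, Square/SquareSerializer...),
// the varying behavior is just a function the class carries.
class Shape {
    private final String name;
    private final Function<Shape, String> serializer; // the "delegate"

    Shape(String name, Function<Shape, String> serializer) {
        this.name = name;
        this.serializer = serializer;
    }

    String name() { return name; }
    String serialize() { return serializer.apply(this); }
}

class Demo {
    public static void main(String[] args) {
        // Adding a new kind of shape no longer forces a new class in a second tree.
        Shape circle = new Shape("circle", s -> "<" + s.name() + "/>");
        Shape square = new Shape("square", s -> "[" + s.name() + "]");
        System.out.println(circle.serialize()); // <circle/>
        System.out.println(square.serialize()); // [square]
    }
}
```

The point isn't the lambda syntax; it's that there's now only one tree to maintain.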
Being agile means being responsive to change. Writing all your code through interfaces is nice in theory, but whenever you need a client to pull more information out of a subclass of that interface, you're stuck with a problem -- throw in a using clause, or what? Is the interface providing something specific? If there's a 1:1 mapping between interfaces and implementations -- i.e., one interface for every implementation class -- then you haven't done jack. There's already a way to hide implementation from a client, and it's the fucking private keyword, you moron. If the client is going to break through the interface anyway, then get rid of it; it's not actually hiding anything. You should only write code once; there should only be one class exposed to your client. (A 'using' clause, etc., winds up exposing two classes.) This is the principle of Once And Only Once. If you've got an interface, there had better be a reason for it other than "my teacher told me to." If you're doing something and you don't know why, then you'd better need it to get something to work. If you can skip a step that someone told you was required, and your code works just fine without that step, then you've got smaller, more nimble code. That is what agile means. (And that your guru is full of it.)
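To put the 1:1 complaint in code (a made-up Java sketch, not anybody's real API): the commented-out interface below buys nothing, because `private` already hides the implementation.

```java
// The ceremonial version exposes a second type to clients for no gain:
//   interface IThermostat { void set(double c); double readFahrenheit(); }
//   class ThermostatImpl implements IThermostat { ... }
// The Once And Only Once version is a single class; the storage and the
// conversion helper are hidden by `private`, not by an interface.
class Thermostat {
    private double celsius; // implementation detail, invisible to clients

    private double toFahrenheit() { // also invisible to clients
        return celsius * 9.0 / 5.0 + 32.0;
    }

    void set(double c) { celsius = c; }
    double readFahrenheit() { return toFahrenheit(); }
}

class Demo {
    public static void main(String[] args) {
        Thermostat t = new Thermostat(); // clients see exactly one type
        t.set(100.0);
        System.out.println(t.readFahrenheit()); // 212.0
    }
}
```

An interface earns its place once there's a second real implementation, or a genuine reason to hide which one the client gets.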
Which gets me to the rant: some programmers do things just because they're supposed to. Like adding interfaces for everything, even if there's no hierarchy there. Or using patterns everywhere. I write patterns all the time, but I don't obsess over it. It just happens. If you have to scan through the Gang of Four book to figure out what pattern to use, then you're not a Jedi yet. That's OK, but it's also not the best way to program. Understanding patterns is better than throwing patterns at a project. That's like throwing bailout money everywhere.
Likewise, you don't need a factory for every object. The constructor works just fine! Just call the constructor! I've seen this problem in programmers who have misunderstood the factory pattern. A builder is a class (or method) that assists with complex construction code; a factory is a class (or method) that can build one of several different objects, and returns them through a common base class (which might be an interface). Again, if a factory only builds one type of object, then why do you have the factory? If a class only has one constructor and construction is simple, then why do you have a builder? Both add to code bloat and complexity, and thereby inhibit the ability of future programmers to add new features or fix bugs. Or even understand what the hell you were doing. And here again is the benefit of agile programming: if your code is small and nimble, you can change it more easily.
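A quick illustrative sketch (Java, invented names) of when a factory actually earns its keep -- it picks one of several concrete types and hands it back through the common base:

```java
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

class ShapeFactory {
    // The factory's job: turn some external description into one of several
    // concrete types, returned through the common base class.
    static Shape fromSpec(String spec, double size) {
        switch (spec) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown shape: " + spec);
        }
    }
}

class Demo {
    public static void main(String[] args) {
        Shape s = ShapeFactory.fromSpec("square", 3.0);
        System.out.println(s.area()); // 9.0
    }
}
```

If `fromSpec` could only ever return `Circle`, the honest move is to delete `ShapeFactory` and write `new Circle(size)`.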
Wait, I thought you were talking about hiring programmers?
Many academic programmers heed rules that they don't understand. You want the guys who have figured things out for themselves. There are a lot of clues here for telling the first group from the second.
In general, the best way to hire programmers is to get them to do the job they're about to do. Give them a written test before the interview, or stand them in front of a whiteboard and ask them to pop out a design.
Not just a function; a design. The stuff that matters, for agile, is design -- not algorithms. (Algorithms are important, but ultimately agile isn't about algorithms. Test algorithm knowledge, sure, but that's not why you're reading this.) Good programmers have a sense not only of algorithms but also data structures. Good OO programmers can think in module-sized units (as well as class-sized units, method-sized units, or statement-sized units). Ask your candidates to express some designs. If you're interviewing a senior candidate, then he should understand framework-sized units. Ask him to sketch a framework for handling a large, complex data set and a wide variety of operations, probably something related to your personal problem domain. You don't really want a correct answer so much as strong thinking. (Don't judge a candidate by how closely they parrot your personal favorite design, or the one that your office has chosen. That's not what you're looking for here.)
For juniors, start with the simple stuff: persistence and streaming, three-tiered architecture stuff, de novo object creation, parsing. And ask them to get specific: where are the interfaces? What patterns do you use here?
And the money question: why?
The thing to look for is not their answer so much as how they answer. Is the candidate trying to think up a good reason for their answer, or are they just struggling to translate their understanding into words? The faster you can get a candidate to talk, the less rationalization goes on. It's OK if they're stumbling over their words or gesturing a lot with their hands, or just drawing circles on a whiteboard and using too many pronouns -- this suggests that they're thinking of objects, not trying to reconstruct some quiz question a prof gave them once.
Object-oriented designs are inherently visual creations. This is why whiteboards are a must in interviews, and why it's very difficult to assess a programmer over the phone.
Getting a candidate to explain what agile means is less important than hiring a candidate that inherently does agile things. And the way to test that is not to get him to talk, but to get him to do.
Monday, April 27, 2009
Tuesday, August 12, 2008
Mental Economy - Communicating with Programmers Part I
Good programming style is more a matter of communicating well with other programmers than capturing an algorithm elegantly. Sometimes that other programmer is you, in six hours or six years. Being a good programmer starts with good problem-solving skills and a broad knowledge of both algorithms and APIs. Good employees are disciplined and self-starting. Beyond all that, on almost any scale of project, what matters is writing code that can be easily understood.
There are a few aspects of inter-programmer communication that I want to cover. The first is mental economy -- how much data your brain can work with at one point. Part two is about jargon, and in the third part of this series I'll cover grammar.
I was working on ellipse-drawing code the other day and, while mired deep in the math, realized that I didn't have a lot of bandwidth to deal with other bits of the code. I've talked to (and, unfortunately, worked with) a number of programmers that think they're great if they can solve really complex problems and fix deep bugs in spaghetti code. I think it's worse having a manager that thinks the mark of a good programmer is fixing deep bugs in spaghetti code.
The problem with spaghetti code is that you spend all your mental powers unraveling the spaghetti, rather than solving the problem. Part of the problem with the ellipse code I was working with was that the paper I was reading from had bugs. So, not only did I have to read and understand what the code was doing, I had to reverse-engineer the algorithm they were using. They had comments like "it's faster to do it this way", without explaining why, or even what the 'stupid' way was.
I liken the brain to a CPU. An x86 CPU, to be specific -- going by its architectural definition rather than its actual implementation, which is even worse. Details aside, it only has about 7 (plus or minus two) registers to play with. Having to chase down the meaning of a new or unknown or poorly-explained piece of data means saving some state off to long-term storage (e.g., a piece of paper), exploring a bit, and then reconstructing where you were before.
That's the point of variable names, subroutines, and classes. It's not to elegantly capture behavior; "elegantly capturing behavior" is itself a means to an end -- the end of minimizing the mental subroutines you'll subject your brain to when you try to parse code.
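As a tiny made-up illustration of that register budget: both methods below compute the same thing, but the named version spends your mental registers one at a time instead of all at once.

```java
class Demo {
    // One dense expression: the reader has to hold every subterm at once.
    static double dense(double x0, double y0, double x1, double y1) {
        return Math.sqrt((x0 + x1) / 2 * ((x0 + x1) / 2) + (y0 + y1) / 2 * ((y0 + y1) / 2));
    }

    // Named steps: each line holds one idea, then frees the slot for the next.
    static double named(double x0, double y0, double x1, double y1) {
        double midX = (x0 + x1) / 2;
        double midY = (y0 + y1) / 2;
        return Math.sqrt(midX * midX + midY * midY); // midpoint's distance from the origin
    }

    public static void main(String[] args) {
        System.out.println(dense(0, 0, 2, 4) == named(0, 0, 2, 4)); // true
    }
}
```

Same behavior, same performance after the optimizer is done with it -- but only one of them leaves you brainpower for the actual bug you came to fix.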
You can either be born brilliant, or trained. About half of the great programmers I know have learned to be brilliant. They're rational, methodical, skeptical, and mentally disciplined. Well-organized code is easier to read, and code that's easier to read is easier to extend, modify, and debug. They "work smarter, not harder." They don't have to waste brain power untangling spaghetti, or mentally keeping track of dozens of variables and routines and equations.
If you haven't read that George Miller paper (the "plus or minus two" link above), I highly recommend it.
Thursday, July 24, 2008
Silver Bullets
As Fred Brooks says, there are no Silver Bullets.
The basic message is that there is no language, tool, organization structure, or practice that will magically solve your problems and let you ship software on time. People who have read his 1987 essay come away from it vowing never to use a silver bullet -- but I think the result is often that they just use a different phrase to refer to their silver bullet.
If you've got a practice in your organization that is essential, the sort of practice that anyone using your given language and tools would be a fool not to use, then that's your silver bullet. Do you have seven different pointer wrappers that everyone must use? That's your silver bullet. Do you require programmers to write an interface for every class that they implement? Are naming conventions the sine qua non in your office? Are you lax on a number of XP practices but absolutely adamant about unit tests?
Just because you don't call it a silver bullet doesn't mean it isn't.
Complexity is a hard problem, and ignoring some problems can produce an order-of-magnitude decrease in productivity. There's a difference between avoiding willful ignorance and requiring some critical practice. I've often run into people who had a bad time "at their last job" or "on the last project," and are committed to avoiding that problem at all costs.
This is dropping context, though. The solution may remove the problem (but see below) -- yet was the problem destiny, or was it the result of an organizational shortfall? Maybe their last project had a lot of dangling pointers and memory leaks; using pointer wrappers won't make that problem go away. It doesn't even make it harder to cause. Adding complexity to a project makes working on it more difficult, painstaking, and error-prone. Although they're trying to remove what they see as a flaw in the language (in this case, unmanaged memory), the solution doesn't change the language. Programmers can ignore, misuse, or work around pointer wrappers. Plus they've got to throw out their old understanding of the language and use a "new and improved" flavor of it. All of their experience, previously very valuable, now works against them. It's simple to follow someone else's explanation of their solution, but understanding it and grokking it well enough to use it yourself is not trivial or fast. Forget using it adeptly!
Training programmers to understand memory allocation and giving them a model (such as 'ownership') to follow goes a long way to reducing memory misuse, makes them better programmers, is something they can use for the rest of their career, is a topic well-documented in books, magazines, blogs, and seminars, and requires no new tools or code.
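The ownership model here is about C++ memory, but the discipline transfers to any resource. A hedged sketch of the idea in Java terms (the `Buffer` class is invented, not the author's code): exactly one owner per resource, and only the owner releases it.

```java
// Invented resource with a single clear owner. The rule being taught:
// whoever constructs the resource owns it, and only the owner closes it.
class Buffer implements AutoCloseable {
    private boolean open = true;

    boolean isOpen() { return open; }

    void write(String s) {
        if (!open) throw new IllegalStateException("use after close");
    }

    @Override
    public void close() { open = false; }
}

class Demo {
    // A borrower may use the buffer but must not close it; ownership
    // stays with the caller. That's a convention, not a language feature.
    static void borrow(Buffer b) { b.write("hello"); }

    public static void main(String[] args) {
        Buffer stale;
        try (Buffer b = new Buffer()) { // try-with-resources marks the ownership scope
            borrow(b);  // lending, not transferring
            stale = b;  // an escaping reference, like a stale pointer
        }
        System.out.println(stale.isOpen()); // false: the owner already released it
    }
}
```

No wrapper library required -- just a model that every programmer can carry between jobs.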
Saturday, June 21, 2008
Creeping Featuritis
This post was going to be on state management, but it turned into a discussion on featuritis.
My previous post on state management covered high-level game state. While rummaging around the web this week I ran across a few sites that offered finite state machines and whatnot. That's generally what people mean when they talk about state management, so I thought it'd make a good topic.
When designing a new class, building a tool to help build a game, or refactoring, one of your first questions should be, "where do I want flexibility?" Making code flexible in ways that you don't (and won't) use means it'll take longer to get the code written and working, it'll be harder to use, and it's that much longer until you actually get something working.
So before I get into the FSM discussion, I thought I'd discuss flexibility for a bit.
One of the programmer's pitfalls is gold-plating. Creeping featuritis. Whatever you call it, the tendency to make the bestest evar lies somewhere between pride and sloth on the road to hell. I believe strongly in YAGNI (see Wikipedia too): You Aren't Gonna Need It. YAGNI is a way of life, as all good principles are. It's not something you think about once in a while; it should guide a lot of your coding decisions.
For example, I'm working on editors for a game right now. My #1 goal is to get the game shipped. To have a decent, working game with a working combat and advancement system that I can use as experience when developing my next game. Whenever I think of adding some new feature to the engine, or making some system more complex and ever more awesome, I stop myself, and then figure out if that feature belongs in version 2, or 3, or 7. I write down my choice, and then go back to getting the current game working.
In fact, working on the editors is kinda backwards. The main reason I didn't start with a simple game engine is because I'm planning on using XNA, and I know way less about XNA than I do about WinForms. I figured I'd read up on XNA while building the editors, and by the time the editors were done, I'd be ready to jump into XNA coding. I like top-down coding: get something simple working, and progressively add more features. Some programmers think you should start at the bottom, making awesome libraries, but the problem there is that you never really know what you actually need.
I've learned a lot about WinForms while building the RPG4 editor (where 'RPG4' is the working title of the game). If I'd read a few more books first, chances are I'd still be farting around, trying to learn the framework better. A bunch of stuff I did early on was just stupid. At least, looking back, the design is bad. But I know that in part because I learned what I needed to learn as I went. Whenever I found myself adding yet another function to the main Form.cs file, I knew there had to be a better approach. Of course I knew why it was a bad idea; I knew that years ago. A decade ago. But here I had very explicit evidence of how much extra baggage you can add to a form before it becomes unwieldy.
There's a difference between too many methods in a class and too many features in a project.
I think it's a mistake to make one game (or tool or even a day-job work project) and keep adding more and more features. There's a difference between adding icing and adding cruft, and it's really a question of scale. I think it's better to err on the side of caution, of too little rather than too much. With too much, you've wasted time and energy. If you have too few features, you can always add more later.
But too many features in one class (instead of refactoring) is a different beast. I'm planning on a sequel. This is kinda like Fred Brooks' advice to "build one to throw away." I'm not worried if there's some cruft in my current project, for two reasons: (1) design purity isn't a sign of my worth as a human being, and (2) I'm learning here! I don't need to refactor mercilessly, because in games you code it, ship it, then burn the source code. Being able to reuse systems next time is great, but being able to build a bigger, better, and faster widget next time is worth a lot, too. Especially if you're going to be at a different company...
Technology changes quickly. Every year there are new video cards out. New pixel shader standards. New consoles to consider. Phones and mobile platforms get more complex every year. It's very difficult to change a complex system to handle data in a fundamentally different way. Trying to add 3D to a 2D game engine is... blech. Besides being difficult, the solution is gonna be clunky at best.
If you build clean, tight systems, you can build them quickly. They do what you want them to do, they're easy to work with, easy to change, and when you need to throw them away, you're not throwing away a lot of time and effort. I remember deciding to rebuild a 3D engine at one company, and the artists were shitting their pants. They'd seen engineers before, and what usually happened was the engineers would build, and build, and build, and then two years had passed and the thing still wasn't working.
That's what you get when you build from the bottom up. You spend time adding features you don't need, you add classes that wind up never getting used, and the whole thing is difficult to work with, too, because it's so huge. And then that hugeness starts producing bugs that are difficult to find. And then you work all weekend digging through code to find the bug, and you get it fixed, and everybody thinks you're a genius -- but two years have passed, the funding has run out, and the project gets cancelled.
A genius doesn't build a house of cards and then impress people by replacing one of the cards in the middle of the house. That takes a lot of talent, but the house of cards isn't what the company wanted. They just want to ship a game! Any fool can make a complex system. The smart programmers are the ones that know that a simple system with all the features that you actually need does more for the project.
Saturday, June 7, 2008
Schedules
Gantt charts suck.
They're great for work that's like manufacturing, where you have a clear assembly-line perspective on how the work is supposed to be done. Gantt charts are useful for resolving dependencies between different parts and finding out what the gates on development are.
But programming isn't manufacturing. It's not engineering, either. I'm a big proponent of software engineering, but that body of knowledge is more like disciplined craft than engineering. An engineer can plan out what needs to be developed, what each piece needs, and how long each piece will take to build. He can make you detailed drawings of how the piece looks. He'll give you the formulas for all the chemical reactions that go on in your plant. He'll tell you how strong that girder needs to be to hold the weight and stress that will be put on it.
But programmers can't do that. How long will it take to write the AI? Hah! The designers don't even know what the AI will do yet, or how complex the collision maps are, why would a programmer have any idea how long an AI will take?
I've written many different particle systems. I could clone one of my old systems fairly quickly, and also give you an accurate estimate of how long it would take to do. But no system exists in a vacuum. A particle system has to integrate with the underlying graphics engine. You might want to add some constraints to the system to prevent it from bogging down the CPU. And then there's always new features to add halfway through development.
If you ask me to build the same system that I built last year, then we're getting close to engineering. If I built that same system a half-dozen more times, it'd be the sort of streamlined, organized project worthy of being called engineering. But hah! And hah again! The technology moves too fast for that to happen.
New systems are research projects. If we could properly schedule research projects, then we'd have a cure for cancer already. We woulda had one decades ago. But research isn't like that, and most systems in games are new systems, similar to last year's model but with a new twist. There's no good way to figure out how that twist will affect the project until it's built, though.
And that brings us back to agile development. Detailed up-front scheduling is a fallacy. If it weren't for programmers being exempt employees (or employers dodging the law and refusing to pay overtime)... well, actually, I think that's the sort of mess that would force a lot of people to reassess how they schedule programming.
An iterative model means frequent releases. Agile development means prioritizing features. You might not know when everything you want will be done, but at least you'll be sure the important bits were done by the time you need to ship.
I'm working on a little sprite editor this weekend. I need both a subset and superset of the tools that most paint programs have, so I'm just building my own. (I wrote my first paint program for the Atari 800 around twenty years ago.) At one point, I was tempted to write down all the tasks that I needed to complete before I considered the editor done, then decided to throw the list away. I don't really need to know when the tool will be done. I know what's important, and I'm doing the big, obviously important stuff first. Things like frickin drawing. Saving, loading, color picking.
I'm not trying to build the uberest components ever, before the app runs. I'm not amassing giant graphics libraries, full of all the latest algorithms and hippest widgets. Screw that. If I need that stuff, I can add it when I need it. Right now, I need a way to use an Eyedropper without having to take my hand off the mouse, or reach across the keyboard (usually meaning looking at the keyboard), with the chance for error and time wasted. That feature? That's in already! Although minimally functional, it's already useful.
And hence another problem with schedules: you might have the most accurate schedule known to man, but if your programmers are busy adding features that no one will ever use, then you've wasted their time. Your project is less likely to succeed because you were more worried about being 'done' on time than you were about what you had working.
The stuff that slows down programming, for me, is almost always completely outside the scope of programming. Things like 3rd-party libraries being incompatible (from things like XNA not building under Visual Studio 2008, to creating MS Exchange mailboxes on Vista requiring not a simple code library like in XP but a completely separate scripting system). That shit set me back days, on tasks that were, otherwise, only a few hours long. On my current game, the conversation editor took me over a week to complete, when nearly every single other editor took on the order of 3 hours. Why? The design kept changing. (My designer is a fucktard. But all the chicks say he's sexy so I'm stuck with him.)
So, you might ask, how in the world can you schedule if you can't schedule? Again with the iterative development: you'll be releasing on a regular basis anyway. If you have a drop-dead ship date, quick iterations mean you'll always have something stable to fall back on. Prioritize the tasks that you give your programmers, and they'll get the important stuff (and the low-hanging fruit) done. The better your programming team, the more features you'll have at ship. And because they released new builds all the time, it'll also be a robust game.
Labels:
agile,
management
Wednesday, June 4, 2008
Agile Development
I'm an agile developer, but I don't use XP or Scrum. "Agile" is the umbrella term that includes XP, Scrum, and a host of other practices. "Agile Development" isn't a methodology unto itself.
To me, the heart of Agile is adapting to change: being flexible, being able to change quickly. Citing "continuous attention to technical excellence and good design" as a principle sounds disingenuous to me. That's not what separates Agile methods from other approaches. Likewise, citing Simplicity isn't very useful unless you make it clear what you think Complexity is. Yeah, messy code is bad. I'm not sure who you're going to find that's going to disagree with that.
The key observation that leads developers to agile methods is that, as programmers, we learn much more about a problem and its solution by solving it than by staring at it. Agile is the opposite of Big Design Up Front. Planning is a good exercise, and I think any competent dev team has a plan -- but when they learn more about what they're building, they'll be more willing to change that plan (and come up with better changes to boot). The core argument here is that solving a problem produces information about a problem faster and more effectively than planning alone.
There are three major practices central to agile methods. They are iterative development, top-down development, and flexible code.
Iterative development means making frequent releases. If you want to adapt to change, then you need to adapt, to change. Grossly, there are two approaches here: (1) build the whole thing, then stand back and see what you need to do differently, or (2) build some of it, then stand back and assess what your next step is. Iterative development breaks up a big project into small chunks so that you can get something finished -- to make progress -- before you go gallivanting off in another direction.
An agile developer can shuffle priorities around at each iteration. Quick iterations also mean more visibility up the management chain; they know where you are because you released something recently that shows where you are. Iterations also give your customers a chance to give feedback before you've gone "too far," and they provide a chance to rein in a wayward developer who has gone too long without contributing to an iteration.
I think most of the benefit of frequent iterations is incorporating feedback. To me, a big up-front design that calls for frequent releases seems squirrelly. Why are you releasing if you're not going to listen to feedback? If you're using a waterfall model, then your interim releases are going to be full of bugs, untested, and probably won't have a cohesive set of useful features.
I like planning using something small and physical; I haven't been too happy with any of the automated tools I've seen. I write down features on index cards, and sort them by priority. Every time someone completes a task, they go to the table-o-index-cards to grab their next task. (Usually they know ahead of time which set of cards are going to be theirs.) When too many features show up and/or developing an important feature takes too long, it is real fucking easy to see what tasks fall off at the back end.
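The index-card board above is really just a priority-sorted list with a visible cut line. A toy sketch in Python (the feature names and capacity number are invented for the example, not from my actual project):

```python
# Hypothetical sketch of the index-card board as data: cards sorted by
# priority, with whatever doesn't fit the iteration falling off the back.

features = [
    ("drawing", 1),
    ("save/load", 2),
    ("color picker", 3),
    ("animated brushes", 9),  # nice-to-have
]

def plan_iteration(cards, capacity):
    """Take the highest-priority cards; the rest fall off the back end."""
    ordered = sorted(cards, key=lambda card: card[1])
    return ordered[:capacity], ordered[capacity:]

taken, dropped = plan_iteration(features, capacity=3)
```

The physical cards do the same thing, except everyone on the team can see the cut line without opening a tool.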
Top-down development is making something simple that runs and slowly adding new features to it. I think one of the great bugaboos in software development is building things You Ain't Gonna Need, and top-down development -- YAGNI in action -- is its cure.
In games, it's fun to get something working quickly. Plus, something working quickly gives your team more time to figure out what works and what doesn't, to see models and animations in action, and to start playtesting and thinking about balance.
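To make the top-down idea concrete, here's a toy sketch in Python. The "sprite editor" class is invented for illustration -- it's not my actual tool -- but it shows the shape: iteration one is the smallest thing that runs, and each later feature is bolted on only once the earlier ones work:

```python
# Hypothetical sketch of top-down development: start with the smallest
# thing that runs, then grow it feature by feature.

class SpriteEditor:
    """Iteration 1: just a canvas you can draw on. Nothing speculative."""

    def __init__(self, width, height, background=0):
        # pixels[y][x] holds a palette index
        self.pixels = [[background] * width for _ in range(height)]

    def draw(self, x, y, color):
        self.pixels[y][x] = color

    # Iteration 2, added only after drawing worked: the eyedropper.
    def pick(self, x, y):
        return self.pixels[y][x]

editor = SpriteEditor(4, 4)
editor.draw(1, 2, color=7)
```

No undo stack, no layers, no plugin system -- those get written when (and if) the project actually needs them.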
Adding new features can be difficult if your code is a mess. Working around that is my third pillar today:
Flexible Code. It's hard for me to explain what this is, because inflexible code just looks so alien to me that it makes my brain hurt. The bad practices that hurt the worst here are hidden side-effects, large functions, blob classes, opaque data structures, and automation.
Automation makes a lot of things easier, but it's horribly inflexible. By "automation" I mean code structures that encapsulate some set of features into a magic little opaque blob. (You're not detecting any bias here, are you?) "Automated" code magically sticks items into queues, deletes items, changes variables, etc etc, usually in ways that someone debugging code would never know about. "Automation" often makes systems significantly easier to build, once you've learned your way around the code. But bugs can be a nightmare to remove, and god forbid someone new has to learn the system. Or its original developer leaves.
Ultimately, the goal of agile methods is to increase productivity. The two main ways this happens are by only writing code that you actually need -- by prioritizing up front and coding top-down -- and by making it easy to add new features. Some of the most frustrating experiences I've had while coding have been when I've been faced with a system that violates some of these rules. 3rd-party systems are often the worst, because (for contractual reasons) you can't see their source code, so the whole thing becomes magic. How does it work? Who knows! Plus they never bothered to document anything! Grrr.
There's tons of resources out there on agile development, but my favorite is the c2 wiki, aka WardsWiki, the firstest Wiki evar. I learn a lot more from reading both sides of an issue and making up my own mind. The C2 wiki is a great place for that. :)