Wednesday, October 31, 2012


I am proud to say that we have a date: the 7th of November for the release of Curiosity - What's Inside the Cube? on both iOS and Android.

I am excited and terrified to see how my server code holds up to thousands of people tapping away at the cube on multiple devices in multiple time zones.

Saturday, October 13, 2012

Changes, pain, process

A big part of games development is the constantly changing design. The idea that you can build a game from a design document given to you on day one and work your way through it, uninterrupted, until you reach feature-complete on some distant day is an illusion, and not one held only by publishers: it is the root cause of overtime and under-appreciation in our industry.

For some reason, publishers still expect development companies to produce precisely what was presented in initial meetings. For some reason, they also think that research and development can be estimated with tiny margins for error, and that the black-swan effect will never hit them.

This is one of the reasons companies like Rockstar are great. They don't have milestones built around arbitrary features laid out in some design document, features that define whether you get your milestone payment. They just do what needs to be done to make a game great, until the game is great enough to sell, and then they finish it and sell it.

This is what makes working for a publisher so hard and painful. You end up balancing development tasks against milestone tasks that you know are completely unnecessary, but you have to do them, otherwise you won't get the money you need to live.

Changes are a necessity for good game development. They're a necessity for non-game development too, but from the outside looking in, it appears that other industries' idea of change includes only small changes: changes that don't have far-reaching effects.

Why is this? I have been told many times over the years that "games developers could do it right if only they employed proper project management techniques", but I'm not convinced. The reason comes down to two differences between games development and traditional software engineering.

Reason 1

Games are actually harder to make. They are. Looking across every codebase I've worked on over my whole software development history, the ones that are the most complex, have the most bugs, and have the highest number of special cases have always been the games. I've worked on tools coding, server-side coding for gameplay, robot control software, and other small things, but even the smallest game becomes a complex beast of a codebase. Why? Because games are naturally interconnected. Every game is a web of events and reactions. Every game is a combination of input leading to many results that are combined into many other results, and only then do you get an output, in the form of a complex rendering of an internal game state.

Games are hard to write because there is so much going on and everything has a side effect.
I learned a while back that Bloomberg has a 100 MLOC codebase. That doesn't surprise me any more. They have a lot of code in a very large number of interconnected modules, but they don't have any single project that uses 10 MLOC. That means they can go into any of their hundreds of projects and make changes without fear of causing a cascade of changes just to support one new feature. It's not a property inherent to good software engineering, but of not having to interact with multiple modules all running proprietary binary interfaces in the same process.

Games are just harder to write, and even harder to modify once they are written.

Reason 2

New must-have features that massively change the game are introduced in the last days of the project.
This isn't a lie. It happens. The last days of any game are fraught with danger, as the balancing and trading of just one last feature against the possibility of a submission-failing bug causes the studio to ramp up the overtime and bring out the big guns: empty promises of bonuses and of the chance to work on future projects, projects that have been sitting on a back burner just waiting for the chance, for a moment of downtime.

These moments never come for many, because once a project is completed the development company cannot expect any more funding from its publisher; contracts usually contain no extra money to pay for second submissions or any rework required to fix bugs. Once any small excess has been eaten up by finally getting the game through submission, the company is out of money and has to look at pushing its workforce into more work, or worse, out the door.

So how do we fix this? Can we?

I think we can, but we have to teach everyone we come into contact with a bit of the Rockstar mentality. Sure, they force their workers to do some horrific overtime, but having been there, I wouldn't say it was at all necessary; I think it's just a culture thing. What they do right is still right, and what they do right is not deciding what is going to be in a game until they've almost finished developing it. Once they're in their last few months, they look at what they have and take that as their base game. They look to cut what isn't great, look to polish what could be, and implement any last features that might be awesome. That's it. It's a simple formula, and it's worked for years.

Make a game until you only have enough money for a few more months of development, then polish and debug the shit out of that mud-covered diamond.

Friday, October 12, 2012

Working in multiple languages.

When I first started out in programming there was a feeling that I didn't need to know any language other than C++. As the years went by I picked up Python and thought that was all I needed to round out the languages I knew. Over the last couple of years, though, I've found it's become more and more the case that you really can't get by without dabbling in many languages.

The case in point is web development. I've found that there are so many languages involved that it makes no sense to try to develop a game, or anything else, without using several of them. You have to use HTML and JavaScript and PHP, and C or Python, if you want to produce something interesting. Some of those languages could be replaced: you could switch out Python for Perl or Ruby; you could switch out C for C# or Java if you only needed a little bit of a boost in your server-side coding; you could even switch out JavaScript if you allow for Flash. But the point remains:
If you're doing web development, you're going to need more than one language.

So how does this affect how I think about games development?

Invest in scripting languages.
Don't invest in just one language. Try out a few, and use them whenever they are the best fit for a problem.
Don't shy away from low-level languages; use them when appropriate. Don't assume that two languages are enough, either: stick some medium-level language in there too, such as Java or C#. I won't ever enjoy working in those languages, but many gameplay programmers and tool developers love them for quick UI integration. They also have the benefit of being very fast to compile. Where did you find the most coupling in your C++? Is it gameplay code, by any chance? The ABI way of handling linking fixes those horrible link times, and even helps compile times, as there is less parsing going on in the first place.

Anyway. My point.

One language is not enough. The only programmer that thinks that one programming language is enough is a programmer that only knows one language.

Tuesday, April 24, 2012

30 days and 30 nights of Vim

or, how I learned to stop worrying and love the registers

My trial period for ViEmu ran out yesterday. I have been using it in my day and night coding for the last 30 days, hoping that having it installed, and trying my best not to turn it off, would force me into learning how to use it. I'd say I got the most out of my 30-day trial period, given that I've been in crunch the whole time: I reckon I've easily amassed over 300 hours working with the plug-in, or using Vim directly in other contexts. I have gone from really not getting it at all, to installing Vim itself and making it the default program to read, write, and edit text files of all varieties, from logs to XML to batch files to source.

So, 30 days later, 30 days from installing the plugin, what are my thoughts? Why did I change from "Visual Studio is just fine as it is" to "Oh my, it feels so clunky" ?

There were downsides, and most of them came from ViEmu being inside Visual Studio. I don't miss Ctrl+W, c to close windows: that was quite infuriating, and in the case of the find-in-files and compilation windows it just didn't work, just like searching in those windows didn't work either. But that's only a shortcoming of me not knowing how to push past ViEmu's unbindings and set up a mapping. That much I've learned: the reason you're not productive in vi is never vi's fault; it's that you didn't configure it for your purpose. vi is naturally configured for all purposes at once, and once you get your head around the fact that this "programming" tool is also a diffing tool, a writing tool, and a reading tool, you begin to see why: even with lots of special key combinations, there's only so much you can fit on one keyboard. In short, vi is set up for everyone by default; you need to make it your own.

Post-ViEmu, I now find myself opening a file and hitting / to find things; then, after realising that's not available, I go back to Ctrl+I, and when I find the first occurrence, I find myself pressing n for the next. Okay, so I get it: the muscle memory has already soaked in for the basic stuff. I now find Visual Studio slow to manoeuvre around. Finding forwards, finding backwards, moving by counts of words: all those things I'm missing already. But the main thing when it comes to editing existing code is operations on text, and the most basic one I miss is the one mentioned by the developer of ViEmu: duplicating a line ready to make changes.

What I'm also missing already is how fast the macros were. Visual Studio macros used to be fast; then they seem to have stuck in an entire Visual Basic backend the size of a small house, and creating a quick macro takes ages, though not quite as long as it seems to take to play it back. I'm not sure how they took a system that worked and broke it so hard, but I've not seen any point in using Visual Studio macros since Visual C++ 6.0. They're just too slow. Anyway, in vi, macros are amazing. They can be created and reused incredibly fast, and they can be edited too, without switching out to a VB editor; in fact, they can be edited in the file you're working on. For those of you not in the know, I'd suggest that macros and registers are actually the main feature you want to get your head around. Once you have tried and fallen for this mind-blowingly simple system, with its many emergent properties, you'll never want to go back. For myself, it was at least two weeks into my learning before it was pointed out to me that macros and registers were the same thing. That was a long time to wait to learn about such an amazing feature.

But I'm sure there are more things that are amazing. I've learned that I need to know about ctags, but I still don't. I've learned that I should learn how to read the help, but I haven't. I know that I need to start building up my own .vimrc, and I still haven't done it properly, other than some basics like the font and my preferred window size.

Take the time to do it. Learn vi: not learn how to open a file and press i, but actually learn vi. You won't regret it unless you give up. It took me, a long-in-the-tooth modeless-editor fan, two weeks to get good enough with it to say it was bearable, and four to say I can't live without it.

It's on every machine I work with now, and it's not coming off.

Sunday, April 08, 2012

Remember time when doing analysis of space

Some elements of development have time and space tied to each other in such a literal way that it's hard to think of a reason not to worry about both at the same time.

Take asset compression.

For physical media, there is a certain amount of compression required in order to fit your game on a disc. Without any compression at all, many games would either take up multiple discs, or just not have much content. With compression, load times go down and processor usage goes up. But, how much compression is wanted? There are some extremely high compression ratio algorithms around that would allow some multiple DVD titles to fit on one disc, but would it be worth investing the time and effort in them?

The only way to find out is to look at the data; in this case, the time it takes to load an asset from disc versus the time it takes to decompress it. If the time to decompress is less than the time to read, then it's normally safe to assume you can try a more complex compression algorithm, but only in the case where you have the compute resources to do the decompression.
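That trade-off can be sketched with a back-of-the-envelope model. All the numbers below are hypothetical, and the model assumes reading and decoding overlap, so whichever of the two is slower dominates the load time:

```python
# A back-of-the-envelope model for choosing a compression codec.
# All figures are made up for illustration; measure on your target hardware.

def effective_load_seconds(asset_mb, ratio, disc_mb_s, decode_mb_s):
    """Time to get one asset into memory when reading and decoding
    overlap: the slower of the two stages dominates."""
    compressed_mb = asset_mb / ratio
    read_t = compressed_mb / disc_mb_s      # pulling compressed bytes off disc
    decode_t = compressed_mb / decode_mb_s  # decoder consuming those bytes
    return max(read_t, decode_t)

# 100 MB of assets, a 10 MB/s optical drive.
raw   = effective_load_seconds(100, 1.0, 10, float("inf"))  # no compression
light = effective_load_seconds(100, 2.0, 10, 80)  # fast codec, 2:1 ratio
heavy = effective_load_seconds(100, 4.0, 10, 15)  # slow codec, 4:1 ratio

print(raw, light, heavy)  # 10.0 5.0 2.5
```

With these made-up figures the heavier codec still wins, because even its slower decode stays under the read time; the moment decode time exceeds read time, the extra ratio stops paying for itself.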

Imagine a game where the level is loaded, and play commences. In this environment, the likelihood that you would have a lot of compute resources available during asset loading is very high indeed. Loading screens cover the fact that the CPUs/GPUs are being fully utilised to mitigate disc throughput limits.

Imagine a free-roaming game where, once you're in the world, there are no loading screens. In this environment, the likelihood of good compute resources going spare is low, so decompression algorithms have to be lightweight, and assets need to be built so that streaming content is simple and fast too.

Always consider how the data is going to be used, and also what state the system is in when it is going to use it. Testing your technologies in isolation is a sure fire way to give you a horrible crunch period where you try to stick it all together at the end.

Sunday, March 25, 2012

Test Driven Development == Granting a Wish

In test driven development, if you're doing it right, then you're likely to be writing code that is wrong.

I don't mean that in a bad way, quite the contrary, but it's one of the oddities of test driven development that makes some people run away, fast.

For example, when you're starting to write tests for some system, the general solution for the first version of the code is to write a function that just returns the value the assert was expecting:

test: assert( Foo( 5,3 ) == 1 )
code: Foo() { return 1 }

...most people seeing this as a first step laugh, or just comment on how dumb it is. But this element of _the simplest thing that will work_ is often overlooked. It's actually one of the most important things to get hammered into your head as early as possible.

The tests are being tested too

TDD makes you evaluate your tests. This is really important because most software design methods take a problem and design a solution for it. TDD takes an unfinished solution (empty project) and refines it until it works for all tests you can think of.

Which brings me to my point. How many of you have granted a wish?

In Dungeons and Dragons, the wish spell is one of the most annoying and powerful spells in the whole game. It can do anything. Literally. But how the DM grants a wish is up to him. The player has to be very careful about how they ask for what they want, otherwise a completely innocuous request can go awry, and could even commit genocide.

And so he said, "I wish that all traces of the disease were eradicated from the elves," at which point I told them that all the elves caught fire and died.

See, tests in TDD are just like wish spells. You make a test, and then you go and do the easiest thing possible. Sometimes it's easier to just burn all the elves / return 1. Sometimes the test requires better coding. It's the act of refining in TDD that gives it its strength. You can get inspiration for the next test from what you did to make the last one pass, and every time you think of something that makes the test pass, you can think of a new test that breaks it.

That's how we get to a final working solution. We keep on adding more wishes, adding more criteria, so that eventually you find a way to cure the disease while keeping your elves raw.
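The wish-granting loop can be sketched in a few lines. Suppose, purely for illustration, that Foo is meant to be integer division (the post never says what Foo does; the point is the refinement, not the function):

```python
# Wish 1: the test. The code, playing lazy genie, grants it the easiest way.
def foo(a, b):
    return 1           # burns all the elves: passes the test, solves nothing

assert foo(5, 3) == 1  # the first wish is granted

# Wish 2: a second test breaks the lazy genie and forces real behaviour.
def foo(a, b):
    return a // b      # integer division satisfies both wishes at once

assert foo(5, 3) == 1
assert foo(10, 2) == 5
```

Each new assert is another clause on the wish; the implementation only gets as honest as the wishes force it to be.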

Wednesday, March 21, 2012

The death of the high-street game store

I was told on the train back to Aberystwyth one time, by a student of economics, that during this time of recession at least I wouldn't be hit as hard. He said that, though luxury goods sales dwindle, entertainment-related products and services historically maintain a healthy trade.

But seeing this happening to Game, I wonder how true this can be.

It's the tipping point for my argument. No-one in games is getting anything like the money we would have seen before the recession, and yet I'm told to my face that at least my profession won't be affected.

So, what's going on?

I suspect that the economics student is right. I suspect that the market for entertainment is just as strong as it's always been; what is much more worrying is that the market I am working in is not all of entertainment, but just one corner of it. And, because of the recession, anyone talking about a dwindling market could well be blaming the slow death of their profits on something that doesn't actually have any impact.

Maybe we're experiencing a recession at the same time as the games industry is experiencing a change of course. The way to make money is changing, and because the recession looms, it's not obvious that we're dying by standing still.

The high-street retailer model is being crushed from two sides: boxed games are a dying breed, but so is buying games in a shop.

They were definitely going to die.

I feel that a lot of games developers see the reason they're not being paid well in the shadows of other narratives. Some see Steam or Origin as a way to save them from the pirates they assume to be costing them their wages. Others see it as the second-hand game market. Neither side seems ready to admit that people are actually buying more games than ever before. The only real problem seems to be people wanting to make AAA games when the majority doesn't value those games any higher than some grumpy avians.

Entertainment makes money, otherwise Apple wouldn't be the largest company in the world right now. The games industry needs to recognise that being stubborn and doing it the old way, making block-buster hits, though it brings in millions, might not be enough any more. When you bet millions to make millions, while others are tossing peanuts and making a good percentage of what you're expecting back... well, you've got to ask yourself: are you doing it wrong?

Thursday, February 23, 2012

The good programmer

There will always be a need for hardcore programmers

This video has been whirring around in my head ever since I posted about it. It's titled Inventing on Principle; it takes the principle that creators need an immediate connection to their creation and runs with it, saying that when that's not true, you should invent a solution. Reduced to developer speak, he says it's necessary to reduce iteration time down to nothing, so that immediate feedback allows moments of inspiration.

I think it's a great talk, go watch it if you haven't already. It's inspiring.

But the thing that's just stung me is the idea that although it's great to work with these tools that allow less experienced programmers to create wonderful things, and let non-programmers get the feedback they need, there's still always going to be a need for highly experienced, or _hardcore_, programmers.

The reason why is hidden in the talk.

"... and to a large extent the people we consider to be skilled software engineers are just those people that are really good at playing computer."

Now, let that mull. Let that fester. Sit with the idea that a good computer programmer, a good software engineer, someone who's good at writing code or debugging stuff, is in effect someone who's good at just knowing what a computer will do when it runs the code.

Knowing: not figuring it out step by step, not referencing online documentation to find out what function foobar() does in this context, but just knowing.

Knowing is a whole different world to being-able-to-figure-it-out.

Although his tools are wonderful, there are areas of development where it's very hard to make a tool that works that way. If you work in games, you'll probably come across some area or another where side effects cannot be seen in real time, such as anything relying on player input, or some other completely random process like network connectivity.

So there will always be a place for the programmer who can play computer, just like there will always be a place for the architect who can see the materials and stresses in his drawings, or the psychologist who can feel the response of their words without needing to test them on their patient.

There doesn't need to be a segregation of those who create from those who play computer, but as it turns out, there usually is, even if it's only in the sense that an individual is either in creation mode or debug mode. In my understanding, the best games developers are the ones who can combine the power of internal simulation for both the design and implementation stages.

But this post was about hardcore programmers, and how we will always need them. I think I've already made a convincing case that there will always be something horrible out there that can't have instant feedback (like checking whether an e-mail server can support >4GB files), or a bug that has a 0.001 probability of happening. In those cases we still need our hardcore programmers, but there's another area where the truly hardcore shine; it's the same place the designers shine when they're given their zero-second iteration time. Hardcore programmers, the programmers that know, shine in new frontiers. They think of the side effects that others miss.

Take as an example a good programmer who chooses one container over another because its order of complexity is good for the task at hand. This is a good programmer because they consider what they know about the algorithm and about the data, and apply a solution based on knowledge of both. Then take as a counter-example a hardcore programmer who chooses not to use an existing container at all, and instead uses what he knows of the data to think through the requests and the assignments, and generates an on-the-fly container that is nothing much more than a lookup residing in the same cache line as the data that will be needed if the initial lookup indicates it will be needed.

The difference is that the good programmer is good: he gets the job done, doesn't write bad code, and his code is readable and understandably good. People can see that he's a good programmer because his code conforms to best-practice guidelines and is easy to read. The hardcore programmer, by contrast, sees the machine bare its guts to him. He notices latency on instructions at such an innate level that he is always thinking about the critical path through a function, watching for unnecessary branches, estimating the probability of the ones taken, and always thinking about where the data is and what values it has, as much as the good programmer is thinking about quantity and big-O notation.

The difference can be seen as one of depth of understanding, but more often than not it manifests as a difference in approachability. The hardcore programmer doesn't seem to be a good programmer on initial inspection by those who aren't clued up, but at least he'll always be required. The hardcore programmer usually has to prove himself by results rather than by how he got there. The same is true of the hardcore architect. Other architects can come along and change "just one thing" and break the balance or strength of a building, because they don't see the scope of the change that the hardcore architect can see easily. Take this tragic example:
And this is happening all over your code when you hire good programmers and hold back the hardcore programmers because they're not following guidelines. Just because people aren't dying yet doesn't mean everything is okay.