Friday, August 12, 2011

How many machines do you need to develop a multiplayer game?

Assumption: you're working on an internet PC game.
  • As long as you can run multiple instances of your application, then you can do all the network multiplayer testing you need.
This would be fine, as long as you never need to test whether your game grinds to a halt when network traffic stops being 100% reliable (which it is if you use local loopback).
  • You can run the game on two machines; that should be indicative of real network traffic.
Except it's not, as you're testing on a LAN, which is about a thousand times more reliable than any real-world internet connection.
  • You can run the game on two machines, but make sure they are located on different ISPs.
Much better, and would work for almost all cases of network errors such as NAT tunnelling.
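One cheap middle ground between loopback and a real internet link is to inject artificial unreliability into your own send path during testing. The sketch below is hypothetical (the `LossyLink` name and its parameters are mine, not from any real library): it wraps whatever send function your game uses and randomly drops or duplicates packets, so the failure modes a LAN hides still get exercised.

```python
import random

class LossyLink:
    """Wraps a send function and randomly drops or duplicates packets,
    simulating an unreliable internet path on top of a reliable LAN/loopback."""
    def __init__(self, send_fn, loss_rate=0.05, dup_rate=0.01, seed=None):
        self.send_fn = send_fn
        self.loss_rate = loss_rate      # fraction of packets silently dropped
        self.dup_rate = dup_rate        # fraction of packets delivered twice
        self.rng = random.Random(seed)  # seeded, so test runs are repeatable

    def send(self, packet):
        if self.rng.random() < self.loss_rate:
            return                      # dropped: game code must cope, as online
        self.send_fn(packet)
        if self.rng.random() < self.dup_rate:
            self.send_fn(packet)        # duplicate delivery

# usage: route all traffic through the lossy link during testing
delivered = []
link = LossyLink(delivered.append, loss_rate=0.5, seed=42)
for i in range(100):
    link.send(i)
```

Dial `loss_rate` up far beyond realistic values for soak tests; if the game survives 50% loss on loopback, ordinary internet loss is much less frightening.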

But what if you're working on an action game? They normally have match-making, and no dedicated servers any more apart from login and match-making. Most games are actually host-client, but hold more state information in the clients so they can reliably survive a host drop. This P2P network gaming has its own problems.
  • Two machines, one is virtual host, one is client.
Great, what about client vs client interaction?
  • Three machines: one is the virtual host, two are clients. Do most of your testing on a client-vs-client basis, as the host will probably have enough information to be right anyway.
Good, but what about buggy situations where three clients are involved in a dispute (such as when you get shot by two other players, all three of you clients, and no-one knows who shot first and who shot second)?
  • Three or more machines, depending on how many people are involved in each of the logical gameplay elements. (#involved + 1 host == machines needed)
But what about network throughput, connection dropping and net-splits?
  • As many machines as you have decided maximum players in the game, and some funky network-switches/hubs you can pull plugs on.
Which is impossible for MMOs... no-one could get that many machines together, let alone pay all the testers to operate the game.
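The three-client dispute above is worth making concrete. A minimal sketch, with hypothetical names and a deliberately naive rule: the host collects the competing shot events and orders them by game tick, earliest wins. Real games layer lag compensation and rewind buffers on top, but the machine-count rule still holds because every disputing client needs its own box.

```python
def machines_needed(involved):
    """The rule of thumb from the post: one machine per involved player,
    plus one for the (virtual) host."""
    return involved + 1

def resolve_shots(shots):
    """Host-side resolution of a disputed exchange: order shot events by
    game tick, earliest wins. A sketch only, not a real netcode scheme."""
    return sorted(shots, key=lambda s: s["tick"])

# two clients both claim the kill on a third; the host decides the order
shots = [
    {"shooter": "client_b", "target": "client_c", "tick": 1042},
    {"shooter": "client_a", "target": "client_c", "tick": 1041},
]
first = resolve_shots(shots)[0]["shooter"]   # client_a shot first
```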


Well, here's what I think you should probably do:
  • For initial development, three is a must.
  • Once the basic game is done, make sure you have some form of bot, and get as many computers as you can afford.
This works for everything but human error and tunnelling problems. You must have real humans play the game in their homes, so you must run an alpha/beta before release. If you don't, there will be game-breaking issues. This isn't just likely; it's a guarantee on all but the simplest multiplayer games.
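"Some form of bot" can be very dumb and still earn its keep. A minimal soak-test bot, sketched with hypothetical action names (your real bot would drive your actual input/command API): it just feeds a random valid input every tick, and dozens of them running overnight will shake out desyncs no scripted test finds.

```python
import random

def make_bot(seed):
    """A trivial soak-test bot: picks a random valid action every tick.
    Action names here are placeholders for whatever your input API accepts."""
    rng = random.Random(seed)           # seeded so a crash can be replayed
    actions = ["move_forward", "move_back", "turn_left",
               "turn_right", "fire", "idle"]
    def next_action(tick):
        return rng.choice(actions)
    return next_action

bot = make_bot(seed=7)
trace = [bot(t) for t in range(5)]      # feed these into the game loop
```

The seed matters more than the cleverness: when a bot-driven session breaks, you want to replay the exact same input stream.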

Tuesday, August 09, 2011

"first draft of the GPHI doc / wiki"

That was the first line of an email. About what was to be my new approach to writing games. The email was sent on the 17th of April 2006.

That's how long I've been doing data-oriented development without knowing its name, or even having any other people to talk to about it.

GPHI was a new layer in our code base that took the idea of classes and turned it on its head. I found out in 2007 that what I'd invented was quite similar to the Dungeon Siege component model. I took all the components we normally had in our classes, or inherited from, and made them into managers. This meant that we had a manager update loop, not an entity update loop....
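The manager-update idea can be sketched in a few lines (names and shapes are mine, purely illustrative): instead of each entity updating its own components, each manager owns every component of one kind and updates them all in a single tight batch, so the outer loop is over managers, not entities.

```python
class PositionManager:
    """Owns every position component; updates them all in one tight loop
    rather than being called once per entity."""
    def __init__(self):
        self.xs, self.vxs = [], []
    def add(self, x, vx):
        self.xs.append(x)
        self.vxs.append(vx)
        return len(self.xs) - 1       # component handle, not a pointer
    def update(self, dt):
        for i in range(len(self.xs)): # homogeneous batch: cache-friendly
            self.xs[i] += self.vxs[i] * dt

managers = [PositionManager()]        # one manager per component type
h = managers[0].add(x=0.0, vx=2.0)
for manager in managers:              # a manager update loop, not an entity loop
    manager.update(dt=0.5)
```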

Fast-forward a short while, and I went on to work on (but not finish before the company went belly up) a new rendering engine that didn't use a scene graph, but instead stored all the renderables in an array and sorted them by whatever criteria were necessary (material/mesh/shader/shader-constants), even changing per frame.
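The flat-array-plus-sort approach is simple enough to show directly (the field names below are illustrative): the "scene" is just a list of plain records, and the per-frame sort key decides the draw order, so draws sharing state end up adjacent and the criteria can change every frame without touching the data.

```python
# Each renderable is a plain record; the "scene" is just a flat list.
renderables = [
    {"material": 2, "mesh": 7, "shader": 1},
    {"material": 1, "mesh": 3, "shader": 1},
    {"material": 1, "mesh": 2, "shader": 0},
]

def sort_key(r):
    """This frame's criteria: shader first, then material, then mesh, so
    draws with identical state land next to each other (fewer state changes).
    Swap the tuple order next frame if depth or blending matters more."""
    return (r["shader"], r["material"], r["mesh"])

draw_order = sorted(renderables, key=sort_key)
```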

Fast-forward to now... I still haven't been involved in the release of a data-oriented game, but I'm still plugging along, pushing component-oriented development, entity systems, and stream- and transform-oriented approaches to game object processing, and even writing a new language to help teach it, and maybe even be useful and productive with.

It's been a long time since I had that epiphany back in early 2006, and the world has moved on a long way, but I feel quite happy that the road I started on is the one leading to the future.

Monday, August 08, 2011

The reason why, and what it's not.

The language I'm working on (Transformative C, or TC) is designed to force the programmer into doing things in such a way that it becomes hard to make software that is bad for the hardware.

Here is a list of things it provides in order to persuade the programmer to play with it:
  • No memory leaks. Behind-the-scenes memory management without the need for garbage collection.
  • All data manipulation is done through shader-like transforms which take one or more input and output streams (more than one input stream is not normally recommended, though).
  • Duck typing, of a kind: you can provide a struct translation to use a non-conforming struct in a transform.
Here is the list of things it doesn't allow in order to restrict the programmer from breaking the simplicity:
  • No built-in capacity for object-oriented development: no types other than the user-defined streams and the elements provided by the language.
  • No pointers (and therefore, to some extent, no direct memory access); there is no real need for them.
  • No recursive function calls. This is just so it plays well with FPGAs and GPUs.
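TC itself has no published syntax yet, but the transform style these rules describe can be sketched in any language (Python here, with names of my own choosing): a transform is a pure function from an input stream to a new output stream, with no pointers, no shared mutable state, and no recursion.

```python
def scale_positions(positions, factor):
    """A transform in the TC spirit: takes an input stream, emits a fresh
    output stream, and knows nothing about who produced the data or who
    will consume it. No pointers, no in-place mutation, no recursion."""
    return [p * factor for p in positions]

# the caller wires streams to transforms; the transform never sees the wiring
out = scale_positions([1.0, 2.0, 3.0], 2.0)
```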
How does this compare to OpenCL?
I think OpenCL might be something that this language can compile to, but at present, TC is much higher level. OpenCL gives the programmer all they need to make good parallel code run on various parallel architectures, whereas TC restricts you to be sure that whatever you write can be run on anything. This distinction can be summed up by my interpretation of the goals of the two languages:
  • OpenCL was developed to give a unified API to access the GPU and beyond, restricting only where necessary, while providing fine control over how jobs are issued, and to where.
  • Transformative C is being developed to stop the programmer from being asked to think about how transforms are being run, and instead just think about how to write transforms for their data.
Under the hood of OpenCL, developers can shoot themselves in the foot because the data can be organised in any way they see fit. Luckily, most hardware can use striding to get around the norm of programmers dropping things into structs. However, this does not help older hardware that can't stride, or newer hardware designs that might have an advantage if they could guarantee non-sparse reads.

Under the hood of Transformative C, everything will be the simplest array possible with inputs and outputs being more like Structs of Arrays and all data streaming handled by the management layer, which can choose which layout fits the data usage pattern best.
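The Array-of-Structs versus Struct-of-Arrays distinction behind that paragraph is easy to show (field names illustrative): in the AoS layout a transform that only reads positions still drags the unrelated fields through the cache, while in the SoA layout each field is its own dense array.

```python
# Array of Structs: the norm of programmers dropping things into structs.
aos = [{"x": 1.0, "y": 2.0, "hp": 100},
       {"x": 3.0, "y": 4.0, "hp": 50}]

# Struct of Arrays: the layout the management layer would pick behind the
# scenes. A transform that only reads positions walks xs and ys
# contiguously and never loads hp at all: the guaranteed non-sparse read.
soa = {
    "xs": [r["x"] for r in aos],
    "ys": [r["y"] for r in aos],
    "hp": [r["hp"] for r in aos],
}
```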

Wednesday, August 03, 2011

And so I finally publish

Three years after finishing the book, I have finally decided to publish through Kindle and be done with it.


http://www.amazon.co.uk/Theo-ebook/dp/B005FQN4QG/ref=sr_1_1?ie=UTF8&qid=1312419499&sr=8-1

Tuesday, August 02, 2011

Transformative C

The most important part of the project is over. I have given the language a name.

Presently I'm at the stage where I'm still playing with the lexer, parser, and the code generator, trying to make it do anything at all, but there are some things that have become more obvious through the short time it's been in development.

  • a program is made up of transforms that operate on streams of data
  • the selection of, and order in which, the transforms are called is the program
  • the transforms know nothing about what created the data they are operating on
  • the transforms know nothing about what will use the data they create
  • you can reuse code in two main ways:
    1: sub-functions called by transforms
    2: using programs of transforms, given streams as arguments
  • everyone is going to love or hate this language
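The bullets above can be made concrete in a few lines (Python standing in for TC, all names mine): each transform is a pure stream-to-stream function, and the program is literally the ordered list of transforms, with no transform knowing its neighbours.

```python
def double(xs):
    return [x * 2 for x in xs]   # knows nothing about its producer/consumer

def offset(xs):
    return [x + 1 for x in xs]

# "the selection of, and order in which, the transforms are called is the
# program": here the program IS an ordered list of transforms.
program = [double, offset]

def run(program, stream):
    for transform in program:    # transforms only ever see streams
        stream = transform(stream)
    return stream

result = run(program, [1, 2, 3])   # double then offset: [3, 5, 7]
```

Reordering the list is rewriting the program; `run` itself is the "programme of transforms given streams as arguments" from the reuse bullet.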
Once I have a program that prints hello world, I'll start putting up source code snippets.

Monday, August 01, 2011

Patently obvious

Patents are there to keep inventors safe, yet the people they keep safe are hardly ever inventors.

Surely that fact alone must mean it's time for change?

Could there be a workable alternative, or is this just indicative that any invention needs to be backed up with actually doing something about it in order for you to deserve some reward?

I used to believe that patents would protect my ideas (if I were to have any of note) but now I ponder:
should I be allowed to rest on my laurels after having a good idea?

Why should one moment of inspiration award me a lifetime of cash?
Why shouldn't the second person to have the same idea also be allowed the same award?
Why should anyone be rewarded continually for only one act?

So, an idea is born, not of one mind, but of the collection of minds that helped bring that mind to the singular point at which the possibility of its non-existence reaches zero. Whose mind it falls from is but a lottery that rewards unfairly in these new times of infinite communication. The reward for such a feat of inevitable invention is to curse all the others who thought of it and, instead of merely claiming it, pursued a course of engineering: who made it material with effort rather than making it legal with uttered words or written letters.
Being rewarded continually for one act reminds me of the very undemocratic method of ruling known as despotism. The cruel world where you could be happily self-sufficient, except you have to pay a self-proclaimed king because you didn't claim the idea in your name first. But then you would be king, and they the peasants, rightfully objecting.
Are you an invention despot? If you have an idea, is it moral to keep it to yourself? Is it moral not to free ideas, to keep ideas to yourself, to try to earn a living by keeping back the human race?
Are we not, as a race, made grand by our ability to copy, share, remix and create from the ashes and fallen bodies of half-baked ideas and inventions? Made more than mere animals by our unique ability to be transformed by the memes that we hold so dear?

But what about the inventors that did try? The website owners that had their idea taken wholesale (such as Apple vs Instapaper). Well, I say, Apple spent their money to make a product. So did the inventor. Hopefully they will have another good idea and make more money, but if they are living on the cash from one idea, then "all your eggs in one basket" comes to mind. If you're big enough that a company like Apple takes notice and steals from you, then you're probably not doing too badly anyway.

Ideas are cheap, doing the work costs time and effort. If you can't be bothered to invest your time or money in an idea, do you really deserve it?