Runtime polymorphism normally means runtime binding of code through virtual tables or similar mechanisms, which costs you a double dereference at best. We are aware of the extra space taken up by these vtable pointers at the beginning of all our polymorphic structures, and of the extra time spent double dereferencing the function pointer on virtual calls. We also tend to think it impossible for a class to exhibit runtime binding without these costs. In games we normally use virtuals to specialise a class into a particular role, such as a person being a player character, a normal badguy, or maybe just an innocent bystander. How they get around, their mode of transport, even how they pathfind: it's different for each one. The player uses controller input most of the time, and is sometimes driven by cut-scenes; for a badguy, it's his template or setup from the level; for an innocent bystander it might be a canned animation for the most part, or some flocking algorithm. The important thing to remember is that whenever anything does a movement tick, it normally goes through a process of hunting down the virtual table by reading the vtable pointer at the beginning of the child class of the "person", then offsetting into the table to find the function pointer for "TickMovement()".
I say normally because, in my experience, that is how people do it.
Existence based processing handles it differently. For virtuals that define a behaviour, as in the case above, the table your person belongs to determines how they move. Literally, the table the person is in defines what processing will be done on them to get them to move. If the person has their motion data in the pedestrian table, they get handled like a pedestrian, looking for a random path somewhere. If they're in the badguy table, they use their start points and their patrol routes to choose their motion. If they're in the player table (which might not be just one entity; think about network games), then they'll have some input to process before getting on with moving too. But in each of these cases, there is no need for a double-dereferenced vtable pointer.
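As a minimal sketch of the idea (the table names, row layouts, and `tick_movement` function here are my own illustration, not from any particular engine): each table gets processed by a plain function, so the "type" of a person is simply which table their movement row lives in, and the branch on type happens once per table rather than once per entity through a vtable.

```python
import random

# Hypothetical sketch of table-based movement processing: a person's
# "type" is the table their row lives in, so there is no per-object
# vtable pointer and no indirect call per entity per tick.

pedestrians = []  # rows of [entity_id, x, y], wandering randomly
badguys = []      # rows of [entity_id, x, y, patrol_route, waypoint_index]
players = []      # rows of [entity_id, x, y, input_x, input_y]

def tick_pedestrians(table):
    for row in table:
        # Wander: one random unit step per axis.
        row[1] += random.choice((-1, 0, 1))
        row[2] += random.choice((-1, 0, 1))

def tick_badguys(table):
    for row in table:
        _eid, x, y, route, i = row
        tx, ty = route[i]
        if (x, y) == (tx, ty):
            row[4] = (i + 1) % len(route)   # reached waypoint, advance
        else:
            row[1] += (tx > x) - (tx < x)   # step one unit toward it
            row[2] += (ty > y) - (ty < y)

def tick_players(table):
    for row in table:
        row[1] += row[3]   # apply controller input directly
        row[2] += row[4]

def tick_movement():
    # One direct call per table; no per-entity virtual dispatch.
    tick_pedestrians(pedestrians)
    tick_badguys(badguys)
    tick_players(players)
```

Note that nothing here ever asks an entity what type it is; membership in a table already answers that question.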
That's one benefit. The other benefit is using existence to provide an alternative to bools.
Summarising compression technology: encoding probability. All known compression technologies work on the simple premise that if something is likely, you use fewer bits to describe that it happened. Existence based processing offers us a little of that, but for our CPU time. If most of the people in the world are at full health, only process those who have non-full health. This is simple: whenever someone gets hurt, insert a row in the "am hurt" table, and process that table every time you would have processed health. If that person dies, then you could convert them into a deadperson (who also doesn't need an "am hurt" entry, as all dead people have zero health, presumably). So, for things that are sometimes not default, you can reduce the amount of time spent on their processing and reduce their memory usage.
- anim: in a modal animation for some task, like throwing a grenade, or changing weapon
- anyone: hurt, but regenerating health
- player: under water, losing oxygen or just surfaced and replenishing
- people: heard some gunfire, so actively looking out for suspicious activity
- shops: player interacted with me so keep track of what happened so I don't repeat myself or suddenly forget he sold me a +5 longsword in the ten seconds he's been away.
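The "am hurt" case can be sketched like this (the names `hurt`, `MAX_HEALTH`, and `REGEN_RATE` are illustrative assumptions, not from the text): an entity's presence in the table is what makes it hurt, absence means full health at zero storage cost, and the regeneration tick touches nobody when everyone is healthy.

```python
# Hypothetical sketch of an "am hurt" table: only entities with
# non-full health have a row, so the regen tick does no work at all
# in the common case where everyone is at full health.

MAX_HEALTH = 100
REGEN_RATE = 5

hurt = {}  # entity_id -> current health; absence means full health

def damage(eid, amount):
    health = hurt.get(eid, MAX_HEALTH) - amount
    hurt[eid] = health  # inserting the row is what makes the entity "hurt"
    return health

def tick_regen():
    # Process only the hurt table; fully healed rows are deleted,
    # which is how an entity stops being "hurt".
    for eid in list(hurt):
        hurt[eid] = min(MAX_HEALTH, hurt[eid] + REGEN_RATE)
        if hurt[eid] == MAX_HEALTH:
            del hurt[eid]

def health_of(eid):
    return hurt.get(eid, MAX_HEALTH)  # default is full health, stored nowhere
```

The same pattern fits each bullet above: a "under water" table, a "suspicious" table, a "remembers the player" table, each processed only while its rows exist.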
Now, let me reiterate: this gives you runtime changeable polymorphism. It probably costs you a pointer back to the parent class, which is equivalent to the vtable pointer cost, but it saves you the double dereference when accessing the polymorphic qualities. Also, you get to change your "type" at runtime quite easily, because your type is defined by your existence in a set / table.
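Changing type at runtime then reduces to moving a row between sets. A minimal sketch, with made-up set names (`bystanders`, `badguys`) and a made-up transition (`turn_hostile`):

```python
# Hypothetical sketch: an entity's type is the set it belongs to,
# so "becoming" a badguy is a cheap move between sets at runtime,
# with no class hierarchy or object reconstruction involved.

bystanders = {10, 11, 12}
badguys = {20}

def turn_hostile(eid):
    # Change of type == change of set membership.
    bystanders.discard(eid)
    badguys.add(eid)

turn_hostile(11)  # entity 11 is now processed by the badguy tick
```

From the next tick onward, entity 11 is patrolled like every other badguy, simply because that is the table it is found in.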