Not smarter or cleaner, just faster.
The faster you go, the better you get: experience accumulates, decisions find their outcomes sooner, and you learn more about why and how things happen.
The more work you do, the better you get at doing it.
The faster you get, the more work you can do, so you get better quicker too, which in turn makes you faster again. Concentrating on being productive is something I tried to promote at Broadsword, but we were already scuppered by the time I really started pushing it, so it fell a bit flat. What have I learned since? Anyone who doesn't embrace speed over cautiousness is a large lumbering beast that takes one hit to kill. Anyone who does embrace speed seems to do well, even in a market as dangerously volatile as games development.
Faster games are easier to debug.
When your game runs faster, it's doing less, and doing less means there's less code for bugs to exist in. But doing less is what coders like to do anyway, so why isn't there less code already?
Special cases make a game what it is.
My opinion is that all games are generic games plus special cases. Whether that strikes you as obvious or absurd, it spells out what's really in our code bases: exceptions to the norm. A game that does anything other than sit there blankly has the special case that it takes input. The moment that happens, you have an exception that needs to be handled by some code, and therein lies the opportunity for bugs. The work you do to implement your exceptions is your real productivity metric. If one programmer builds a magnificent framework that lets him add features at a rate of one per minute, while another goes without and spends ten minutes per feature, we might assume the first programmer was more productive. This false conclusion is perpetuated because we forget the cost of building the framework, the cost of debugging it (a framework is one more place for bugs to hide), and the time taken for another programmer to learn the framework before adding features themselves.
Feature creep never goes where you expect it.
Another problem with a framework for handling exceptions is that it was probably built for the list of problems known at the time it was conceived, plus a few the programmer thought of while developing it, but not for the unexpected exceptions. Those unexpected exceptions are just as hard to solve with code as they were before, only now the solution has to work alongside a lot of framework code, rather than the smaller, to-the-point exception code the second programmer's approach would have produced.
Code less to cause less code.
If you code less, it's easier to refactor. If you find you need to refactor, or indeed that refactoring would solve some new exceptions without adding more code, then you've already won. But if you architect from the start, you invite performance loss in both your team and your game. If you prepare for the worst and write a brilliant all-encompassing architecture, you're also in danger of hiding your bottlenecks. Imagine you had a load-balancing system that automatically reduced asset quality, AI processing, and physics model accuracy, all on the fly. That's a great achievement, but you've probably just hidden all the stupid behind a barrier of guaranteed consistency. How will you know that AreaX is really badly put together if you can't tell that you're not seeing the highest-level graphics because they never came online?
Fast games make smarter coders.
Wisdom is the product of failure, observation, and reason. You need to do things and fail in order to get wise. Good developers are wise from effort, and to expend effort there must have been an opportunity to develop. The more time you get to try things, the wiser you can get. The faster your code compiles, runs, and gets to the bit you're testing (in other words, your iteration time), the more you can learn and the more you can do.
Wasted cycles are not wasted if they're spare.
In optimization there is the concept of wasted cycles: CPU or GPU cycles spent waiting for data to load, or waiting for some comparison operation to decide whether to branch. This is real waste, because the processor is trying to do something and can't, because it's stalled. There is another kind of wasted cycle, though: the cycles spent waiting for the VSync or some other real-time event. These are not wasted; they're spare.
Why the distinction? Surely cycles that aren't being productive are a waste of resources either way. The result is the same for the consumer: a game with the same quality graphics, running at the same fps, with the same features.
The reason lies in whether they are recoverable. If your code runs faster than your frame rate, you can run unit tests of gameplay in accelerated time, add logging without affecting frame rate, and try out mediocre-quality implementations of new ideas to see how they would affect the game. A game that normally finishes well within its frame is also easier to run more than one instance of at once, and easier to run on battery-powered devices like laptops and phones.
Battery use = efficiency of code.
Limited battery life affects us, but so do download size and computer spec. If you ran a web games portal, would you prefer the game that runs at 60fps on a low-end machine with only a 1MB download, or the 2MB game that only runs well on good hardware? It's obvious: you choose the one that gets players looking at your ads, so the smaller one that costs less in bandwidth and runs on more people's machines.
Time is money.
Even if web portals aren't your demographic, battery life on phones isn't important to you, you don't care whether the game can run or be developed on your mum's laptop from PCWorld, and you have no interest in logging or adding features, the simple fact remains: if you plan to be fast, you'll get more done with your time.
Fitter, happier, more productive.