Re-posted from the Atlastory blog.
In Nassim Taleb’s new book “Antifragile,” there’s an interesting segment about how an entire system can be antifragile (benefiting from variability / disorder / stressors) precisely because its individual parts remain fragile (harmed by variability). A few examples:
The engineer and historian of engineering Henry Petroski presents a very elegant point. Had the Titanic not had that famous accident, as fatal as it was, we would have kept building larger and larger ocean liners and the next disaster would have been even more tragic. So the people who perished were sacrificed for the greater good; they unarguably saved more lives than were lost. . . . Every plane crash brings us closer to safety, improves the system, and makes the next flight safer.
Thankfully, the errors we encounter while developing Atlastory don’t involve anyone dying. But the same principle applies — every bug, problem, server crash, chokepoint, or design flaw we encounter leads to a better system. We want to run into problems, because once we know about them we can fix them — ultimately making the user experience better as a result.
“Some businesses love their own mistakes,” Taleb continues. “Reinsurance companies, who focus on insuring catastrophic risks . . . manage to do well after a calamity . . . All they need is to keep their mistakes small enough so they can survive them.”
The more you benefit from low-downside mistakes, the more “antifragile” your business is. I see this as a function of both the industry you’re in and the internal culture of the company.
If everyday work and life are viewed as a science experiment (the cycle of observe > guess > test > interpret), then any screw-ups or failures are a good thing in the end. You know something’s wrong, and you can work on fixing it. Taleb again: “…every attempt becomes more valuable, more like an expense than an error. And of course you make discoveries along the way.”
Continual improvement is everyday life in software development, but it is only just catching on in personal development.
One thought on “Mistakes = information”
I know the Titanic disaster spawned lots of improvements in safety regulations, but the “bigger and bigger” part seems poorly researched. Modern ocean liners have twice the maximum capacity of the Titanic.