Talking with a friend last night, I completely failed an impromptu stats test. I said that with data distributed normally, the errors accumulate linearly, while with data distributed exponentially, the errors multiply. To which they replied, "You don't mean actual arithmetic multiplication, right?"
Well, I did. Only I wasn't able to pull together a quick demo of why this is so. To make up for that, here's a quick run-through.
Take height in humans. IIRC we have 6 genetic sequences that code for height, where each sequence can read either Tall or Short. A string representation of someone's height could be something like TTTSSS, and an error would flip one of the sequences. Say an error flipped the first sequence from Tall to Short, giving STTSSS, and a second error flipped the second to Short as well, giving SSTSSS.
Each error took a bit of height off the person, and it was the same amount of height each time. I.e. the errors accumulate linearly.
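Here's a minimal sketch of that additive model in Python. The six-letter genotype string comes from the example above; the base height and the centimetres per Tall sequence are made-up numbers just to make the arithmetic visible.

```python
# Additive model: each "T" contributes a fixed amount of height, so
# flipping one letter always changes the outcome by the same amount.
# The 6-sequence model and the numbers below are illustrative assumptions.

BASE_HEIGHT_CM = 150   # height with all sequences reading Short (assumed)
PER_TALL_CM = 5        # height added by each Tall sequence (assumed)

def height(genotype: str) -> float:
    """Height implied by a string like 'TTTSSS'."""
    return BASE_HEIGHT_CM + PER_TALL_CM * genotype.count("T")

original = "TTTSSS"
one_error = "STTSSS"   # first sequence flipped T -> S
two_errors = "SSTSSS"  # second sequence also flipped

for g in (original, one_error, two_errors):
    print(g, height(g))
# Prints 165, 160, 155: each flip removes exactly PER_TALL_CM,
# no matter how many errors have already happened.
```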
Now take wealth in humans. We have an initial amount of wealth and a series of investment opportunities. To simplify, let's assume that a Failed investment breaks even, and a Successful investment grows the current pot by 20%, so the returns compound. Someone's investment history could look like SFSSFF. Say an error flips one failure to success, and a second error flips another failure to success.
Relative to the error-free outcome, the first error adds 20% (1.2 - 1), and the second adds another 24% (1.2^2 - 1.2). Each additional error that flips an investment from failure to success yields an even greater gain. I.e. the errors multiply.
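And here's the same kind of sketch for the multiplicative model, assuming the 20% return compounds on the current pot. The starting wealth of 100 is an arbitrary number for illustration.

```python
# Multiplicative model: a successful investment multiplies the current
# wealth by 1.2, a failed one leaves it unchanged.

RETURN = 1.2

def wealth(history: str, start: float = 100.0) -> float:
    """Final wealth implied by a string like 'SFSSFF'."""
    w = start
    for outcome in history:
        if outcome == "S":
            w *= RETURN
    return w

original = "SFSSFF"
one_error = "SSSSFF"   # first failure flipped to success
two_errors = "SSSSSF"  # second failure also flipped

w0, w1, w2 = (wealth(h) for h in (original, one_error, two_errors))
print(w0, w1, w2)                        # 172.8, 207.36, 248.83...
print((w1 - w0) / w0, (w2 - w1) / w0)    # 0.2, 0.24
# Relative to the error-free outcome, the first flip adds 20% (1.2 - 1)
# and the second adds 24% (1.2**2 - 1.2): each extra error is worth more.
```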