By the 19th century, parts of a truly complex free-market economy were pretty much institutionalized in the United States. The theoretical justification was largely based on the earlier work of Adam Smith (1723-1790), including The Wealth of Nations (1776). By the mid-20th century, thinkers like Ludwig von Mises had formalized the theory in works like Human Action (1949), defining the roles of money, banking and free trade.
These became the theoretical underpinnings of free-market economics, as every Econ 101 student learns at some point. They largely describe the steady-state operation of a stable economy, and these principles and understandings heavily influence the operation and guidance of the global economies of today.
Every time you or I go to the grocery store to pick up a dozen eggs, we are contributing to the advancement and confirmation of those theories and the many derivative theories that depend on them.
Theory goes all to hell when we go to the store and place an order for 10 trillion eggs. You can’t generalize from a single instance – the laws of large numbers don’t lend themselves to analyzing unique, unprecedented transactions.
What happens when the state of the economy goes into disequilibrium, when the economy becomes unstable?
In software design and testing, a lot of effort goes into what is called “boundary testing”. Suppose the allowable input values of some life-critical NASA control program fall between zero and one. One tests to see what the program does for expected values, like 0.1, 0.3, 0.7 and 0.9. Now, what happens when our test value is exactly 0.0? Or when it’s exactly 1.0? The program is supposed to reject those and fall back on some default action instead.
But how does the program behave when the value is 0.00001? Or 0.99999? Each is within the allowable range. Free-market economics should have spent more time looking at these boundaries. These are the regions of operation where the algorithm often goes out of control, like our blue giant star that has just run out of hydrogen and helium.
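The idea can be made concrete in a few lines of code. This is a minimal sketch, not any actual NASA routine: `accept_reading` and its default value are hypothetical, invented here just to show how a boundary test suite probes exact and near-boundary values.

```python
def accept_reading(x, default=0.5):
    """Return x if it lies strictly inside (0.0, 1.0), else a safe default."""
    if 0.0 < x < 1.0:
        return x
    return default  # out of range: fall back on the default action

# Expected interior values pass through unchanged.
for v in (0.1, 0.3, 0.7, 0.9):
    assert accept_reading(v) == v

# Exact boundaries are rejected and replaced by the default.
assert accept_reading(0.0) == 0.5
assert accept_reading(1.0) == 0.5

# Near-boundary values are technically in range -- these are the cases
# where the program's behavior deserves the closest scrutiny.
assert accept_reading(0.00001) == 0.00001
assert accept_reading(0.99999) == 0.99999
```

The exact-boundary cases are easy to get right; it is the 0.00001 and 0.99999 cases, which the validator happily accepts, where downstream arithmetic can quietly go out of control.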
When “some clown” orders the 10 trillion eggs, you have to throw out the rule book and send out the hired guns to find out exactly what’s going on. You can build algorithmic models into computer programs to detect such transactions (or trends), and even to sound the alarm bells, but your mathematical model will never predict the outcome of an unstable event.
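A toy version of those algorithmic alarm bells might look like the following sketch. The function name, the z-score approach, and the threshold are all assumptions for illustration; the point is that such a model can flag the 10-trillion-egg order as wildly outside historical experience, while saying nothing about what happens next.

```python
from statistics import mean, stdev

def is_anomalous(order, history, z_threshold=4.0):
    """Sound the alarm if an order is far outside historical experience."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return order != mu
    # Flag orders more than z_threshold standard deviations from the mean.
    return abs(order - mu) / sigma > z_threshold

# Hypothetical history of typical egg orders, in dozens.
history = [12, 10, 14, 12, 11, 13, 12, 10]

assert not is_anomalous(12, history)                  # routine order
assert is_anomalous(10_000_000_000_000, history)      # the 10-trillion-egg order
```

The detector rings the bell; it does not, and cannot, tell you how the unstable event plays out.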
Lehman, Bear Stearns, Citibank, Wachovia, Goldman Sachs, AIG et al. were sitting on a powder keg of derivatives and “toxic debt”. Everybody had a pretty good idea this was the case; this development didn’t happen overnight. But it didn’t fit a historical pattern. If there was going to be a problem, we thought, just control inflation, and the problem will fall into line.
And, here you go: The December 2008 Scientific American reports in “After the Crash” that “overreliance on financial software crafted by physics and math Ph.D.s helped to precipitate the Wall Street Collapse.”
“Wall Street’s version [of an airplane crash] stems from the SEC’s decision to allow overreliance on risk software in the middle of a historic housing bubble. The heady environment permitted traders to enter overoptimistic assumptions and faulty data into their models, jiggering the software to avoid setting off alarm bells.”
There’s a moral hazard in the whole debt underwriting process.
– Part VI of a continuing series –