Abstract
This article develops an unconventional theory of infrastructure criticality based on decades-old ideas from a variety of disciplines. First, the concept of self-organized criticality (SOC) is explained using three simple simulations proposed by Per Bak, Mark Newman, and Amaral and Meyer. Each simulation illustrates an aspect of SOC: self-organization, randomness as an underlying engine of disaster, and the role of interdependency or connectivity in complex systems. Next, the discussion shifts to an explanation of a general property shared by many major disasters: the fractal power law. Power laws turn out to be appropriate proxies for the insurance industry measure of likelihood called exceedence probability (probability of consequence equal to or greater than some size). The power law exceedence probability curve is associated with nearly all sectors prone to catastrophe. This is no coincidence, but more intriguing is the realization that power law exceedence probability curves can be produced from purely underlying randomness. This supports the author’s conjecture that catastrophic incidents (often) occur because of randomness – not strictly cause-and-effect.
Suggested Citation
Lewis, Ted G. “Cause-and-Effect or Fooled by Randomness?” Homeland Security Affairs 6, Article 6 (January 2010). https://www.hsaj.org/articles/93
INTRODUCTION
What is the nature of catastrophes? Can major incidents such as the terrorist attacks of September 11, 2001, Hurricane Katrina, the 2003 blackout of the Eastern power grid, and the financial meltdown of 2008 be explained by cause-and-effect, or are they simply random events in world history? Scientists always look for cause-and-effect, action and reaction, logical explanations of the real world, but what if catastrophes are a product of randomness? Scientists have failed to accurately predict the consequences of earthquakes, hurricanes, and terrorist attacks even though a considerable amount of effort has been spent on methods of “prediction.” Can we explain catastrophic events as the product of some motivating incident or series of incidents, or are we simply fooled by randomness? 1
One achievement of western reductionist thought and indeed the scientific method itself is the implied ability to explain nearly everything that happens in the natural world as cause-and-effect; every cause has an effect, and effects can be traced back to their causes. 2 If we understand the cause of earthquakes, floods, fires, and terrorist attacks, we can do something about them. At least this is the theory. But in practice, discovering the cause of catastrophe is mostly an exercise in hindsight. After the terrorist attacks of 9/11 it was obvious to many in the Intelligence community that an attack was imminent. After Hurricane Katrina it was obvious that the infrastructure of New Orleans was long overdue for strengthening. After the 2003 blackout, the cause was easily identified and rectified. Understanding cause-and-effect is the first step toward prevention, hence the urgency of understanding why something disastrous happens.
But there is another plausible explanation based on complex adaptive systems theory. This theory lies halfway between rational cause-and-effect logic and the unpredictability of “acts of God.” Essentially, it says that inevitable catastrophe is embedded within many complex systems themselves. These so-called critical systems contain the seeds of their own destruction. Moreover, critical systems move toward the precipice of catastrophe rather than away from it, by their very nature. They are subject to evolutionary forces that shape them, and if these forces are not controlled, a critical system evolves from a “normal state” to a “critical state.” Because criticality is a property of the system and its evolution, rather than some external force, these systems reach a state of self-organized criticality (SOC) under their own power. SOC systems are perched on the edge of chaos, near a tipping point between normal operation and disaster.
According to the SOC theory, political systems that lead to terrorist attacks, financial systems that lead to resounding stock market crashes, electrical power grids that experience 100-year magnitude failures every decade, and hurricanes that wipe out entire cities are the result of a form of emergence called self-organized criticality. A small (random) perturbation in these systems can trip a major collapse, unexpectedly, dramatically, and resoundingly. Because the cause is not obvious (until after the fact), and it is often a very minor perturbation, the collapse comes as a shock. Is it possible that an unfortunate event is psychologically surprising only because of its magnitude, and not because it is unexpected?
What is the nature of SOC systems, how do infrastructure systems get that way, and what can be done to prevent the impending catastrophe? This article develops an unconventional theory of infrastructure criticality based on decades-old ideas from a variety of disciplines. First, the concept of SOC is explained using three simple simulations proposed by Per Bak, Chao Tang, and Kurt Wiesenfeld, 3 Mark Newman, and Amaral and Meyer. 4 Each simulation illustrates an aspect of SOC: self-organization, randomness as an underlying engine of disaster, and the role of interdependency or connectivity in complex systems.
Next, the discussion shifts to an explanation of a general property shared by many major disasters: the fractal power law. 5 Power laws turn out to be appropriate proxies for the insurance industry measure of likelihood called exceedence probability, and the power law exceedence probability curve shows up in nearly every sector prone to catastrophe. More intriguing is the realization that such curves can be produced from purely underlying randomness, which supports the author’s conjecture that catastrophic incidents often occur because of randomness rather than strict cause-and-effect. This conjecture may seem at odds with the National Infrastructure Protection Plan (NIPP), 6 but makes sense if we believe in the NIPP’s risk-informed decision-making policy.
Sandpiles, Sticks, and Nature
Per Bak’s simple and elegant illustration of a self-organized system had little to do with homeland security, critical infrastructure, or risk-informed decision making. He was simply trying to understand the alarming discontinuity that occurs in many complex systems that suddenly collapse for no apparent reason. Imagine a sand pile built from grains of sand slowly dropping onto a flat surface. Over time a cone-shaped pyramid forms. As more and more grains of sand fall on the pile, the cone grows in height and breadth. Suddenly a portion of the cone breaks away, causing a landslide or avalanche. Per Bak asked whether it was possible to predict the size of a landslide and compute exactly when it would break away from the cone. As it turns out, the timing and size of individual landslides cannot be determined with any precision. Instead, Per Bak observed many different-sized landslides and plotted them on an exceedence probability curve. Interestingly, the sand pile exceedence probability curve is a fat-tailed power law. This curve has subsequently become the center of attention for scientists from a variety of disciplines, because it shows up over and over again.
Exceedence probability curves like the one in Figure 1 simply plot the likelihood of an event, such as a landslide, occurring of size greater than or equal to x. 7 They differ from frequency or histogram distributions because of the “greater than or equal to” part of the definition of exceedence. Exceedence probability curves are used by the insurance industry to compute maximum expected loss due to a calamity, which is a form of risk. Multiplying exceedence probability times consequence yields probable maximum loss or PML risk, which is used as the basis for calculating insurance premiums. For example, the point in Figure 1 where the exceedence probability is 20% appears along the x-axis at x = 3; that is, the probability of a landslide of size 3, 4, 5, … or 10 is 20%. The PML risk at this point is 0.20 x 3 = 0.6.

Figure 1. Exceedence probability is a power law: EP(x) = x^-q, where q is an exponent defining the rate of decline of the curve. The probability of an incident with consequence equal to or greater than x falls dramatically as the consequence of the incident increases. A power law is “fat-” or “long-tailed” because it declines more slowly than an exponential function.
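To make the arithmetic concrete, here is a minimal Python sketch (the article itself supplies no code; the function names are invented for illustration) that evaluates the power law EP(x) = x^-q and the corresponding PML risk, x times EP(x), reproducing the worked example above:

```python
import math

def exceedence_probability(x: float, q: float) -> float:
    """Power law exceedence probability: EP(x) = x**(-q), for x >= 1."""
    return x ** (-q)

def pml_risk(x: float, q: float) -> float:
    """Probable maximum loss (PML) risk: consequence times exceedence probability."""
    return x * exceedence_probability(x, q)

# Choose q so that EP(3) = 0.20, matching the worked example in the text:
# 3**(-q) = 0.20  =>  q = ln(5) / ln(3), approximately 1.465
q = math.log(5) / math.log(3)
print(f"EP(3)  = {exceedence_probability(3, q):.2f}")  # 0.20
print(f"PML(3) = {pml_risk(3, q):.2f}")                # 0.20 * 3 = 0.60
```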
Repeating Per Bak’s sand pile experiment many times, and measuring the sizes of landslides, we find that small landslides occur much more often than large ones. Extremely large landslides are extremely rare, but not impossible. Small incidents are much more common, but their consequences are much less. If the size and timing of each landslide were truly random, the exceedence probability curve would be S-shaped rather than shaped like a power law. The fact that an exceedence probability curve obeys a power law suggests a deeper meaning. The meaning of exceedence probability is probed further in this paper. Interestingly, real earthquakes obey a power law with exponent q = 0.41, as seen in Table I at the end of this article. [The larger the exponent, the more abruptly the curve declines, as shown in Figure 1.]
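The sand pile dynamics behind these observations are easy to reproduce. The following sketch is a simplified stand-in for the BTW model, not Per Bak’s original code: it drops grains on a small grid, topples any cell holding four or more grains (shedding one grain to each neighbor), records avalanche sizes, and tallies the empirical exceedence probability. Plotted on log-log axes, the tail of the resulting curve approximates a straight line, i.e. a power law.

```python
import random
from collections import Counter

N = 20            # the sandpile lives on an N x N grid
GRAINS = 50_000   # grains dropped, one at a time
grid = [[0] * N for _ in range(N)]
avalanches = []   # size (number of topplings) of each avalanche

for _ in range(GRAINS):
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1                      # drop one grain at a random site
    size = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue                     # stable cell; nothing to topple
        grid[i][j] -= 4                  # topple: shed one grain to each neighbor
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:  # grains falling off the edge are lost
                grid[ni][nj] += 1
                unstable.append((ni, nj))
    if size:
        avalanches.append(size)

# Empirical exceedence probability: fraction of avalanches of size >= s.
counts = Counter(avalanches)
for s in (1, 10, 100, 1000):
    ep = sum(v for k, v in counts.items() if k >= s) / len(avalanches)
    print(f"EP(size >= {s}) = {ep:.4f}")
```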
Per Bak’s playful sand pile demonstration became known as the BTW experiment – from the initials of the three authors of the 1987 publication describing it. 8 Its profound impact on a variety of disciplines is why so many writers from across many fields of study continue to reference and use it as the canonical illustration of SOC. Mark Buchanan may have been the first popular writer to note the generality of SOC, power laws, and catastrophes, but many others have adopted it as their own. 9 Malcolm Gladwell’s popular “tipping point” book introduced the BTW experiment to a wider audience, 10 and more recently, Joshua Cooper Ramo’s “concept of world disorder” equates SOC with the unthinkable. 11 SOC, power laws, and randomness seem to be a common property of both natural and human-made catastrophes.
At first glance the BTW experiment seems too specialized to apply broadly to homeland security and infrastructure protection. However, Buchanan’s treatise on catastrophe provides additional evidence of the generality of SOC. Consider Buchanan’s description of an experiment proposed by Mark Newman that illustrates the role of randomness in SOC. Newman’s experiment (see Figure 2) is strikingly simple and yet realistic. Consider a collection of sticks varying in length from 0 to 100% (choose your own units of measurement – it doesn’t matter). Repeatedly produce a random threshold number T, between zero and 100%, representing the minimum length a stick must have to survive. Replace sticks of length less than T with new sticks, also of random length between 0 and 100%. Repeat this experiment, forever, and observe what happens.

Figure 2. Newman’s Sticks: Sticks shorter than randomly selected threshold value T are replaced with sticks of randomly selected length, see six rows of sticks in top panel. Consequence is defined as the number of replaced sticks after each incident. The upper plot is experimentally obtained exceedence probability and the lower plot is experimentally obtained “number of survivors versus time”. Download and run the Catastrophes simulation [.jar] at http://www.chds.us/?media/openmedia&alt&id=2260. 12
In this experiment, consequence is equal to the number of replaced sticks on each round of replacements. The exceedence probability curve is obtained by placing the fraction of replaced sticks into bins of size 1%, 2%, 3%, …, 100% consequence, and normalizing the fractions so they add up to 100%. Exceedence probability EP(Consequence ≥ x) is the sum of the subtotals in bins 100%, 99%, 98%, …, x%. That is, exceedence probability is the probability that x or more percent of the sticks are replaced after each round.
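A direct transcription of the sticks experiment might look like the sketch below (the stick count and round count are my own choices; the article’s actual implementation is the Catastrophes simulator cited in the figure). It records the fraction of sticks replaced each round and reports the empirical exceedence probability at a few thresholds:

```python
import random

STICKS = 1_000
ROUNDS = 10_000
sticks = [random.random() for _ in range(STICKS)]  # lengths in (0, 1), i.e. 0-100%
consequences = []                                  # fraction replaced each round

for _ in range(ROUNDS):
    T = random.random()                  # random survival threshold
    replaced = 0
    for i in range(STICKS):
        if sticks[i] < T:                # stick too short to survive...
            sticks[i] = random.random()  # ...replace it with a new random stick
            replaced += 1
    consequences.append(replaced / STICKS)

# Exceedence probability: fraction of rounds in which at least x% were replaced.
for pct in (1, 10, 50, 90):
    ep = sum(c >= pct / 100 for c in consequences) / ROUNDS
    print(f"EP(consequence >= {pct}%) = {ep:.4f}")
```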
The exceedence probability of this experiment is a power law like the exceedence probability curve of the BTW experiment. And they are both shaped like the curve in Figure 1. These two seemingly different examples produce power laws – with possibly different exponents. Power laws are fractal or self-similar (they look the same at all scales – magnifying a portion of Figure 1 produces a curve just like Figure 1). Regardless of the scale used to measure consequence, the resulting curve has the same power law shape. Because of the fractal or self-similar property of power laws, small incidents are just miniature versions of large incidents.
Scientists from a number of fields of study have observed hazards, recorded their exceedence probability curves, and found they are re-scaled power law fractals (see Table I). Hence, power law incidents are also called scale-free. Whether the incident is an earthquake, hurricane, terrorist attack, airline accident, or power grid failure, it obeys a power law. Thus, fractal, self-similar, and scale-free are simply different terms for the same power law property. Self-similarity is the important concept, because it relates small and large consequences to an underlying randomness.
The rate of decline of the exceedence probability curve differs for different classes of catastrophe, as shown in Table I, but they are all self-similar fractals. This intriguing result suggests a cause-and-effect, but in fact the author shows that power laws are produced by an underlying randomness, independent of any cause-and-effect. If an underlying cause-and-effect existed, the exceedence probability would not be a power law. For example, both the BTW and the Newman stick simulations have an underlying randomness that produces a power law. But fatal automobile accidents in the USA, which have identifiable causes such as impaired driving, do not. 13
Normal Accidents
Charles Perrow’s 1984 book, Normal Accidents, pre-dates the BTW experiment. 14 Even so, Perrow suggested that accidents such as the Three Mile Island nuclear power plant disaster were caused by many small fractures or failures building up into bigger failures. He recognized that disaster is the end-result of interactions internal to complex, highly connected systems. His near-encyclopedic treatment of accidents always led to the same conclusion: accidents are normal (as in to-be-expected) because of system complexity and connectivity. In reference to the Three Mile Island incident, Perrow says, “The cause of the accident is to be found in the complexity of the system…. It is the interaction of the multiple failures that explains the accident.” 15
Small incidents spread and magnify into larger incidents as in the SOC model, but Perrow added an element: fractures in complex systems propagate via various forms of connection or links among the parts of the system. In other words, complex systems are networks. Their interacting parts are network nodes and their interactions travel via network links connecting them. This idea is dramatically illustrated by the simple “food network” experiment proposed by Amaral and Meyer, and described by Buchanan. Figure 3 is taken from the author’s simulation of the Amaral-Meyer model of network collapse. 16

Figure 3. Amaral-Meyer Network: Fractures or “extinctions” percolate up from random extinctions of nodes at the lowest level of the six-level network. Nodes go extinct whenever all links below them are removed. Consequence is defined as the number of extinctions following each random extinction occurring at the lowest level. Upper plot is the exceedence probability curve, and lower plot shows number of survivors versus time. Download and run the Catastrophes simulation [.jar] at http://www.chds.us/?media/openmedia&alt&id=2260. 17
Consider a 6-tiered ecological or “food network” consisting of niches, represented by nodes in Figure 3, and links, represented by lines connecting pairs of nodes. Nodes are colored black if they are occupied by a surviving species, and colored white if unoccupied. Amaral and Meyer imagined a world in which species at each level occasionally and randomly mutate and fill an empty node or slot above, below, or on either side of themselves (at the same level). Mutations occur with small probability and tend to increase the population of occupied nodes, as long as the new occupants can link to at least one occupied node immediately below them. Linking establishes a food chain, supporting nodes above, but not below or at the same level. If a node is unable to establish a link, or if the link is broken because the lower-level node becomes extinct, the upper-level node also becomes extinct.
A curious thing happens when a lower-level node randomly goes extinct and all of its links are removed. This small “accident” propagates to all of the nodes connected to the removed node. If all links are removed from a node in a level above the extinction, it too becomes extinct. The accident is propagated up to the next level by repeatedly removing links to higher-level nodes, etc. The Amaral-Meyer network simulates Perrow’s normal accidents.
The supply of occupied nodes is replenished by mutations and diminished by extinctions, so what happens in the long run? The number of occupied nodes steadily increases as the population fills out and expands across levels. Then growth levels off and stays level for a long period of time (dictated by the rate of extinctions and the rate of mutations). Suddenly and unpredictably, the network collapses. Like the BTW experiment, the timing of this collapse is unpredictable, but its exceedence probability obeys a fractal power law. This suggests randomness, rather than individual extinctions, as the underlying property of network collapse. 18
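A drastically simplified caricature of the Amaral-Meyer dynamics can be sketched as follows (my own construction, with arbitrarily chosen parameter values; the published model and the article’s simulator differ in details such as how links form and how mutations are scheduled). Species colonize adjacent niches by mutation, random extinctions strike the bottom level, and unsupported species above go extinct in cascades whose sizes are tallied into an exceedence probability:

```python
import random
from collections import Counter

LEVELS, WIDTH = 6, 20           # six-level food network, 20 niches per level
P_MUTATE, P_EXTINCT = 0.02, 0.5 # arbitrary rates chosen for illustration
STEPS = 50_000

occupied = [[False] * WIDTH for _ in range(LEVELS)]
occupied[0][WIDTH // 2] = True  # seed one species at the bottom
consequences = []

def supported(l, j):
    """Above the bottom level, a niche needs an occupied neighbor one level down."""
    if l == 0:
        return True
    return any(occupied[l - 1][k] for k in (j - 1, j, j + 1) if 0 <= k < WIDTH)

def cascade():
    """Repeatedly remove unsupported species; return the number of extinctions."""
    dead, changed = 0, True
    while changed:
        changed = False
        for l in range(1, LEVELS):
            for j in range(WIDTH):
                if occupied[l][j] and not supported(l, j):
                    occupied[l][j] = False
                    dead += 1
                    changed = True
    return dead

for _ in range(STEPS):
    # Mutation: each species may colonize one random adjacent empty niche.
    for l in range(LEVELS):
        for j in range(WIDTH):
            if occupied[l][j] and random.random() < P_MUTATE:
                dl, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                nl, nj = l + dl, j + dj
                if (0 <= nl < LEVELS and 0 <= nj < WIDTH
                        and not occupied[nl][nj] and supported(nl, nj)):
                    occupied[nl][nj] = True
    # Random extinction at the lowest level, then let the damage percolate up.
    if random.random() < P_EXTINCT:
        j = random.randrange(WIDTH)
        if occupied[0][j]:
            occupied[0][j] = False
            consequences.append(1 + cascade())  # trigger plus knock-on extinctions
    if not any(any(row) for row in occupied):
        occupied[0][WIDTH // 2] = True          # re-seed after a total collapse

counts = Counter(consequences)
for s in (1, 5, 20, 50):
    ep = sum(v for k, v in counts.items() if k >= s) / max(len(consequences), 1)
    print(f"EP(extinctions >= {s}) = {ep:.4f}")
```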
Meltdowns
The Amaral-Meyer network may model many natural and human-made complex systems that unexpectedly fail after a long period of stability. Typically, these systems crash following a small, unassuming incident that upsets the system’s equilibrium rendering it unstable. The timing and size of consequence cannot be predicted ahead of time. But, eventually the Amaral-Meyer network self-organizes into a state of criticality and collapses. It is a beautiful illustration of SOC.
Figure 4 casts the 2008 financial meltdown as an Amaral-Meyer network. 19 This complex system had at least four levels of financial institutions, all dependent on lower-level “feeder nodes.” Prior to the financial collapse, feeder nodes made loans and sold them to the nodes directly above them, which in turn packaged them into mortgage-backed securities and sold those to upper-level nodes. Eventually, the packaged securities and packaged credit default swap derivatives were sold to non-USA investors at the top level of this food network. As the number of links increased over time, the financial network evolved toward self-organized criticality. In this case, criticality emerged because the number of network connections around “overly connected” institutions meant each such institution was directly and indirectly connected to virtually all others. Like the BTW sand pile, the pile of financial connections eventually collapsed.

Figure 4. 2008 Financial Meltdown Network: A few of the nodes of a financial food network representing the financial system as of late 2008. Links are representative, only. The financial network is strikingly similar to a four-level Amaral-Meyer food network.
Small numbers of extinctions have little overall effect so long as they do not propagate along connections and spread to most other institutions. But this “no propagation” assumption turned out to be false, because the network became self-organized. As the network evolved, links between levels increased along with the number of nodes, causing the network to edge ever closer to its tipping point. The closer the network came to its maximum capacity (all nodes connected to all others below them), the closer it came to its critical point. The built-up network eventually collapsed; not because a large financial institution like Lehman Brothers was too big to fail, but because it was too heavily connected. (One can argue that Lehman Brothers Inc. was both big and connected, but its connectivity is what made it critical.)
Fooled By Randomness
The foregoing simulations and the real world suggest that catastrophe is a combination of self-organized criticality, randomness, and self-similar system architecture. Newman’s Stick experiment illustrates the impact of a random external incident on failure of a simple system: extinctions may be caused by random externalities. The Amaral-Meyer experiment illustrates the impact of criticality in a connected system, whereby catastrophic failure is intrinsic to the system itself. Such systems can fail without any outside influence.
These simple simulations suggest randomness as an alternative explanation for catastrophes. They support Perrow’s normal accident theory, but moreover, they can be explained as purely random phenomena. The author performed an even simpler simulation based on purely random number generation, and obtained results identical to the BTW experiment, Sticks, and Amaral-Meyer simulations (see Figure 5). Randomness yields consequences that obey a power law, independent of any hazard. The details are skipped here for brevity.

Figure 5. Random Catastrophes. The upper plot shows the exceedence probability obtained by placing thousands of random products (each consequence formed by multiplying six random numbers together) into bins and tallying them. The lower plot shows the size of these random consequences versus time. Download and run the Catastrophes simulation [.jar] at http://www.chds.us/?media/openmedia&id=2260. 20
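Reading the Figure 5 caption literally, consequence is the product of six random numbers. Such a product is approximately lognormally distributed, which is heavy-tailed and, over a wide range, mimics a power law on a log-log exceedence plot. A minimal sketch of that reading (not the author’s exact code) follows:

```python
import random

TRIALS = 100_000
FACTORS = 6

# Consequence = product of six random numbers in (0, 1), per the Figure 5 caption.
consequences = []
for _ in range(TRIALS):
    c = 1.0
    for _ in range(FACTORS):
        c *= random.random()
    consequences.append(c)

# Empirical exceedence probability at a few thresholds.
for x in (1e-4, 1e-3, 1e-2, 1e-1):
    ep = sum(c >= x for c in consequences) / TRIALS
    print(f"EP(consequence >= {x:g}) = {ep:.4f}")
```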
Policy Implications
The random SOC theory described here provides an alternate explanation for financial system meltdowns, earthquakes, power grid blackouts, and epidemics that sometimes flare up instead of dying off. We know that many of our critical infrastructure sectors have reached their self-organized criticality. 21 The evidence is everywhere: overly-connected hubs in the public switched telecommunications network, near-capacity tie lines in the power grid, congestion on highways, lack of surge capacity in hospitals, and viruses worming their way through the Internet. Fortunately, an understanding of SOC suggests new strategies.
Several mechanisms can be used to reverse self-organized criticality. Of course, the problem can be solved at the engineering level: adding surge capacity, operating systems below their capacity, and restructuring networks to back them away from SOC. Each of these solutions has corresponding costs, however, and is the subject of another paper. A more global solution is to change regulatory policy, affecting infrastructures across the entire nation. Re-design of regulation is a better approach, because it spreads the economic burden across the entire industry. An overhaul of regulatory policy can reshape these critical infrastructures, backing them away from SOC. Sub-SOC systems are more resilient, which means they withstand failures with lower consequences.
For example, the electric power grid has evolved into a state of self-organized criticality after decades of operating at near capacity, compounded by incremental patching of its transmission network. Regulatory policies that motivate the utilities to build out more transmission capacity or promote locally-distributed generation (reducing the need for transmission capacity) would reduce the sector’s criticality. A similar criticality exists in the communications sector due to the rise of telecommunications hotels. 22 The existence of telecom hotel hubs is a direct consequence of the 1996 Telecommunications Act that advocates peering among competitors and promotes co-location of switching equipment. This regulation needs to be changed before a normal accident results in a national telecommunications blackout.
Similar self-organized criticalities exist in other infrastructure sectors. Financial systems tend to self-organize into criticality; public health/hospital systems have inadequate surge capacity; the World Wide Web/Internet is notoriously near its critical point with respect to denial of service attacks, worms, and cyber threats. However, these second-tier infrastructures have not been thoroughly studied from this new perspective. This work needs to be done.
Table I enumerates two classes of hazards to infrastructure: low risk and high risk. Hazards with a power law exponent greater than or equal to one are considered low risk, while hazards with a power law exponent less than one are considered high risk. The distinction follows from the algebra of PML risk: since PML risk is consequence times exceedence probability, x(x^-q) = x^(1-q), risk declines (or stays flat) with increasing consequence when q ≥ 1 and rises when q < 1. The high-low distinction is shown in Figure 6 as a risk curve that increases as consequence increases (high risk), versus a curve that decreases after initially increasing (low risk). This classification of hazards has important implications for risk-informed decision making.
To illustrate the application of this theory to an existing infrastructure, consider results for the telecommunications sector, circa 1990. 23 Data collected by Richard Kuhn and analyzed by the author were used to construct the exceedence probability curve, which was then plugged into the PML risk equation to obtain the high-risk curve of Figure 6. Consequence was measured in millions of customer-minutes lost due to all kinds of incidents. As Figure 6 shows, high-risk curves rise more-or-less monotonically as consequence increases. The telecommunications system studied by Kuhn exhibits high-risk self-organized criticality.
The form of these curves has an interesting bearing on strategy. Risk-informed decision making recommends prioritization of investments according to risk: buy down high-risk assets starting with the highest risk. In Table I, risk is highest when consequences are small for low-risk hazards such as airline accidents, floods, terrorism, and large fires in cities. Conversely, for high-risk hazards such as hurricanes, earthquakes, wars, whooping cough, and measles, risk increases with consequence, even though the largest such events rarely occur.

Figure 6. PML Risk versus Consequence for low- and high-risk exceedence probability curves. Exponent q is shown for power law equivalents. The low-risk curve (q = 1.5) is hypothetical, while the high-risk curve (q = 0.85) was obtained by analyzing telecommunications outages reported by Kuhn. 24 Note: PML risk decreases for low-risk hazards as consequence increases, as illustrated by the lower curve.
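The divergence between the two curves follows directly from the algebra noted above: PML(x) = x(x^-q) = x^(1-q). A few lines of Python, using the q values from the caption, make the contrast visible:

```python
def pml(x: float, q: float) -> float:
    """PML risk for a power law hazard: x * EP(x) = x * x**(-q) = x**(1 - q)."""
    return x ** (1 - q)

for x in (1, 2, 5, 10, 50, 100):
    low = pml(x, q=1.5)    # low-risk hazard: q > 1, so risk shrinks as x grows
    high = pml(x, q=0.85)  # high-risk hazard: q < 1, so risk grows with x
    print(f"x = {x:>3}:  low-risk PML = {low:6.3f}   high-risk PML = {high:6.3f}")
```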
Should a prevention strategy be applied to high-risk hazards, because they are rare but their risk grows with consequence? Should a response policy be applied to low-risk, frequent hazards, because their risk is highest for small-consequence incidents? Perhaps an 80-20 percent rule should be applied: invest 80 percent in prevention and 20 percent in response for high-risk hazards; and invest 80 percent in response and 20 percent in prevention for low-risk hazards. This dual-mode strategy avoids the dangers of putting all eggs in one basket.
Ted G. Lewis is a professor of computer science and executive director of the Center for Homeland Defense and Security at the Naval Postgraduate School. He has forty years of experience in academic, industrial, and advisory capacities, ranging from academic appointments at the University of Missouri-Rolla, University of Louisiana, and Oregon State University, to senior vice president of Eastman Kodak Company, to CEO and president of DaimlerChrysler Research and Technology, North America. Dr. Lewis has published over thirty books and 100 research papers. He is the author of Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation (2006) and, most recently, Network Science: Theory and Applications (2009). He received his PhD in computer science from Washington State University. Dr. Lewis may be contacted at tlewis@nps.edu.
Table I. Exceedence Probability Exponents for Low-Risk and High-Risk Incidents 25
| Asset/Sector | Consequence | Exponent |
| --- | --- | --- |
| Low Risk | | |
| S&P500 (1974-1999) | $Volatility | 3.1-2.7 |
| Large Fires in Cities | $Loss | 2.1 |
| Airline Accidents | Deaths | 1.6 |
| Tornadoes | Deaths | 1.4 |
| Terrorism | Deaths | 1.4 |
| Floods | Deaths | 1.35 |
| Forest Fires in China | Land Area | 1.25 |
| East/West Power Grid | Megawatts | 1 |
| Earthquakes | Energy, Area | 1 |
| Asteroids | Energy | 1 |
| Pacific Hurricanes | Energy | 1 |
| High Risk | | |
| Hurricanes | $Loss | 0.98 |
| Public Switched Telephone | Customer-Minutes | 0.91 |
| Forest Fires | Land Area | 0.66 |
| Hurricanes | Deaths | 0.58 |
| Earthquakes | $Loss | 0.41 |
| Earthquakes | Deaths | 0.41 |
| Wars | Deaths | 0.41 |
| Whooping Cough | Deaths | 0.26 |
| Measles | Deaths | 0.26 |
| Small Fires in Cities | $Loss | 0.07 |
- Nassim Nicholas Taleb, Fooled by Randomness (New York: Random House, 2005).
- Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
- Per Bak, Chao Tang, and Kurt Wiesenfeld, "Self-Organized Criticality: An Explanation of 1/f Noise," Physical Review Letters 59 (1987): 381-384.
- Mark Buchanan, Ubiquity: Why Catastrophes Happen (New York: Three Rivers Press, 2000, 2001).
- The shape of a power law curve is dictated by its exponent, q: y = x^-q.
- U.S. Department of Homeland Security, National Infrastructure Protection Plan (2009), www.DHS.gov.
- Patricia Grossi and Howard Kunreuther, Catastrophe Modeling: A New Approach to Managing Risk (New York: Springer, 2005).
- Bak, Tang, and Wiesenfeld, "Self-Organized Criticality."
- Buchanan, Ubiquity: Why Catastrophes Happen.
- Malcolm Gladwell, The Tipping Point (New York: Little, Brown and Company, 2000, 2001).
- Joshua Cooper Ramo, The Age of the Unthinkable (New York: Little, Brown & Company, 2009).
- Catastrophe.jar contains the Amaral-Meyer simulation as well as the Sticks simulator and the random event simulator described in this paper. To run the Amaral-Meyer simulation, select the 'Amaral-Meyer Network' radio button at the bottom center of the display, then press 'Continuous' in the lower right-hand part of the screen, and watch as nodes mutate, go extinct, links appear/disappear, and the exceedence probability develops.
- Fatalities in automobile accidents are the result of driving while intoxicated and other causes rather than randomness. "For fatal crashes occurring from midnight to 3 a.m., 65% involved alcohol-impaired driving;" http://www-fars.nhtsa.dot.gov/Main/index.aspx.
- Charles Perrow, Normal Accidents (Princeton, NJ: Princeton University Press, 1999).
- Ibid.
- Buchanan, Ubiquity: Why Catastrophes Happen.
- See note 12.
- Not all individual extinctions result in collapse. In fact, collapse is rare.
- David Faber, And Then the Roof Caved In (Hoboken, NJ: John Wiley & Sons, 2009).
- See note 12.
- I. Dobson, B.A. Carreras, V.E. Lynch, and D.E. Newman, "Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization," Chaos 17, 026103 (June 2007); B.D. Malamud and D.L. Turcotte, "The Applicability of Power-law Frequency Statistics to Floods," Journal of Hydrology 322 (2006): 168-180, www.elsevier.com/jhydrol.
- National Security Telecommunications Advisory Committee, NSTAC Task Force on Concentration of Assets: Telecom Hotels (February 12, 2003).
- Richard Kuhn, "Sources of Failure in the Public Switched Telephone Network," IEEE Computer 30, no. 4 (April 1997).
- Ibid.
- Robin Hanson, "Catastrophe, Social Collapse, and Human Extinction," in Global Catastrophic Risks, Martin Rees, Nick Bostrom, and Milan Cirkovic, eds. (Oxford: Oxford University Press, 2008), 363-377; Kuhn, "Sources of Failure in the Public Switched Telephone Network;" Yanhui Liu, Parameswaran Gopikrishnan, Pierre Cizeau, Martin Meyer, Chung-Kang Peng, and H. Eugene Stanley, "Statistical properties of the volatility of price fluctuations," Physical Review E 60, no. 2 (August 1999): 1390-1400; Weiguo Song, Fan Weicheng, Wang Binghong, and Zhou Jianjun, "Self-organized criticality of forest fire in China," Ecological Modelling 145 (2001): 61-68; Jie-Jun Tseng, Ming-Jer Lee, and Sai-Ping Li, "Heavy-tailed distributions in fatal traffic accidents: role of human activities," http://www.arxiv.org/abs/0901.3183v1.
This article was originally published at the URLs https://www.hsaj.org/?article=6.1.6 and https://www.hsaj.org/?fullarticle=6.1.6.
Copyright © 2010 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS). https://www.hsaj.org
I’m one of the (very) few economists who has given an empirical mapping of the network interconnections of the main US banks for their Credit Default Swap (CDS) obligations to one another (Markose et al. 2010). The CDS obligations on mortgage-backed securities stand implicated in the recent financial crisis. Likewise, we have tried to generate a power law distribution for wealth in an artificial stock market along the lines of replacing investors who fail to fulfil the average wealth in the market at each relevant period (time step) of the simulation exercise. The need to perform to a benchmark or retain market share is attributed by us to the so-called Red Queen dynamic of competing agents relevant to an evolutionary framework. This is not well understood in traditional economic models or in econophysics, where competitive co-evolution is missing. While the famous Bak experiment and the Newman model of replacing sticks when random thresholds apply may suggest the whole process is random, the behaviour of those involved in competitive co-evolution may also produce the same power-law outcomes. Often, among interacting agents with uber-intelligence, innovations or surprise outputs emerge in a highly contextual way that is far from random and may drive the process to super-criticality (Markose 2005, 2004). Nevertheless, I fully endorse the need to study network connectivity in economic and financial systems and the policy implications of Prof. Lewis’s paper.
References:
Sheri Markose, Simone Giansante, Mateusz Gatkowski, and Ali Rais Shaghaghi (2010), "Too Interconnected To Fail: Financial Contagion and Systemic Risk in Network Model of CDS and Other Credit Enhancement Obligations of US Banks," Economics Department Working Paper 683, University of Essex.
Markose, S.M. (2005), "Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems (CAS)," Economic Journal 115: F159-F192. ISSN 0013-0133.
Markose, S.M., Edward T., and Serafin M. (2005), "The Red Queen Principle and the Emergence of Efficient Financial Markets: An Agent Based Approach," in Thomas Lux, Stefan Reitz, and Eleni Samanodou (eds.), Nonlinear Dynamics and Heterogeneous Interacting Agents, Lecture Notes in Economics and Mathematical Systems 550 (Berlin, Heidelberg: Springer).
Markose, S.M. (2004), "Novelty in Complex Adaptive Systems (CAS): A Computational Theory of Actor Innovation," Physica A: Statistical Mechanics and Its Applications 344: 41-49.