A Complex System is any structure, process, organization, or organism characterized by a large number of interconnected components whose micro-interactions lead to macro-behavior that is nonlinear and/or unpredictable. Examples of complex systems include the physical (earthquakes, avalanches, forest fires), the biological (ant colonies, slime molds, humans), the social (cities, economies, transportation networks), and the informational (compute grids, web services, the internet). The ITL Complex Systems Program in general seeks to understand the fundamental science of these systems and develop rigorous descriptions (analytic, statistical, or semantic) that enable prediction and control of their behavior. The specific programmatic motivation was that there was "no science today (2006) that offers the fundamental knowledge necessary to design large complex networks so that behaviors can be predicted prior to building them" (Network Science, 2006 NRC Report).
The particular complex-system application reported on here focuses on a system dealt with on a daily basis, namely, the internet. In particular, the goal of the study was 3-fold:
1. to quantify (and understand) internet congestion by determining
Highly efficient fractional factorial experiment designs were constructed, and a battery of custom data-analysis graphics was assembled and applied.
Complex systems are generically characterized by a large number of factors and a large number of potential responses, hence statistical questions immediately present themselves, such as:
1. What should be measured?
Problem: Component 1 in the framework is to enumerate the specific questions that the study will address. Some of the specific questions (and experiments) of interest addressed in the project are:
Q1. What are the important factors that influence internet congestion?
An added bonus of the statistical framework for this project was that it also provided an answer to the validation question:
Experiment Design: Component 2 (experiment design) in the 5-step framework played a particularly important role:
1. It allowed for a common vocabulary.
2. It provided a simplifying 2-element structure:
k = # factors = ?, and
n = affordable number of internet experiments = ?
3. It forced the specificity of purpose required to translate any of the large number of possible internet questions into the specific, concrete question that the "next experiment" would address.
4. It took advantage of the iterative statistical process of starting with an amorphous problem with an "endless" number of (mostly continuous) factors and converging to a finite, workable, scientifically prioritized subset of factors with well-defined discrete settings.
5. It opened up a new way of thinking about this IT problem and put a new, powerful tool at the disposal of the scientist, namely the orthogonal fractional factorial experiment design, by which the IT scientist may efficiently, effectively, and systematically probe an information system as complex as the internet.
The experiments were carried out by internet simulation programs run on a battery of computers. A single simulation run required considerable "wall clock" time, so the project testing became feasible only through the combination of the following 2 tools (one statistical and one computational):
1. the fractional factorial experiment designs, which reduced the decades' worth of running into a year; and
2. the use of 48 processors, which reduced the year's worth of running to about a week.
Both components were critical.
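The first of these tools can be sketched concretely. In a 2-level orthogonal fractional factorial design, extra factors are aliased to interaction columns of a small full-factorial base. The illustrative Python below builds 20 factors into 32 runs (the scale used in this study); the generators are assumptions for illustration, not the study's actual design.

```python
from itertools import combinations, product
from math import prod

# Sketch of a 2-level orthogonal fractional factorial (OFF) design:
# 20 factors in 2^5 = 32 runs, built by aliasing 15 extra factors to
# interaction columns of a 5-factor full factorial.  The generators
# below are illustrative assumptions, not the study's actual design.
BASE = 5
runs = list(product([-1, 1], repeat=BASE))            # 32 base runs

subsets = ([(i,) for i in range(BASE)]                #  5 main columns
           + list(combinations(range(BASE), 2))       # 10 two-way columns
           + list(combinations(range(BASE), 3))[:5])  #  5 three-way columns

design = [tuple(prod(row[i] for i in s) for s in subsets) for row in runs]

# Orthogonality check: every column is balanced (sums to 0) and every
# pair of distinct columns has zero inner product in +/-1 coding.
cols = list(zip(*design))
assert all(sum(c) == 0 for c in cols)
assert all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
           for i in range(len(cols)) for j in range(i + 1, len(cols)))
print(len(design), "runs,", len(cols), "factors")     # 32 runs, 20 factors
```

Because every pair of columns is orthogonal, each factor's main effect can be estimated independently from the same 32 runs, which is what makes such designs so much cheaper than exhaustive testing.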
Data Analysis: Furthermore, the necessity of dealing with a large number (15 to 50) of responses--each one sensitive to a different aspect of internet behavior--led to the design and application of a variety of custom graphical data analysis techniques to extract from the multi-factor/multi-response data the maximum amount of underlying structure and insight into primary factors and interactions alike (see figures 2 and 3).
In addition, the method outlined in this paper was repeated in each of our experiments, where we exercised each congestion-control mechanism under 32 conditions spread throughout the parameter search space in an orthogonal and balanced fashion. (We have proposed a case study based on this method for presentation at the 2010 Winter Simulation Conference.) NIST-906588 "Comparison of Two Dimension-Reduction Methods for Network Simulation Models" (Kevin L. Mills and James J. Filliben) discusses two different approaches we used to reduce the response space in our experiments. In that paper we reduced a 22-dimensional response space to 4 dimensions (using one method) and 7 (using a second method).
The goal of response-space reduction is to identify the most important aspects of our model's behavior, which allows us to represent those aspects in a complete and balanced way within our experiments. The main power of the methods we used arises from 2-level, orthogonal fractional factorial (OFF) experiment designs (NIST-904961), which allow us to vary all experiment parameters simultaneously while probing the search space widely in a balanced fashion. This leads to broader insights at lower computational cost than the one-factor-at-a-time (1-FAT) experiment designs typically adopted by network simulation practitioners. In addition, the broader insights provided by OFF experiment designs can reveal areas that might be profitable to study with 1-FAT designs.
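One simple way to shrink a many-dimensional response space is to group highly correlated responses and keep a single representative per group. The paper compares two specific reduction methods not reproduced here; the greedy correlation grouping below is only an illustration with made-up toy data.

```python
# Illustrative response-space reduction: keep one representative per
# cluster of highly correlated responses.  (An assumption for teaching
# purposes, not the NIST paper's actual algorithm.)
from math import sqrt

def corr(x, y):
    """Pearson correlation of two equal-length response columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sqrt(sum((v - mx) ** 2 for v in x))
    sy = sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def reduce_responses(columns, threshold=0.95):
    """Greedy grouping: a response becomes a new representative only if
    it does not correlate above the threshold with any existing one."""
    reps = []
    for name, col in columns.items():
        if not any(abs(corr(col, columns[r])) >= threshold for r in reps):
            reps.append(name)
    return reps

# Toy data: 4 responses over 6 runs; r2 tracks r1, and r4 tracks r3.
data = {
    "r1": [1, 2, 3, 4, 5, 6],
    "r2": [2, 4, 6, 8, 10, 12],    # perfectly correlated with r1
    "r3": [5, 1, 4, 2, 6, 3],
    "r4": [10, 2, 8, 4, 12, 6],    # perfectly correlated with r3
}
print(reduce_responses(data))      # ['r1', 'r3']
```

The same idea scales to the study's setting: responses that move together across the 32 design points carry largely redundant information, so analysis can focus on a handful of representatives.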
While the design and analysis methods described here were used to study specific internet congestion-control mechanisms, the methods are in fact quite general and can be applied to many experiments that require simulating a complex search space otherwise deemed computationally infeasible. In fact, we are currently (12/10) using similar methods to investigate resource allocation algorithms that might be used in Cloud Computing.
This project has resulted in a significant amount of insight about the relative merits of various internet congestion-control algorithms, which traffic factors affect algorithmic performance, and which interactions between factors affect relative performance. The IT results and the statistical methodology for modeling and analyzing global behavior have generated considerable interest in the IT measurement science community.
We have produced three technical papers:
"A Study of Proposed Congestion-Control Mechanisms" (Kevin L. Mills and James J. Filliben) describes in detail (500 pages) the experiment design, the simulation, and the data analysis for this project. It appeared as NIST Special Publication SP 500-282.
NIST-904682 "How to Model a TCP/IP Network using only 20 Parameters" (Kevin L. Mills and James J. Filliben) describes the reduced-parameter simulation model that we constructed for our study. Using this reduced-parameter model enabled us to reduce the computational requirements for our experiments. (A slightly shorter version of this paper will be presented at the 2010 Winter Simulation Conference.)
NIST-904961 "An Efficient Sensitivity Analysis Method for Network Simulation Models" (Kevin L. Mills and James J. Filliben) describes our method to identify those parameters/factors (from among the 20 in our model) that produced the most significant changes in model responses. We could base further experiments in our study on this subset of parameters, which would further reduce the computational requirements needed to conduct our simulations.
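The core idea behind such a sensitivity analysis can be shown with a toy example (assumed data, not the study's): in a 2-level orthogonal design, a factor's main effect is the difference between the mean response at its high (+1) and low (-1) settings, and ranking the absolute effects flags the dominant parameters.

```python
# Toy main-effects sensitivity analysis on a 2-level design.
# (Illustration only -- the NIST paper's method details may differ.)
from itertools import product

def main_effects(design, y):
    """design: list of runs in +/-1 coding; y: one response per run.
    Returns mean(y | factor high) - mean(y | factor low) per factor."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [r for run, r in zip(design, y) if run[j] == 1]
        lo = [r for run, r in zip(design, y) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Toy 2^3 full factorial; the response is driven mostly by factor 0.
design = list(product([-1, 1], repeat=3))
y = [10 * a + 2 * b + 0.1 * c for a, b, c in design]

effs = main_effects(design, y)
ranked = sorted(range(3), key=lambda j: -abs(effs[j]))
print(ranked)   # [0, 1, 2] -- factor 0 dominates
```

Screening out factors with negligible effects is what allows follow-up experiments to concentrate runs on the handful of parameters that matter.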
End Date: October 1, 2010
Lead Organizational Unit: ITL
Kevin Mills, Advanced Network Tech. Div., ITL
Sandy Ressler, Information Tech. Lab. Office, ITL
Chris Dabrowski, Advanced Network Tech. Div., ITL
Dan Genin, Advanced Network Tech. Div., ITL
James Filliben, Statistical Eng. Division, ITL
Kevin L. Mills