Overview:

This program conducts four statistical tests for continuous data: 1) autocorrelation, 2) distribution, 3) variance, 4) mean

Autocorrelation:
This test quantifies the "randomness", or independence, of the data samples. Because tests (2) - (4) require sufficiently independent sample sets, the autocorrelation
test serves as a preliminary indicator of whether those tests can validly be applied directly.

Reference: http://itl.nist.gov/div898/handbook/eda/section3/eda35c.htm
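The program's internals are not shown here, but the lag-1 autocorrelation coefficient defined in the NIST reference can be sketched in Python. This is an illustrative stand-in, not KS.exe's actual code; the function name is invented for the example.

```python
# Sketch of a lag-1 autocorrelation check (illustrative; not the KS.exe code).
# Follows the NIST definition: r_1 = sum((x_t - m)(x_{t+1} - m)) / sum((x_t - m)^2).
import numpy as np

def lag1_autocorrelation(x):
    """Return the lag-1 autocorrelation coefficient r_1 of a 1-D sample."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    num = np.sum((x[:-1] - m) * (x[1:] - m))
    den = np.sum((x - m) ** 2)
    return num / den

# A strong trend yields r_1 near +1; an alternating series yields r_1 near -1.
print(lag1_autocorrelation(np.arange(100.0)))
print(lag1_autocorrelation(np.array([1.0, -1.0] * 50)))
```

Values near zero suggest the samples are independent enough for the remaining tests; values near +/-1 suggest they are not.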

Distribution:
This test uses the Kolmogorov-Smirnov algorithm to decide whether or not to accept the null hypothesis (Ho) when comparing two sets of
one-dimensional, continuous data. Ho states that the two sets of data belong to the same population. Accepting Ho implies that you cannot distinguish between
the two datasets, while rejecting Ho implies that you cannot accept that they are the same. If the test accepts Ho, then a difference in variances or means is
less likely to be seen. Likewise, if the test rejects Ho (i.e., accepts the alternative hypothesis, Ha), then a difference in variances or means is more likely
(but not guaranteed) to be seen.

Reference: http://itl.nist.gov/div898/handbook/eda/section3/eda35g.htm
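The two-sample KS decision can be sketched in Python with SciPy's ks_2samp. This is an illustrative stand-in, not the KS.exe implementation; the sample sizes, seed, and alpha below are invented for the example.

```python
# Illustrative two-sample Kolmogorov-Smirnov test (sketch, not KS.exe's code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)   # sample from population A
b = rng.normal(0.0, 1.0, 500)   # another sample from the same population
c = rng.normal(2.0, 1.0, 500)   # sample from a shifted population

stat_same, p_same = stats.ks_2samp(a, b)
stat_diff, p_diff = stats.ks_2samp(a, c)

alpha = 0.05
print("a vs b: reject Ho?", p_same < alpha)   # typically False: same population
print("a vs c: reject Ho?", p_diff < alpha)   # True: clearly shifted population
```

A small p-value rejects Ho (the datasets likely come from different populations); a large p-value means Ho cannot be rejected.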

Variance:
This test uses Levene's modified test with the Brown-Forsythe metric to determine whether or not the sample variances are equal. Ho states that the
two variances are equal, while Ha states that they are not. The result of this test directs the following test for comparing sample means: if the variances
are equal, one variant of the T-test is used; if the variances are unequal, a different variant of the T-test is used.

Reference: http://itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
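The Brown-Forsythe variant corresponds to SciPy's Levene test with center='median'. The sketch below illustrates the decision, under invented sample sizes and seed; it is not the KS.exe implementation.

```python
# Illustrative Brown-Forsythe test: Levene's test centered on the median
# (sketch only, not KS.exe's code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 300)   # standard deviation 1
b = rng.normal(0.0, 3.0, 300)   # standard deviation 3

stat, p = stats.levene(a, b, center='median')  # Brown-Forsythe variant

alpha = 0.05
equal_var = p >= alpha   # failing to reject Ho -> treat variances as equal
print("equal variances?", equal_var)   # False here: the variances clearly differ
```

The resulting equal_var flag is exactly what selects the T-test variant in the following section.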

Mean:
This test uses the two variants of the T-test discussed above. It also calculates the effect size from the datasets, which dictates the minimum
separation required between the means of the two samples in order to conclude that the means are statistically different.
The difference in the means of the two samples, minus the effect size, is an estimate of the minimum difference between the means of the two samples.

References: http://itl.nist.gov/div898/handbook/eda/section3/eda353.htm
	    http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Power/BS704_Power_print.html
	    https://en.wikipedia.org/wiki/Student%27s_t-test
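The variance result feeding into the mean test can be sketched with SciPy's ttest_ind, whose equal_var flag switches between the pooled test and Welch's test. The data, seed, and the Cohen's d effect-size formula below are illustrative assumptions, not necessarily what KS.exe computes.

```python
# Illustrative chain: the Brown-Forsythe result selects the t-test variant
# (pooled when equal_var=True, Welch's when equal_var=False).
# A sketch only, not the KS.exe implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(1.0, 2.0, 200)   # different mean and different variance

_, p_var = stats.levene(a, b, center='median')
equal_var = p_var >= 0.05        # the variance test directs the mean test

t_stat, p_mean = stats.ttest_ind(a, b, equal_var=equal_var)
print("reject equal means?", p_mean < 0.05)

# One common effect-size measure (Cohen's d); the program's definition may differ.
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print("effect size d:", d)
```

Here the unequal variances route the comparison through Welch's test, which does not assume a pooled variance.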

Instructions:
1) Place two CSV files (one per dataset) in the same directory as KS.exe. The files should be named "data1.csv" and "data2.csv".
	- Each file should contain a single column of numbers representing that dataset (see the existing files)

2) Running the program
	- Open a terminal and navigate to the directory that contains KS.exe. Type 'KS' to run.
	- Optionally, type 'KS param value', where 'param' is either '-b' for beta value, or '-a' for alpha value, 
	  each followed by its appropriate value. Ex: KS -b 0.2 -a 0.1
	- Default value for '-b' is 0.2. Supported values are 0.2, 0.15, 0.10, 0.05
	- Default value for '-a' is 0.05. Supported values are 0.10, 0.05, 0.025, 0.01, 0.005, 0.001

*Notes: The alpha value represents the likelihood of falsely rejecting Ho (a Type I error). The beta value represents the likelihood of falsely accepting Ho
when it is false (a Type II error). These values are propagated throughout all tests, where relevant.