Systems one wishes to simulate are usually stochastic; that is, they have some random components. Thus the simulation models of those systems are also usually stochastic. The randomness in the models means that each time you run the simulation (each replication) you get different answers (just as each time you go to the fast food restaurant you have to wait a different amount of time to be served). We think of each run of a simulation as a statistical experiment. Just as you don't roll a pair of dice once to estimate the probability of snake eyes (or to decide whether the dice are fair as opposed to loaded), you don't run a simulation just once to come up with an "answer" such as the average length of a queue. One of the pitfalls of simulation listed on page 93 of the text is treating the output of a single replication (single run) as the "true answer." With any luck, after doing this exercise you will believe that is indeed a pitfall!
A simple M/M/1 queuing system has an analytic, closed-form solution: if E(A) denotes the mean interarrival time and E(S) the mean service time, then the following formulas hold (we'll discuss this in class soon):
       utilization:                rho = E(S) / E(A)
       average queue length:       rho^2 / (1 - rho)
       average delay in the queue: rho * E(S) / (1 - rho)
Your textbook generally uses a mean interarrival time of 1 minute and a mean service time of 0.5 minutes in its examples. For those times we get the following theoretical results:
    utilization:   rho = 0.5/1 = 0.5 (meaning the server is busy 50% of the time)
    average queue length:  (0.5)^2 / (1 - 0.5) = 0.5
    average delay:  0.5 * 0.5 / (1 - 0.5) = 0.5
For a mean interarrival time of 4 and a mean service time of 3 we get the following theoretical results:
    utilization:   rho = 3/4 = 0.75 (meaning the server is busy 75% of the time)
    average queue length:  (0.75)^2 / (1 - 0.75) = 2.25
    average delay:  0.75 * 3 / (1 - 0.75) = 9
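If you want to check the arithmetic, or try other parameter values, the formulas are easy to compute directly. The following small standalone C++ program (not part of the course code) simply evaluates the three formulas above:

    #include <iostream>

    int main() {
        double meanInterarrival = 4.0;  // E(A); try 1.0 and 0.5 for the textbook example
        double meanService = 3.0;       // E(S)

        double rho = meanService / meanInterarrival;         // utilization
        double avgQueueLength = rho * rho / (1.0 - rho);     // rho^2 / (1 - rho)
        double avgDelay = rho * meanService / (1.0 - rho);   // rho * E(S) / (1 - rho)

        std::cout << "utilization:          " << rho << "\n"
                  << "average queue length: " << avgQueueLength << "\n"
                  << "average delay:        " << avgDelay << "\n";
        return 0;
    }

With E(A) = 4 and E(S) = 3 this prints 0.75, 2.25, and 9, matching the hand calculations above.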
These numbers represent the "true" long-run averages for the system, but for any given run of a simulation of the system you will not get these numbers exactly (just as, on any given day in the real system, the averages for that day's customers will not be exactly these numbers). The questions are how close the numbers you get are to the "true" values, and how to use the simulation program to get an accurate estimate of them.
Using the C++ Program: The version of the C++ program that simulates the single-server queuing system is written to run the simulation just once (there is another version in the file driverE.cc that adds a loop and lets you "replicate" the simulation -- that is, run it several different times and write the results to a file). Different replications of the simulation differ only in the times at which customers come into the system and the amount of time each needs for service. These times are determined by random numbers. As you may recall, in a computer "random" numbers are actually generated by some sort of formula (we'll learn more of the details later in the course). The basic idea is to start with an initial value (a "seed"), plug it into the formula to get the next random value, plug that in to get the next, and so on. This generates what we call a stream of random numbers. It is very important to start with a good seed (to maximize the chance of getting a stream of numbers that has random-appearing properties). Hence, simulation programs don't leave it up to the user to choose the seed -- "good" seeds are stored somewhere in the program (in an array, in this program) and the user chooses which one to start with (which "stream" -- in this case, an index into the array).
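To make the seed/stream idea concrete, here is a minimal sketch of such a generator. The formula, the constants, and the seed values below are placeholders for illustration only -- the actual mm1 program has its own generator and its own table of stored seeds:

    #include <cstdint>

    // Illustrative linear congruential generator: each call plugs the
    // current value into a formula to produce the next "random" number.
    // The constants and the seed table below are placeholders, NOT the
    // values used in the course's mm1 program.
    class RandomStream {
    public:
        // The user picks a stream by choosing an index into the seed array.
        explicit RandomStream(int streamIndex)
            : current(goodSeeds[streamIndex]) {}

        // Return the next value in this stream as a number in [0, 1).
        double next() {
            current = (a * current) % m;   // the "formula"
            return static_cast<double>(current) / static_cast<double>(m);
        }

    private:
        static constexpr std::int64_t a = 16807;       // multiplier
        static constexpr std::int64_t m = 2147483647;  // modulus (2^31 - 1)
        // "Good" seeds stored in the program itself, one per stream.
        static constexpr std::int64_t goodSeeds[3] = {1973272912, 281629770, 20006270};
        std::int64_t current;
    };

In this sketch, choosing stream 1 for interarrival times would correspond to something like RandomStream arrivals(1); each call to arrivals.next() then produces the next number in that stream.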
Do the following to get set up:

        tar xzf mm1tar.tgz

Look in your directory. You should see that you now have an mm1 subdirectory. Get into this subdirectory and see what's there. It should be the .h and .cc files plus a Makefile.

When run, the program initially requests five inputs.
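To build and run, a session might look like this (the executable name mm1 is a guess based on the directory name; check the Makefile to see what it actually builds):

        make
        ./mm1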
Describe the spread of the data for the three different cases above (480 customers vs. 2000 customers vs. 10000 customers). What differences do you notice?
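As a hint for summarizing your replications: if you collect one estimate per replication in a file, one number per line (as the loop in driverE.cc can do -- the file name results.txt below is just a placeholder), a few lines of C++ will give you the sample mean and sample standard deviation, a standard measure of spread:

    #include <cmath>
    #include <fstream>
    #include <iostream>
    #include <vector>

    // Read one simulation estimate per line (e.g., the average delay from
    // each replication) and report the sample mean and sample standard
    // deviation. "results.txt" is a placeholder; use whatever file your
    // copy of driverE.cc writes.
    int main() {
        std::ifstream in("results.txt");
        std::vector<double> x;
        double v;
        while (in >> v) x.push_back(v);
        if (x.size() < 2) {
            std::cerr << "need at least two replications\n";
            return 1;
        }

        double sum = 0.0;
        for (double xi : x) sum += xi;
        double mean = sum / x.size();

        double ss = 0.0;
        for (double xi : x) ss += (xi - mean) * (xi - mean);
        double stddev = std::sqrt(ss / (x.size() - 1));  // sample std. deviation

        std::cout << "replications: " << x.size() << "\n"
                  << "mean:         " << mean << "\n"
                  << "std dev:      " << stddev << "\n";
        return 0;
    }

Comparing the standard deviations for the 480-, 2000-, and 10000-customer runs is one concrete way to describe how the spread changes.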