Homework #3 Math 2377 June 7, 2000

Question 1) p. 228 Exercise 5.3.

Here is the control chart:

UCL = 100 + 3(8/SQRT(4)) = 112

LCL = 100 – 3(8/SQRT(4)) = 88
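The limits above are just mu ± 3 sigma/SQRT(n) with mu = 100, sigma = 8, n = 4. A quick check of the arithmetic (a Python sketch, for illustration only; the numbers come from the exercise):

```python
import math

mu, sigma, n = 100, 8, 4   # process mean, process s.d., subgroup size (from the exercise)

se = sigma / math.sqrt(n)  # standard error of the subgroup mean
ucl = mu + 3 * se
lcl = mu - 3 * se
print(ucl, lcl)  # 112.0 88.0
```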

Comment: There are 5 out-of-control points. Something seems to be going wrong. However, we may also question whether 4 observations per subgroup is sufficient for the Central Limit Theorem to hold. Here is a histogram of the subgroup averages. It looks somewhat normal, but is not perfect.

Here is a histogram of the data before the first out of control point:

And here is a histogram of the data after the first point that was out of control:


Question 2) p 250 Exercise 5.16

Part (a) Here is the s^2 chart.

s^2(bar)= 0.046417

UCL = chi-squared(3,.00135) (0.046417)/3 = (15.630)(0.046417)/3 = 0.242

LCL = chi-squared(3,.99865) (0.046417)/3 = (0.030)(0.046417)/3 = 0.00046
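For reference, the same limits computed numerically. The chi-squared(3 df) quantiles (15.630 and roughly 0.030) are my lookups from a standard table; s^2(bar) = 0.046417 comes from the data:

```python
# chi-square quantiles with 3 degrees of freedom (n = 4, so n - 1 = 3),
# taken from a standard table:
chi2_hi = 15.630   # P(chi2_3 <= 15.630) ~ 0.99865
chi2_lo = 0.030    # P(chi2_3 <= 0.030)  ~ 0.00135
s2_bar = 0.046417  # average subgroup variance from the data

ucl = chi2_hi * s2_bar / 3
lcl = chi2_lo * s2_bar / 3
print(round(ucl, 3), round(lcl, 5))  # 0.242 0.00046
```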

Comments: The variance appears to be in control.

Part (b) Here is the Xbar chart. Note that this is an Xbar chart based on S (the kind that Minitab will do for you). The question actually asked for an Xbar-S^2 chart, but an Xbar based on S is okay.

The formulas for the UCL and LCL can be found on p. 248 of the textbook. However, these are for the Xbar chart based on s^2.

Comments: The process seems to be out of control.

Part (c) To make an s^2 chart we assume: 1) the process is in control during the base period; 2) the data follow a normal distribution (that is, the data at each point in time when a subgroup was sampled follow a normal distribution). To make an Xbar chart based on s^2 we assume: 1) the process variance is constant, i.e. the s^2 chart is in control; 2) the subgroup means are in control during the base period; 3) the sample size is large enough that the subgroup means follow a normal distribution by the Central Limit Theorem.

Here is a histogram of all the data. It looks like two normal distributions superimposed on each other. One might be tempted to conclude that the data do not come from a normal distribution, and therefore that assumption 2) for the s^2 chart is violated. However, all we really need is normality at each point in time. Since it appears quite clear that the mean is shifting upwards, it is very unlikely that all of the data come from the same normal distribution in the first place. Given that we concluded the variance was well in control, the conclusion that the process mean is out of control seems reasonable.

We can see the data appear to have two bumps. This is what we would expect if the variance was in control but there was a change in the process mean: the overall histogram could be a mixture of two normal histograms with different means.

Question 3) Exercise 5.23.

Here is the np-chart.

Note: p^bar means the letter "p" with a bar (or line) on top of it. I don’t know how to do this in Word.

UCL = n p^bar + 3 SQRT(n p^bar q^bar)

LCL = n p^bar - 3 SQRT(n p^bar q^bar)

Where p^bar is given by the very first formula on the top of page 258, with n=200, m = 30. Here it is in LaTeX-speak:

p^bar = (1/(mn)) \sum_{j=1}^{m} y_j
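As a sketch of the hand calculation (in Python, for illustration; the counts y_j below are made up, and the exercise's actual data should be substituted):

```python
import math

n, m = 200, 30                     # subgroup size and number of subgroups (from the exercise)
# y[j] = number of defectives in subgroup j -- hypothetical counts for illustration only
y = [10, 8, 12, 9, 11, 10, 7, 13, 9, 10, 11, 8, 10, 12, 9,
     10, 11, 9, 8, 12, 10, 9, 11, 10, 8, 12, 9, 10, 11, 10]

p_bar = sum(y) / (m * n)           # the formula from the top of p. 258
q_bar = 1 - p_bar
center = n * p_bar                 # center line of the np-chart
ucl = n * p_bar + 3 * math.sqrt(n * p_bar * q_bar)
lcl = n * p_bar - 3 * math.sqrt(n * p_bar * q_bar)
```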

Doing the calculations by hand results in exactly the same answers as given by Minitab.

Comments: The process seems to be in control.

Question 4.

Here is the CUSUM chart when we do not reset after an alarm:

Here is the CUSUM when we do reset after an alarm:

Here is the s-chart:

Conclusions: The process variance seems to be in control. However, at hour 49, the amount of ash seems to have increased unacceptably. Notice that we don’t see this in the Xbar-chart. (Students were not required to include an Xbar chart. It is here for educational purposes only.)
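The reset-after-alarm behaviour of a tabular (one-sided, upper) CUSUM can be sketched as follows. This is an illustration only: the target, reference value k, decision interval h, and the toy data are placeholders, not the values from the exercise.

```python
def cusum_upper(xs, target, k, h, reset=True):
    """Upper tabular CUSUM: C_i = max(0, C_{i-1} + (x_i - target - k)).

    Returns the list of CUSUM values and the indices where C_i > h (alarms).
    If reset is True, C is set back to 0 after each alarm.
    """
    c, path, alarms = 0.0, [], []
    for i, x in enumerate(xs):
        c = max(0.0, c + (x - target - k))
        path.append(c)
        if c > h:
            alarms.append(i)
            if reset:
                c = 0.0
    return path, alarms

# Toy data: in control for five points, then the mean shifts upwards.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 11.5, 11.7, 11.4, 11.6, 11.8]
path, alarms = cusum_upper(data, target=10.0, k=0.5, h=2.0)
```

Without the reset (reset=False), every point after the first alarm tends to stay above h, which is why the reset version is easier to read when looking for repeated shifts.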

Question 5.

Here is a sample macro used to simulate the party of 10 engineers.

# C1 holds the possible numbers of beers (1, 2, 3); C2 holds their probabilities (1/6, 1/2, 1/3).
Random 10 C3;
  Discrete C1 C2.
Sum C3 K1.
Let C4 = K1
Stack C4 C5 C5.

Then the column of interest that contains our data is C5. For the parties of 20 and 100 engineers, all that changed was the first line: Random 20 C3, or Random 100 C3.
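For readers without Minitab, here is a rough Python equivalent of the macro, using the same distribution (1 beer w.p. 1/6, 2 w.p. 1/2, 3 w.p. 1/3):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def party_total(n_engineers):
    """Total beers drunk: each engineer drinks 1, 2 or 3 w.p. 1/6, 1/2, 1/3."""
    drinks = random.choices([1, 2, 3], weights=[1, 3, 2], k=n_engineers)
    return sum(drinks)

# 10,000 simulated parties of 10 engineers (the macro's stacked C5 column)
totals = [party_total(10) for _ in range(10_000)]
```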

The party of 10. Here is the original histogram.

Below is the normalized histogram. In this case, let T be the r.v. that denotes the total amount of beer consumed, and let X be the r.v. that denotes the amount of beer consumed by one engineer, so T = X_1 + X_2 + … + X_{10}. To normalize we need to subtract the mean of T and divide by the standard deviation of T. Here E[T] = 10E[X] = 10(1(1/6) + 2(1/2) + 3(1/3)) = 10(13/6) = 130/6 = 21.67. To get the STD of T, we first calculate Var(X) = E[X^2] - (E[X])^2 = (1(1/6) + 4(1/2) + 9(1/3)) - (13/6)^2 = 31/6 - 4.694 = 0.472, so Var(T) = 10Var(X) = 10(0.472) = 4.72 and STD(T) = SQRT(4.72) = 2.17.
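These moment calculations can be checked exactly with fractions:

```python
from fractions import Fraction as F

# X = beers for one engineer: values 1, 2, 3 with probabilities 1/6, 1/2, 1/3
pmf = {1: F(1, 6), 2: F(1, 2), 3: F(1, 3)}

EX = sum(x * p for x, p in pmf.items())        # E[X]   = 13/6
EX2 = sum(x * x * p for x, p in pmf.items())   # E[X^2] = 31/6
VarX = EX2 - EX**2                             # 17/36 ~ 0.472

ET = 10 * EX                 # 130/6 ~ 21.67
VarT = 10 * VarX             # 170/36 ~ 4.72
std_T = float(VarT) ** 0.5   # ~ 2.17
```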

Here is the normalized histogram:

By the method described below (for the party of 20), we see that we should buy 25 beers to be 90% sure we have enough.

Here is the histogram for the party of 20 engineers:


Once again, to normalize this histogram, we need to subtract the mean from every observation and divide by the standard deviation (for every observation). This time the mean of T = E[T] = 20E[X] = 20(13/6) = 43.33, and STD(T) = SQRT(20(0.472)) = SQRT(9.44) = 3.07.

By looking at the histogram, and by finding which number has 90% of the area of the histogram to the left of it (or alternatively by sorting the column of simulated data and finding the number which 90% of the data are less than) we see that we should buy 47 beers to be 90% sure we have enough for everyone.
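The sort-the-column method can be sketched like this (re-simulating the party of 20 for illustration; the figure of 47 beers came from the original simulated data, so a fresh simulation will land close to it but need not match exactly):

```python
import random

random.seed(2)  # fixed seed so the run is reproducible

def party_total(n_engineers):
    """Each engineer drinks 1, 2 or 3 beers w.p. 1/6, 1/2, 1/3."""
    return sum(random.choices([1, 2, 3], weights=[1, 3, 2], k=n_engineers))

# Simulate 10,000 parties of 20 engineers and sort the totals.
totals = sorted(party_total(20) for _ in range(10_000))

# The value with 90% of the data at or below it: buy this many beers.
stock = totals[int(0.9 * len(totals))]
```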

Here is the normalized histogram:

Here is the histogram for the party of 100 engineers.

By the same method as above, we see that we should buy 225 beers to be 90% sure that we have enough.

Now to normalize we subtract the mean of 100(2.1667) = 216.67, and divide by the standard deviation of

SQRT(100(0.472)) = SQRT(47.2) = 6.87.

Here is the normalized histogram:


Here is a standard normal curve:

We can see that in all three cases, the normalized histograms from our simulated parties are quite close to the theoretical curve, and the more people at the party, the closer the resulting histogram is to the standard normal curve. (This is a consequence of the Central Limit Theorem: the more r.v.'s you add together, the more normal the sum will appear.)