Chapter 9. Independent Classes of Random Variables
9.1. Independent Classes of Random Variables^{*}
Introduction
The concept of independence for classes of events is developed in terms of a product rule. In this unit, we extend the concept to classes of random variables.
Independent pairs
Recall that for a random variable X, the inverse image X^{–1}(M) (i.e., the set of all outcomes ω∈Ω which are mapped into M by X) is an event for each reasonable subset M on the real line. Similarly, the inverse image Y^{–1}(N) is an event determined by random variable Y for each reasonable set N. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine. More precisely,
Definition
A pair {X,Y} of random variables is (stochastically) independent iff each pair of events {X^{–1}(M), Y^{–1}(N)} is independent.
This condition may be stated in terms of the product rule

P(X∈M, Y∈N) = P(X∈M)P(Y∈N) for all reasonable sets M, N

Independence implies

F_{XY}(t,u) = P(X∈(–∞,t], Y∈(–∞,u]) = P(X∈(–∞,t])P(Y∈(–∞,u]) = F_{X}(t)F_{Y}(u) for all t, u
Note that the product rule on the distribution function is equivalent to the condition that the product rule holds for the inverse images of a special class of sets {M,N} of the form M=(–∞,t] and N=(–∞,u]. An important theorem from measure theory ensures that if the product rule holds for this special class, it holds for the general class of {M,N}. Thus we may assert
The pair {X,Y} is independent iff the following product rule holds:

F_{XY}(t,u) = F_{X}(t)F_{Y}(u) for all t, u
Suppose F_{XY}(t,u) = (1 – e^{–αt})(1 – e^{–βu}), 0≤t, 0≤u. Taking limits shows

F_{X}(t) = lim_{u→∞} F_{XY}(t,u) = 1 – e^{–αt} and F_{Y}(u) = 1 – e^{–βu}

so that the product rule F_{XY}(t,u) = F_{X}(t)F_{Y}(u) holds. The pair is therefore independent.
If there is a joint density function, then the relationship to the joint distribution function makes it clear that the pair is independent iff the product rule holds for the density. That is, the pair is independent iff

f_{XY}(t,u) = f_{X}(t)f_{Y}(u) for all t, u
Suppose the joint probability mass distribution induced by the pair {X,Y} is uniform on a rectangle with sides I_{1} = [a,b] and I_{2} = [c,d]. Since the area is (b–a)(d–c), the constant value of f_{XY} is 1/(b–a)(d–c). Simple integration gives

f_{X}(t) = 1/(b–a), a≤t≤b and f_{Y}(u) = 1/(d–c), c≤u≤d

Thus it follows that X is uniform on [a,b], Y is uniform on [c,d], and f_{XY}(t,u) = f_{X}(t)f_{Y}(u) for all t, u, so that the pair {X,Y} is independent. The converse is also true: if the pair is independent with X uniform on [a,b] and Y uniform on [c,d], then the pair has uniform joint distribution on I_{1}×I_{2}.
The joint mass distribution
It should be apparent that the independence condition puts restrictions on the character of the joint mass distribution on the plane. In order to describe this more succinctly, we employ the following terminology.
Definition
If M is a subset of the horizontal axis and N is a subset of the vertical axis, then the cartesian product M×N is the (generalized) rectangle consisting of those points (t,u) on the plane such that t∈M and u∈N.
The rectangle in Example 9.2 is the Cartesian product I_{1}×I_{2}, consisting of all those points (t,u) such that a≤t≤b and c≤u≤d (i.e., t∈I_{1} and u∈I_{2}).
We restate the product rule for independence in terms of Cartesian product sets:

P((X,Y) ∈ M×N) = P(X∈M, Y∈N) = P(X∈M)P(Y∈N)
Reference to Figure 9.1 illustrates the basic pattern. If M, N are intervals on the horizontal and vertical axes, respectively, then the rectangle M×N is the intersection of the vertical strip meeting the horizontal axis in M with the horizontal strip meeting the vertical axis in N. The probability X∈M is the portion of the joint probability mass in the vertical strip; the probability Y∈N is the part of the joint probability in the horizontal strip. The probability in the rectangle is the product of these marginal probabilities.
This suggests a useful test for nonindependence which we call the rectangle test. We illustrate with a simple example.
Suppose probability mass is uniformly distributed over the square with vertices at (1,0), (2,1), (1,2), (0,1). It is evident from Figure 9.2 that a value of X determines the possible values of Y and vice versa, so that we would not expect independence of the pair. To establish this, consider the small rectangle M×N shown on the figure. There is no probability mass in the region, so P(X∈M, Y∈N) = 0. Yet P(X∈M) > 0 and P(Y∈N) > 0, so that

P(X∈M)P(Y∈N) > 0, but P(X∈M, Y∈N) = 0. The product rule fails; hence the pair cannot be stochastically independent.
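The rectangle test lends itself to a direct computational sketch. The following Python fragment is illustrative only: the chapter's tools are MATLAB m-functions, and the helper `rectangle_test` and the coarse grid used here are our own assumptions, not part of the text's toolbox.

```python
# Hypothetical sketch of the rectangle test.

def rectangle_test(joint, in_M, in_N):
    """joint maps points (t, u) to probability mass; in_M, in_N are
    predicates selecting sets M and N on the two axes.  Returns True
    when the test PROVES the pair is not independent."""
    pM  = sum(p for (t, u), p in joint.items() if in_M(t))
    pN  = sum(p for (t, u), p in joint.items() if in_N(u))
    pMN = sum(p for (t, u), p in joint.items() if in_M(t) and in_N(u))
    return pM > 0 and pN > 0 and pMN == 0

# Mass spread uniformly over grid points of the square with vertices
# (1,0), (2,1), (1,2), (0,1), as in the example above.
pts = [(i * 0.25, j * 0.25) for i in range(9) for j in range(9)
       if abs(i * 0.25 - 1) + abs(j * 0.25 - 1) <= 1]
joint = {pt: 1 / len(pts) for pt in pts}

# Small rectangle near the corner (0,0): no joint mass, positive marginals
print(rectangle_test(joint, lambda t: t <= 0.25, lambda u: u <= 0.25))  # True
```

As in the text, a True result proves non-independence; a False result proves nothing.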
Remark. There are nonindependent cases for which this test does not work. And it does not provide a test for independence. In spite of these limitations, it is frequently useful. Because of the information contained in the independence condition, in many cases the complete joint and marginal distributions may be obtained with appropriate partial information. The following is a simple example.
Suppose the pair is independent and each has three possible values. The following four items of information are available.
These values are shown in bold type on Figure 9.3. A combination of the product rule and the fact that the total probability mass is one is used to calculate each of the marginal and joint probabilities. For example, the given entries and the requirement that the marginal probabilities for each variable sum to one determine the remaining marginals; each joint probability is then the product of the corresponding marginals. There is no unique procedure for solution, and it has not seemed useful to develop MATLAB procedures to accomplish this.
A pair {X,Y} has the joint normal distribution iff the joint density is

f_{XY}(t,u) = (1/(2πσ_{X}σ_{Y}√(1–ρ²))) e^{–Q(t,u)/2}

where

Q(t,u) = (1/(1–ρ²))[((t–μ_{X})/σ_{X})² – 2ρ((t–μ_{X})/σ_{X})((u–μ_{Y})/σ_{Y}) + ((u–μ_{Y})/σ_{Y})²]

The marginal densities are obtained with the aid of some algebraic tricks to integrate the joint density. The result is that X ∼ N(μ_{X}, σ_{X}²) and Y ∼ N(μ_{Y}, σ_{Y}²). If the parameter ρ is set to zero, the result is

f_{XY}(t,u) = f_{X}(t)f_{Y}(u)
so that the pair is independent iff ρ=0. The details are left as an exercise for the interested reader.
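For the interested reader, the factorization at ρ = 0 can be sketched as follows, assuming the joint density and Q(t,u) have the standard bivariate normal form:

```latex
f_{XY}(t,u)\Big|_{\rho=0}
  = \frac{1}{2\pi\sigma_X\sigma_Y}
    \exp\!\left(-\frac{(t-\mu_X)^2}{2\sigma_X^2}
                -\frac{(u-\mu_Y)^2}{2\sigma_Y^2}\right)
  = \underbrace{\frac{1}{\sigma_X\sqrt{2\pi}}
      e^{-(t-\mu_X)^2/2\sigma_X^2}}_{f_X(t)}
    \cdot
    \underbrace{\frac{1}{\sigma_Y\sqrt{2\pi}}
      e^{-(u-\mu_Y)^2/2\sigma_Y^2}}_{f_Y(u)}
```

Conversely, if ρ ≠ 0 the cross term in Q(t,u) prevents the exponent from separating, so the density cannot factor.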
Remark. While it is true that every independent pair of normally distributed random variables is joint normal, not every pair of normally distributed random variables has the joint normal distribution.
We start with the distribution for a joint normal pair and derive a joint distribution for a normal pair which is not joint normal. The function

φ(t,u) = (1/2π) e^{–(t²+u²)/2}

is the joint normal density for an independent pair (ρ=0) of standardized normal random variables. Now define the joint density for a pair {X,Y} by

f_{XY}(t,u) = 2φ(t,u) in the first and third quadrants, and zero elsewhere

Both X ∼ N(0,1) and Y ∼ N(0,1). However, they cannot be joint normal, since the joint normal distribution is positive for all (t,u).
Independent classes
Since independence of random variables is independence of the events determined by the random variables, extension to general classes is simple and immediate.
Definition
A class {X_{i} : i ∈ J} of random variables is (stochastically) independent iff the product rule holds for every finite subclass of two or more of the random variables.
Remark. The index set J in the definition may be finite or infinite.
For a finite class {X_{i} : 1 ≤ i ≤ n}, independence is equivalent to the product rule

F_{X_{1}⋯X_{n}}(t_{1}, ⋯, t_{n}) = ∏_{i=1}^{n} F_{X_{i}}(t_{i}) for all (t_{1}, ⋯, t_{n})
Since we may obtain the joint distribution function for any finite subclass by letting the arguments for the others be ∞ (i.e., by taking the limits as the appropriate t_{i} increase without bound), the single product rule suffices to account for all finite subclasses.
Absolutely continuous random variables
If a class is independent and the individual variables are absolutely continuous (i.e., have densities), then any finite subclass is jointly absolutely continuous and the product rule holds for the densities of such subclasses:

f_{X_{1}⋯X_{n}}(t_{1}, ⋯, t_{n}) = ∏_{i=1}^{n} f_{X_{i}}(t_{i})
Similarly, if each finite subclass is jointly absolutely continuous, then each individual variable is absolutely continuous and the product rule holds for the densities. Frequently we deal with independent classes in which each random variable has the same marginal distribution. Such classes are referred to as iid classes (an acronym for independent, identically distributed). Examples are simple random samples from a given population, or the results of repetitive trials with the same distribution on the outcome of each component trial. A Bernoulli sequence is a simple example.
Simple random variables
Consider a pair {X,Y} of simple random variables in canonical form

X = ∑_{i=1}^{n} t_{i}I_{A_{i}}, Y = ∑_{j=1}^{m} u_{j}I_{B_{j}}

Since A_{i} = {X = t_{i}} and B_{j} = {Y = u_{j}}, the pair {X,Y} is independent iff each of the pairs {A_{i}, B_{j}} is independent. The joint distribution has probability mass at each point (t_{i}, u_{j}) in the range of the pair. Thus at every point on the grid,

P(X = t_{i}, Y = u_{j}) = P(X = t_{i})P(Y = u_{j})
According to the rectangle test, no gridpoint having one of the t_{i} or u_{j} as a coordinate has zero probability mass. The marginal distributions determine the joint distribution. If X has n distinct values and Y has m distinct values, then the n+m marginal probabilities suffice to determine the m·n joint probabilities. Since the marginal probabilities for each variable must add to one, only (n–1)+(m–1)=m+n–2 values are needed.
Suppose X and Y are in affine form. That is,

X = c_{0} + ∑_{i=1}^{n} c_{i}I_{E_{i}}, Y = d_{0} + ∑_{j=1}^{m} d_{j}I_{F_{j}}

Since each A_{i} = {X = t_{i}} is the union of minterms generated by the E_{i} and each B_{j} = {Y = u_{j}} is the union of minterms generated by the F_{j}, the pair {X,Y} is independent iff each pair of minterms generated by the two classes, respectively, is independent. Independence of the minterm pairs is implied by independence of the combined class

{E_{i}, F_{j} : 1 ≤ i ≤ n, 1 ≤ j ≤ m}
Calculations in the joint simple case are readily handled by appropriate m-functions and m-procedures.
MATLAB and independent simple random variables
In the general case of pairs of joint simple random variables we have the m-procedure jcalc, which uses information in matrices X, Y, and P to determine the marginal probabilities and the calculation matrices t and u. In the independent case, we need only the marginal distributions in matrices X, PX, Y, and PY to determine the joint probability matrix (hence the joint distribution) and the calculation matrices t and u. If the random variables are given in canonical form, we have the marginal distributions. If they are in affine form, we may use canonic (or the function form canonicf) to obtain the marginal distributions.
Once we have both marginal distributions, we use an m-procedure we call icalc. Formation of the joint probability matrix is simply a matter of determining all the joint probabilities

P(X = t_{i}, Y = u_{j}) = P(X = t_{i})P(Y = u_{j})
Once these are calculated, formation of the calculation matrices t and u is achieved exactly as in jcalc.
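The core of that formation step, each joint probability as a product of marginals, can be sketched outside MATLAB. The following Python function is a hypothetical stand-in for what icalc does internally; the name `joint_matrix` and the row ordering are our assumptions, chosen to match how icalc displays P (Y-values decreasing down the rows).

```python
def joint_matrix(PX, PY):
    """Joint probability matrix for an independent pair, with rows
    indexed by Y-values in decreasing order (assumed icalc convention)."""
    return [[py * px for px in PX] for py in reversed(PY)]

# Marginals from the example below
PX = [0.12, 0.18, 0.27, 0.19, 0.24]
PY = [0.15, 0.43, 0.31, 0.11]          # for Y = 0, 1, 2, 4
P = joint_matrix(PX, PY)

# Top-left entry pairs the largest Y-value with the smallest X-value
print(round(P[0][0], 4))  # 0.0132 = 0.11 * 0.12
```

Every entry is a product of one PX value and one PY value, so the matrix sums to one automatically.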
X = [-4 -2 0 1 3];
Y = [0 1 2 4];
PX = 0.01*[12 18 27 19 24];
PY = 0.01*[15 43 31 11];
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
disp(P)                % Optional display of the joint matrix
    0.0132    0.0198    0.0297    0.0209    0.0264
    0.0372    0.0558    0.0837    0.0589    0.0744
    0.0516    0.0774    0.1161    0.0817    0.1032
    0.0180    0.0270    0.0405    0.0285    0.0360
disp(t)                % Calculation matrix t
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
    -4    -2     0     1     3
disp(u)                % Calculation matrix u
     4     4     4     4     4
     2     2     2     2     2
     1     1     1     1     1
     0     0     0     0     0
M = (t>=-3)&(t<=2);    % M = [-3, 2]
PM = total(M.*P)       % P(X in M)
PM = 0.6400
N = (u>0)&(u.^2<=15);  % N = {u: u > 0, u^2 <= 15}
PN = total(N.*P)       % P(Y in N)
PN = 0.7400
Q = M&N;               % Rectangle MxN
PQ = total(Q.*P)       % P((X,Y) in MxN)
PQ = 0.4736
p = PM*PN
p = 0.4736             % P((X,Y) in MxN) = P(X in M)P(Y in N)
As an example, consider again the problem of joint Bernoulli trials described in the treatment of Composite trials.
Bill and Mary take ten basketball free throws each. We assume the two sequences of trials are independent of each other, and each is a Bernoulli sequence.
Mary: Has probability 0.80 of success on each trial.
Bill: Has probability 0.85 of success on each trial.
What is the probability Mary makes more free throws than Bill?
SOLUTION
Let X be the number of goals that Mary makes and Y be the number that Bill makes. Then X∼ binomial (10, 0.8) and Y∼ binomial (10, 0.85).
X = 0:10;
Y = 0:10;
PX = ibinom(10,0.8,X);
PY = ibinom(10,0.85,Y);
icalc
Enter row matrix of X-values  X    % Could enter 0:10
Enter row matrix of Y-values  Y    % Could enter 0:10
Enter X probabilities  PX          % Could enter ibinom(10,0.8,X)
Enter Y probabilities  PY          % Could enter ibinom(10,0.85,Y)
Use array operations on matrices X, Y, PX, PY, t, u, and P
PM = total((t>u).*P)
PM = 0.2738            % Agrees with solution in Example 9 from "Composite Trials".
Pe = total((u==t).*P)  % Additional information is more easily
Pe = 0.2276            % obtained than in the event formulation
Pm = total((t>=u).*P)  % of Example 9 from "Composite Trials".
Pm = 0.5014
Twelve world class sprinters in a meet are running in two heats of six persons each. Each runner has a reasonable chance of breaking the track record. We suppose results for individuals are independent.
First heat probabilities: 0.61 0.73 0.55 0.81 0.66 0.43
Second heat probabilities: 0.75 0.48 0.62 0.58 0.77 0.51
Compare the two heats for numbers who break the track record.
SOLUTION
Let X be the number of successes in the first heat and Y be the number who are successful in the second heat. Then the pair is independent. We use the m-function canonicf to determine the distributions for X and for Y, then icalc to get the joint distribution.
c1 = [ones(1,6) 0];
c2 = [ones(1,6) 0];
P1 = [0.61 0.73 0.55 0.81 0.66 0.43];
P2 = [0.75 0.48 0.62 0.58 0.77 0.51];
[X,PX] = canonicf(c1,minprob(P1));
[Y,PY] = canonicf(c2,minprob(P2));
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
Pm1 = total((t>u).*P)   % Prob first heat has most
Pm1 = 0.3986
Pm2 = total((u>t).*P)   % Prob second heat has most
Pm2 = 0.3606
Peq = total((t==u).*P)  % Prob both have the same
Peq = 0.2408
Px3 = (X>=3)*PX'        % Prob first has 3 or more
Px3 = 0.8708
Py3 = (Y>=3)*PY'        % Prob second has 3 or more
Py3 = 0.8525
As in the case of jcalc, we have an m-function version, icalcf.
We have a related m-function idbn for obtaining the joint probability matrix from the marginal probabilities. Its formation of the joint matrix utilizes the same operations as icalc.
PX = 0.1*[3 5 2];
PY = 0.01*[20 15 40 25];
P = idbn(PX,PY)
P =
    0.0750    0.1250    0.0500
    0.1200    0.2000    0.0800
    0.0450    0.0750    0.0300
    0.0600    0.1000    0.0400
An m-procedure itest checks a joint distribution for independence. It does this by calculating the marginals, then forming an independent joint test matrix, which is compared with the original. We do not ordinarily exhibit the matrix P to be tested. However, this is a case in which the product rule holds for most of the minterms, and it would be very difficult to pick out those for which it fails. The m-procedure simply checks all of them.
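The idea behind itest can be sketched in a few lines of Python. The helper `is_independent` below is our own hypothetical stand-in, not the m-procedure itself; the real itest additionally reports the matrix D locating the failures.

```python
def is_independent(P, tol=1e-9):
    """Compare a joint probability matrix with the product of its
    marginals, entry by entry (rows indexed by Y, columns by X)."""
    PX = [sum(col) for col in zip(*P)]   # column sums: X marginals
    PY = [sum(row) for row in P]         # row sums:    Y marginals
    return all(abs(P[i][j] - PY[i] * PX[j]) <= tol
               for i in range(len(P)) for j in range(len(P[0])))

# A 2x2 joint matrix that factors as an outer product of its marginals
print(is_independent([[0.06, 0.14], [0.24, 0.56]]))  # True
# A diagonal joint matrix cannot factor
print(is_independent([[0.5, 0.0], [0.0, 0.5]]))      # False
```

The tolerance parameter plays the same role as the small numerical slack any floating-point test of the product rule needs.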
idemo1                 % Joint matrix in datafile idemo1
P =
    0.0091    0.0147    0.0035    0.0049    0.0105    0.0161    0.0112
    0.0117    0.0189    0.0045    0.0063    0.0135    0.0207    0.0144
    0.0104    0.0168    0.0040    0.0056    0.0120    0.0184    0.0128
    0.0169    0.0273    0.0065    0.0091    0.0095    0.0299    0.0208
    0.0052    0.0084    0.0020    0.0028    0.0060    0.0092    0.0064
    0.0169    0.0273    0.0065    0.0091    0.0195    0.0299    0.0208
    0.0104    0.0168    0.0040    0.0056    0.0120    0.0184    0.0128
    0.0078    0.0126    0.0030    0.0042    0.0190    0.0138    0.0096
    0.0117    0.0189    0.0045    0.0063    0.0135    0.0207    0.0144
    0.0091    0.0147    0.0035    0.0049    0.0105    0.0161    0.0112
    0.0065    0.0105    0.0025    0.0035    0.0075    0.0115    0.0080
    0.0143    0.0231    0.0055    0.0077    0.0165    0.0253    0.0176
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent    % Result of test
To see where the product rule fails, call for D
disp(D)                % Optional call for D
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     1     1     1     1     1     1     1
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
     0     0     0     0     0     0     0
Next, we consider an example in which the pair is known to be independent.
jdemo3       % call for data in m-file
disp(P)      % call to display P
    0.0132    0.0198    0.0297    0.0209    0.0264
    0.0372    0.0558    0.0837    0.0589    0.0744
    0.0516    0.0774    0.1161    0.0817    0.1032
    0.0180    0.0270    0.0405    0.0285    0.0360
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent        % Result of test
The procedure icalc can be extended to deal with an independent class of three random variables. We call the m-procedure icalc3. The following is a simple example of its use.
X = 0:4;
Y = 1:2:7;
Z = 0:3:12;
PX = 0.1*[1 3 2 3 1];
PY = 0.1*[2 2 3 3];
PZ = 0.1*[2 2 1 3 2];
icalc3
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter row matrix of Z-values  Z
Enter X probabilities  PX
Enter Y probabilities  PY
Enter Z probabilities  PZ
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
G = 3*t + 2*u - 4*v;    % W = 3X + 2Y - 4Z
[W,PW] = csort(G,P);    % Distribution for W
PG = total((G>0).*P)    % P(g(X,Y,Z) > 0)
PG = 0.3370
Pg = (W>0)*PW'          % P(W > 0)
Pg = 0.3370
An m-procedure icalc4 to handle an independent class of four variables is also available. Also several variations of the m-function mgsum and the m-function diidsum are used for obtaining distributions for sums of independent random variables. We consider them in various contexts in other units.
Approximation for the absolutely continuous case
In the study of functions of random variables, we show that an approximating simple random variable X_{s} of the type we use is a function of the random variable X which is approximated. Also, we show that if {X,Y} is an independent pair, so is {g(X),h(Y)} for any reasonable functions g and h. Thus if {X,Y} is an independent pair, so is any pair of approximating simple functions of the type considered. Now it is theoretically possible for the approximating pair to be independent, yet have the approximated pair {X,Y} not independent. But this is highly unlikely. For all practical purposes, we may consider {X,Y} to be independent iff the approximating pair {X_{s},Y_{s}} is independent. When in doubt, consider a second pair of approximating simple functions with more subdivision points. This decreases even further the likelihood of a false indication of independence by the approximating random variables.
Suppose X∼ exponential (3) and Y∼ exponential (2) with

f_{XY}(t,u) = f_{X}(t)f_{Y}(u) = 6e^{–3t}e^{–2u} = 6e^{–(3t+2u)}, 0≤t, 0≤u

Since e^{–12} ≈ 6×10^{–6}, we approximate X for values up to 4 and Y for values up to 6.
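The choice of approximation range can be checked directly: the neglected tails are P(X > 4) = e^{–3·4} and P(Y > 6) = e^{–2·6}, both equal to e^{–12}. A quick sketch:

```python
import math

# Both truncation errors equal exp(-12), which is negligible
# relative to the grid spacing used by tuappr below.
tail = math.exp(-12)
print(tail < 1e-5)  # True; exp(-12) is about 6.1e-06
```

So cutting the X-range at 4 and the Y-range at 6 discards only about six parts per million of the probability mass.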
tuappr
Enter matrix [a b] of X-range endpoints  [0 4]
Enter matrix [c d] of Y-range endpoints  [0 6]
Enter number of X approximation points  200
Enter number of Y approximation points  300
Enter expression for joint density  6*exp(-(3*t + 2*u))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent
The pair {X,Y} has joint density f_{XY}(t,u) = 4tu, 0≤t≤1, 0≤u≤1. It is easy enough to determine the marginals in this case. By symmetry, they are the same.

f_{X}(t) = 2t, 0≤t≤1 and f_{Y}(u) = 2u, 0≤u≤1

so that f_{XY} = f_{X}f_{Y}, which ensures the pair is independent. Consider the solution using tuappr and itest.
tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  4*t.*u
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent
9.2. Problems on Independent Classes of Random Variables^{*}
The pair {X,Y} has the joint distribution (in m-file npr08_06.m):
Determine whether or not the pair {X,Y} is independent.
npr08_06
Data are in X, Y, P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
disp(D)
     0     0     0     1     1
     0     0     0     1     1
     1     1     1     1     1
     1     1     1     1     1
The pair {X,Y} has the joint distribution (in m-file npr09_02.m):
Determine whether or not the pair {X,Y} is independent.
npr09_02
Data are in X, Y, P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
disp(D)
     0     0     0     0     0
     0     1     1     0     0
     0     1     1     0     0
     0     0     0     0     0
The pair {X,Y} has the joint distribution (in m-file npr08_07.m):
t =         -3.1      -0.5       1.2       2.4       3.7       4.9
u = 7.5   0.0090    0.0396    0.0594    0.0216    0.0440    0.0203
    4.1   0.0495    0         0.1089    0.0528    0.0363    0.0231
   -2.0   0.0405    0.1320    0.0891    0.0324    0.0297    0.0189
   -3.8   0.0510    0.0484    0.0726    0.0132    0         0.0077
Determine whether or not the pair {X,Y} is independent.
npr08_07
Data are in X, Y, P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
disp(D)
     1     1     1     1     1     1
     1     1     1     1     1     1
     1     1     1     1     1     1
     1     1     1     1     1     1
For the distributions in Exercises 4-10 below:

a. Determine whether or not the pair {X,Y} is independent.
b. Use a discrete approximation and an independence test to verify results in part (a).
f_{XY}(t,u)=1/π on the circle with radius one, center at (0,0).
Not independent by the rectangle test.
tuappr
Enter matrix [a b] of X-range endpoints  [-1 1]
Enter matrix [c d] of Y-range endpoints  [-1 1]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  (1/pi)*(t.^2 + u.^2<=1)
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D   % Not practical-- too large
f_{XY}(t,u)=1/2 on the square with vertices at (1,0), (2,1), (1,2), (0,1) (see Exercise 11 from "Problems on Random Vectors and Joint Distributions").
Not independent, by the rectangle test.
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  (1/2)*(u<=min(1+t,3-t)).* ...
    (u>=max(1-t,t-1))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
f_{XY}(t,u)=4t(1–u) for 0≤t≤1, 0≤u≤1 (see Exercise 12 from "Problems on Random Vectors and Joint Distributions").
From the solution for Exercise 12 from "Problems on Random Vectors and Joint Distributions" we have

f_{X}(t) = 2t, 0≤t≤1 and f_{Y}(u) = 2(1–u), 0≤u≤1

so that f_{XY} = f_{X}f_{Y} and the pair is independent.
tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  4*t.*(1-u)
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent
f_{XY}(t,u)=(1/8)(t+u) for 0≤t≤2, 0≤u≤2 (see Exercise 13 from "Problems on Random Vectors and Joint Distributions").
From the solution of Exercise 13 from "Problems on Random Vectors and Joint Distributions" we have

f_{X}(t) = (1/4)(t+1), 0≤t≤2 and f_{Y}(u) = (1/4)(u+1), 0≤u≤2

so f_{XY} ≠ f_{X}f_{Y}, which implies the pair is not independent.
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  100
Enter number of Y approximation points  100
Enter expression for joint density  (1/8)*(t+u)
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
f_{XY}(t,u)=4ue^{–2t} for 0≤t,0≤u≤1 (see Exercise 14 from "Problems on Random Vectors and Joint Distributions").
From the solution for Exercise 14 from "Problems on Random Vectors and Joint Distributions" we have

f_{X}(t) = 2e^{–2t}, 0≤t and f_{Y}(u) = 2u, 0≤u≤1

so that f_{XY} = f_{X}f_{Y} and the pair is independent.
tuappr
Enter matrix [a b] of X-range endpoints  [0 5]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  500
Enter number of Y approximation points  100
Enter expression for joint density  4*u.*exp(-2*t)
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent    % Product rule holds to within 10^{-9}
f_{XY}(t,u)=12t^{2}u on the parallelogram with vertices (–1,0), (0,0), (1,1), (0,1) (see Exercise 16 from "Problems on Random Vectors and Joint Distributions").
Not independent by the rectangle test.
tuappr
Enter matrix [a b] of X-range endpoints  [-1 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  100
Enter expression for joint density  12*t.^2.*u.*(u<=min(t+1,1)).* ...
    (u>=max(0,t))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
f_{XY}(t,u)=(24/11)tu for 0≤t≤2, 0≤u≤min{1,2–t} (see Exercise 17 from "Problems on Random Vectors and Joint Distributions").
By the rectangle test, the pair is not independent.
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  100
Enter expression for joint density  (24/11)*t.*u.*(u<=min(1,2-t))
Use array operations on X, Y, PX, PY, t, u, and P
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is NOT independent
To see where the product rule fails, call for D
Two software companies, MicroWare and BusiCorp, are preparing a new business package in time for a computer trade show 180 days in the future. They work independently. MicroWare has anticipated completion time, in days, exponential (1/150). BusiCorp has time to completion, in days, exponential (1/130). What is the probability both will complete on time; that at least one will complete on time; that neither will complete on time?
p1 = 1 - exp(-180/150)
p1 = 0.6988
p2 = 1 - exp(-180/130)
p2 = 0.7496
Pboth = p1*p2
Pboth = 0.5238
Poneormore = 1 - (1 - p1)*(1 - p2)   % 1 - Pneither
Poneormore = 0.9246
Pneither = (1 - p1)*(1 - p2)
Pneither = 0.0754
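The same computation can be checked in a Python sketch; the formula 1 – e^{–t/mean} is just the exponential CDF used above.

```python
import math

# P(complete within 180 days) for an exponential completion time
p1 = 1 - math.exp(-180 / 150)        # MicroWare
p2 = 1 - math.exp(-180 / 130)        # BusiCorp
p_both       = p1 * p2               # both on time (independence)
p_neither    = (1 - p1) * (1 - p2)
p_at_least_1 = 1 - p_neither
print(round(p_both, 4), round(p_at_least_1, 4), round(p_neither, 4))
# 0.5238 0.9246 0.0754
```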
Eight similar units are put into operation at a given time. The time to failure (in hours) of each unit is exponential (1/750). If the units fail independently, what is the probability that five or more units will be operating at the end of 500 hours?
p = exp(-500/750);    % Probability any one will survive
P = cbinom(8,p,5)     % Probability five or more will survive
P = 0.3930
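Since cbinom belongs to the text's MATLAB toolbox, a reader without it can reproduce the number with a small stand-in; the helper `cbinom` below is our own sketch of "P(at least k successes in n trials)".

```python
import math

def cbinom(n, p, k):
    """P(at least k successes in n Bernoulli(p) trials);
    a stand-in for the cbinom m-function."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

p = math.exp(-500 / 750)          # probability any one unit survives 500 hours
print(round(cbinom(8, p, 5), 4))  # 0.3930
```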
The location of ten points along a line may be considered iid random variables with symmetric triangular distribution on [1,3]. What is the probability that three or more will lie within distance 1/2 of the point t=2?
p = 0.75;             % Probability a point lies within 1/2 of t = 2
P = cbinom(10,p,3)
P = 0.9996
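The value p = 0.75 follows from the triangular CDF; a sketch, assuming (as the symmetry implies) that the density peaks at the midpoint t = 2:

```python
import math

# Symmetric triangular on [1, 3] with peak at t = 2:
# CDF F(t) = (t-1)^2/2 on [1, 2] and 1 - (3-t)^2/2 on [2, 3].
def F(t):
    return (t - 1)**2 / 2 if t <= 2 else 1 - (3 - t)**2 / 2

p = F(2.5) - F(1.5)          # P(within 1/2 of t = 2) = 0.75
# P(three or more of the ten points fall in that interval)
P = sum(math.comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(3, 11))
print(round(p, 2), round(P, 4))  # 0.75 0.9996
```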
A Christmas display has 200 lights. The times to failure are iid, exponential (1/10000). The display is on continuously for 750 hours (approximately one month). Determine the probability the number of lights which survive the entire period is at least 175, 180, 185, 190.
p = exp(-750/10000)
p = 0.9277
k = 175:5:190;
P = cbinom(200,p,k);
disp([k;P]')
  175.0000    0.9973
  180.0000    0.9449
  185.0000    0.6263
  190.0000    0.1381
A critical module in a network server has time to failure (in hours of machine time) exponential (1/3000). The machine operates continuously, except for brief times for maintenance or repair. The module is replaced routinely every 30 days (720 hours), unless failure occurs. If successive units fail independently, what is the probability of no breakdown due to the module for one year?
p = exp(-720/3000)
p = 0.7866            % Probability any unit survives
P = p^12              % Probability all twelve survive (assuming 12 periods)
P = 0.056
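A quick check of the two numbers (sketch only): the year is treated as twelve independent 720-hour replacement periods, each surviving with the same exponential probability.

```python
import math

p = math.exp(-720 / 3000)     # one module survives its 720-hour service period
P_year = p ** 12              # twelve independent periods, no failure all year
print(round(p, 4), round(P_year, 3))  # 0.7866 0.056
```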
Joan is trying to decide which of two sales opportunities to take.
In the first, she makes three independent calls. Payoffs are $570, $525, and $465, with respective probabilities of 0.57, 0.41, and 0.35.
In the second, she makes eight independent calls, with probability of success on each call p=0.57. She realizes $150 profit on each successful sale.
Let X be the net profit on the first alternative and Y be the net gain on the second. Assume the pair is independent.
Which alternative offers the maximum possible gain?
Compare probabilities in the two schemes that total sales are at least $600, $900, $1000, $1100.
What is the probability the second exceeds the first— i.e., what is P(Y>X)?
X = 570I_{A} + 525I_{B} + 465I_{C} with [P(A) P(B) P(C)] = [0.57 0.41 0.35]. Y = 150S, where S∼ binomial (8, 0.57).
c = [570 525 465 0];
pm = minprob([0.57 0.41 0.35]);
canonic                % Distribution for X
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
Y = 150*[0:8];         % Distribution for Y
PY = ibinom(8,0.57,0:8);
icalc                  % Joint distribution
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
xmax = max(X)
xmax = 1560
ymax = max(Y)
ymax = 1200
k = [600 900 1000 1100];
px = zeros(1,4);
for i = 1:4
    px(i) = (X>=k(i))*PX';
end
py = zeros(1,4);
for i = 1:4
    py(i) = (Y>=k(i))*PY';
end
disp([px;py]')
    0.4131    0.7765
    0.4131    0.2560
    0.3514    0.0784
    0.0818    0.0111
M = u > t;
PM = total(M.*P)
PM = 0.5081            % P(Y>X)
Margaret considers five purchases in the amounts 5, 17, 21, 8, 15 dollars with respective probabilities 0.37, 0.22, 0.38, 0.81, 0.63. Anne contemplates six purchases in the amounts 8, 15, 12, 18, 15, 12 dollars, with respective probabilities 0.77, 0.52, 0.23, 0.41, 0.83, 0.58. Assume that all eleven possible purchases form an independent class.
What is the probability Anne spends at least twice as much as Margaret?
What is the probability Anne spends at least $30 more than Margaret?
cx = [5 17 21 8 15 0];
pmx = minprob(0.01*[37 22 38 81 63]);
cy = [8 15 12 18 15 12 0];
pmy = minprob(0.01*[77 52 23 41 83 58]);
[X,PX] = canonicf(cx,pmx);
[Y,PY] = canonicf(cy,pmy);
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
M1 = u >= 2*t;
PM1 = total(M1.*P)
PM1 = 0.3448
M2 = u - t >= 30;
PM2 = total(M2.*P)
PM2 = 0.2431
James is trying to decide which of two sales opportunities to take.
In the first, he makes three independent calls. Payoffs are $310, $380, and $350, with respective probabilities of 0.35, 0.41, and 0.57.
In the second, he makes eight independent calls, with probability of success on each call p=0.57. He realizes $100 profit on each successful sale.
Let X be the net profit on the first alternative and Y be the net gain on the second. Assume the pair is independent.
Which alternative offers the maximum possible gain?
What is the probability the second exceeds the first— i.e., what is P(Y>X)?
Compare probabilities in the two schemes that total sales are at least $600, $700, $750.
cx = [310 380 350 0];
pmx = minprob(0.01*[35 41 57]);
Y = 100*[0:8];
PY = ibinom(8,0.57,0:8);
canonic
Enter row vector of coefficients  cx
Enter row vector of minterm probabilities  pmx
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
xmax = max(X)
xmax = 1040
ymax = max(Y)
ymax = 800
PYgX = total((u>t).*P)
PYgX = 0.5081
k = [600 700 750];
px = zeros(1,3);
py = zeros(1,3);
for i = 1:3
    px(i) = (X>=k(i))*PX';
end
for i = 1:3
    py(i) = (Y>=k(i))*PY';
end
disp([px;py]')
    0.4131    0.2560
    0.2337    0.0784
    0.0818    0.0111
A residential College plans to raise money by selling “chances” on a board. There are two games:
Game 1: Pay $5 to play; win $20 with probability p_{1} = 0.05 (one in twenty)
Game 2: Pay $10 to play; win $30 with probability p_{2} = 0.2 (one in five)
Thirty chances are sold on Game 1 and fifty chances are sold on Game 2. If X and Y are the profits on the respective games, then

X = 30·5 – 20N_{1} = 150 – 20N_{1} and Y = 50·10 – 30N_{2} = 500 – 30N_{2}

where N_{1}, N_{2} are the numbers of winners on the respective games. It is reasonable to suppose N_{1}∼ binomial (30, 0.05) and N_{2}∼ binomial (50, 0.2). It is reasonable to suppose the pair {N_{1}, N_{2}} is independent, so that {X,Y} is independent. Determine the marginal distributions for X and Y, then use icalc to obtain the joint distribution and the calculating matrices. The total profit for the College is Z=X+Y. What is the probability the College will lose money? What is the probability the profit will be $400 or more, less than $200, between $200 and $450?
N1 = 0:30;
PN1 = ibinom(30,0.05,0:30);
x = 150 - 20*N1;
[X,PX] = csort(x,PN1);
N2 = 0:50;
PN2 = ibinom(50,0.2,0:50);
y = 500 - 30*N2;
[Y,PY] = csort(y,PN2);
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t + u;
Mlose = G < 0;
Mm400 = G >= 400;
Ml200 = G < 200;
M200_450 = (G>=200)&(G<=450);
Plose = total(Mlose.*P)
Plose = 3.5249e-04
Pm400 = total(Mm400.*P)
Pm400 = 0.1957
Pl200 = total(Ml200.*P)
Pl200 = 0.0828
P200_450 = total(M200_450.*P)
P200_450 = 0.8636
The class {X,Y,Z} of random variables is iid (independent, identically distributed) with common distribution

X = [-5 -1 3 4 7]  PX = 0.01*[15 20 30 25 10]
Let W=3X–4Y+2Z. Determine the distribution for W and from this determine P(W>0) and P(–20≤W≤10). Do this with icalc, then repeat with icalc3 and compare results.
Since icalc uses X and PX in its output, we avoid a renaming problem by using x and px for data vectors X and PX.
x = [-5 -1 3 4 7];
px = 0.01*[15 20 30 25 10];
icalc
Enter row matrix of X-values  3*x
Enter row matrix of Y-values  -4*x
Enter X probabilities  px
Enter Y probabilities  px
Use array operations on matrices X, Y, PX, PY, t, u, and P
a = t + u;
[V,PV] = csort(a,P);
icalc
Enter row matrix of X-values  V
Enter row matrix of Y-values  2*x
Enter X probabilities  PV
Enter Y probabilities  px
Use array operations on matrices X, Y, PX, PY, t, u, and P
b = t + u;
[W,PW] = csort(b,P);
P1 = (W>0)*PW'
P1 = 0.5300
P2 = ((-20<=W)&(W<=10))*PW'
P2 = 0.5514
icalc3                  % Alternate using icalc3
Enter row matrix of X-values  x
Enter row matrix of Y-values  x
Enter row matrix of Z-values  x
Enter X probabilities  px
Enter Y probabilities  px
Enter Z probabilities  px
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
a = 3*t - 4*u + 2*v;
[W,PW] = csort(a,P);
P1 = (W>0)*PW'
P1 = 0.5300
P2 = ((-20<=W)&(W<=10))*PW'
P2 = 0.5514
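The distribution for W can also be built by brute force over all 5³ outcome triples. This Python sketch mirrors what icalc3 does for an independent triple; the names `dist`, `P1`, and `P2` are our own.

```python
from itertools import product

x  = [-5, -1, 3, 4, 7]
px = [0.15, 0.20, 0.30, 0.25, 0.10]

# Distribution of W = 3X - 4Y + 2Z for an iid triple {X, Y, Z}:
# enumerate all triples, multiplying probabilities by independence.
dist = {}
for (a, pa), (b, pb), (c, pc) in product(zip(x, px), repeat=3):
    w = 3 * a - 4 * b + 2 * c
    dist[w] = dist.get(w, 0.0) + pa * pb * pc

P1 = sum(p for w, p in dist.items() if w > 0)
P2 = sum(p for w, p in dist.items() if -20 <= w <= 10)
# Matches the icalc/icalc3 results above: P1 = 0.5300, P2 = 0.5514
```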
The class {A, B, C, D, E, F} is independent; the respective probabilities for these events are 0.42, 0.27, 0.33, 0.47, 0.37, 0.41. Consider the simple random variables
X = 3I_A - 9I_B + 4I_C,  Y = -2I_D + 6I_E + 2I_F - 3,  and  Z = 2X - 3Y
Determine P(Y>X), P(Z>0), P(5≤Z≤25).
cx = [3 -9 4 0];
pmx = minprob(0.01*[42 27 33]);
cy = [-2 6 2 -3];
pmy = minprob(0.01*[47 37 41]);
[X,PX] = canonicf(cx,pmx);
[Y,PY] = canonicf(cy,pmy);
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
 Use array operations on matrices X, Y, PX, PY, t, u, and P
G = 2*t - 3*u;
[Z,PZ] = csort(G,P);
PYgX = total((u>t).*P)
PYgX = 0.3752
PZpos = (Z>0)*PZ'
PZpos = 0.5654
P5Z25 = ((5<=Z)&(Z<=25))*PZ'
P5Z25 = 0.4745
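The canonicf/minprob machinery can be mimicked by enumerating the minterms of each independent class directly. In this pure-Python sketch, simple_rv is a helper defined here (not a toolkit function) that builds the distribution of a linear combination of indicator functions:

```python
from itertools import product

def simple_rv(coefs, probs, const=0):
    # distribution of sum(c_i * I_{E_i}) + const for independent events E_i
    dist = {}
    for bits in product([0, 1], repeat=len(coefs)):
        p = 1.0
        for b, pe in zip(bits, probs):
            p *= pe if b else 1.0 - pe
        v = const + sum(c * b for c, b in zip(coefs, bits))
        dist[v] = dist.get(v, 0.0) + p
    return dist

px = simple_rv([3, -9, 4], [0.42, 0.27, 0.33])       # X = 3I_A - 9I_B + 4I_C
py = simple_rv([-2, 6, 2], [0.47, 0.37, 0.41], -3)   # Y = -2I_D + 6I_E + 2I_F - 3

p_ygx = sum(p * q for x, p in px.items() for y, q in py.items() if y > x)
pz = {}
for x, p in px.items():
    for y, q in py.items():
        z = 2 * x - 3 * y
        pz[z] = pz.get(z, 0.0) + p * q
p_zpos = sum(p for z, p in pz.items() if z > 0)          # about 0.5654
p_z525 = sum(p for z, p in pz.items() if 5 <= z <= 25)   # about 0.4745
```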
Two players, Ronald and Mike, throw a pair of dice 30 times each. What is the probability Mike throws more “sevens” than does Ronald?
P = (ibinom(30,1/6,0:29))*(cbinom(30,1/6,1:30))'
P = 0.4307
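The one-line MATLAB solution computes the sum over k of P(Ronald throws k sevens) times P(Mike throws more than k). The same sum in a pure-Python sketch (binom_pmf is a helper defined here):

```python
from math import comb

def binom_pmf(n, p):
    # pmf of binomial(n, p) as a list indexed by k -- helper defined here
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

pmf = binom_pmf(30, 1/6)   # number of "sevens" in 30 throws of a pair of dice
# P(Mike > Ronald) = sum_k P(Ronald = k) * P(Mike >= k + 1)
p_more = sum(pmf[k] * sum(pmf[k + 1:]) for k in range(30))   # about 0.4307
```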
A class has fifteen boys and fifteen girls. They pair up and each tosses a coin 20 times. What is the probability that at least eight girls throw more heads than their partners?
pg = (ibinom(20,1/2,0:19))*(cbinom(20,1/2,1:20))'
pg = 0.4373           % Probability each girl throws more
P = cbinom(15,pg,8)
P = 0.3100            % Probability eight or more girls throw more
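This is a two-stage calculation: first the probability pg that one girl out-throws her partner, then a binomial(15, pg) tail for at least eight such successes. A pure-Python sketch of both stages (binom_pmf is a helper defined here):

```python
from math import comb

def binom_pmf(n, p):
    # pmf of binomial(n, p) as a list indexed by k -- helper defined here
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

pmf = binom_pmf(20, 0.5)
# probability a given girl throws more heads than her partner
pg = sum(pmf[k] * sum(pmf[k + 1:]) for k in range(20))   # about 0.4373
# number of girls who succeed is binomial(15, pg); take the tail from 8
p8 = sum(comb(15, k) * pg**k * (1 - pg)**(15 - k) for k in range(8, 16))
```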
Glenn makes five sales calls, with probabilities 0.37, 0.52, 0.48, 0.71, 0.63, of success on the respective calls. Margaret makes four sales calls with probabilities 0.77, 0.82, 0.75, 0.91, of success on the respective calls. Assume that all nine events form an independent class. If Glenn realizes a profit of $18.00 on each sale and Margaret earns $20.00 on each sale, what is the probability Margaret's gain is at least $10.00 more than Glenn's?
cg = [18*ones(1,5) 0];
cm = [20*ones(1,4) 0];
pmg = minprob(0.01*[37 52 48 71 63]);
pmm = minprob(0.01*[77 82 75 91]);
[G,PG] = canonicf(cg,pmg);
[M,PM] = canonicf(cm,pmm);
icalc
Enter row matrix of X-values  G
Enter row matrix of Y-values  M
Enter X probabilities  PG
Enter Y probabilities  PM
 Use array operations on matrices X, Y, PX, PY, t, u, and P
H = u-t>=10;
p1 = total(H.*P)
p1 = 0.5197
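Because the success probabilities differ from call to call, each gain is a fixed amount times a sum of independent but non-identically distributed indicators. A pure-Python cross-check by minterm enumeration (gains is a helper defined here, not a toolkit function):

```python
from itertools import product

def gains(amount, probs):
    # distribution of amount * (number of successful calls),
    # calls independent with the given success probabilities
    dist = {}
    for bits in product([0, 1], repeat=len(probs)):
        p = 1.0
        for b, pe in zip(bits, probs):
            p *= pe if b else 1.0 - pe
        v = amount * sum(bits)
        dist[v] = dist.get(v, 0.0) + p
    return dist

pg = gains(18, [0.37, 0.52, 0.48, 0.71, 0.63])   # Glenn, $18 per sale
pm = gains(20, [0.77, 0.82, 0.75, 0.91])         # Margaret, $20 per sale
# P(Margaret's gain exceeds Glenn's by at least $10)
p1 = sum(p * q for g, p in pg.items() for m, q in pm.items() if m - g >= 10)
```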
Mike and Harry have a basketball shooting contest.
Mike shoots 10 ordinary free throws, worth two points each, with probability 0.75 of success on each shot.
Harry shoots 12 “three point” shots, with probability 0.40 of success on each shot.
Let X, Y be the number of points scored by Mike and Harry, respectively. Determine P(X≥15), P(Y≥15), and P(X≥Y).
X = 2*[0:10];
PX = ibinom(10,0.75,0:10);
Y = 3*[0:12];
PY = ibinom(12,0.40,0:12);
icalc
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter X probabilities  PX
Enter Y probabilities  PY
 Use array operations on matrices X, Y, PX, PY, t, u, and P
PX15 = (X>=15)*PX'
PX15 = 0.5256
PY15 = (Y>=15)*PY'
PY15 = 0.5618
G = t>=u;
PG = total(G.*P)
PG = 0.5811
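Each score is a point value times a binomial count, so the distributions can be written down directly. A pure-Python cross-check (binom_pmf is a helper defined here):

```python
from math import comb

def binom_pmf(n, p):
    # pmf of binomial(n, p) as a list indexed by k -- helper defined here
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

px = {2 * k: p for k, p in enumerate(binom_pmf(10, 0.75))}  # Mike: 2 pts/shot
py = {3 * k: p for k, p in enumerate(binom_pmf(12, 0.40))}  # Harry: 3 pts/shot

p_x15 = sum(p for x, p in px.items() if x >= 15)   # about 0.5256
p_y15 = sum(p for y, p in py.items() if y >= 15)   # about 0.5618
p_xgey = sum(p * q for x, p in px.items() for y, q in py.items() if x >= y)
```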
Martha has the choice of two games.
Game 1: Pay ten dollars for each “play.” If she wins, she receives $20, for a net gain of $10 on the play; otherwise, she loses her $10. The probability of a win is 1/2, so the game is “fair.”
Game 2: Pay five dollars to play; receive $15 for a win. The probability of a win on any play is 1/3.
Martha has $100 to bet. She is trying to decide whether to play Game 1 ten times or Game 2 twenty times. Let W1 and W2 be the respective net winnings (payoff minus fee to play).
Determine P(W2≥W1).
Compare the two games further by calculating P(W1>0) and P(W2>0).
Which game seems preferable?
W1 = 20*[0:10] - 100;
PW1 = ibinom(10,1/2,0:10);
W2 = 15*[0:20] - 100;
PW2 = ibinom(20,1/3,0:20);
P1pos = (W1>0)*PW1'
P1pos = 0.3770
P2pos = (W2>0)*PW2'
P2pos = 0.5207
icalc
Enter row matrix of X-values  W1
Enter row matrix of Y-values  W2
Enter X probabilities  PW1
Enter Y probabilities  PW2
 Use array operations on matrices X, Y, PX, PY, t, u, and P
G = u >= t;
PG = total(G.*P)
PG = 0.5182
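Each net winnings is an affine function of a binomial count (payoff per win times the number of wins, minus the total fee), so both distributions and the comparison can be cross-checked directly. A pure-Python sketch (binom_pmf is a helper defined here):

```python
from math import comb

def binom_pmf(n, p):
    # pmf of binomial(n, p) as a list indexed by k -- helper defined here
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# W1 = 20*N1 - 100 with N1 ~ binomial(10, 1/2)  (Game 1, ten plays)
# W2 = 15*N2 - 100 with N2 ~ binomial(20, 1/3)  (Game 2, twenty plays)
pw1 = {20 * k - 100: p for k, p in enumerate(binom_pmf(10, 1/2))}
pw2 = {15 * k - 100: p for k, p in enumerate(binom_pmf(20, 1/3))}

p1pos = sum(p for w, p in pw1.items() if w > 0)   # about 0.3770
p2pos = sum(p for w, p in pw2.items() if w > 0)   # about 0.5207
p21 = sum(p * q for w1, p in pw1.items() for w2, q in pw2.items() if w2 >= w1)
```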
Jim and Bill of the men's basketball team challenge women players Mary and Ellen to a free throw contest. Each takes five free throws. Make the usual independence assumptions. Jim, Bill, Mary, and Ellen have respective probabilities p=0.82,0.87,0.80, and 0.85 of making each shot tried. What is the probability Mary and Ellen make a total number of free throws at least as great as the total made by the guys?
x = 0:5;
PJ = ibinom(5,0.82,x);
PB = ibinom(5,0.87,x);
PM = ibinom(5,0.80,x);
PE = ibinom(5,0.85,x);
icalc
Enter row matrix of X-values  x
Enter row matrix of Y-values  x
Enter X probabilities  PJ
Enter Y probabilities  PB
 Use array operations on matrices X, Y, PX, PY, t, u, and P
H = t+u;
[Tm,Pm] = csort(H,P);
icalc
Enter row matrix of X-values  x
Enter row matrix of Y-values  x
Enter X probabilities  PM
Enter Y probabilities  PE
 Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t+u;
[Tw,Pw] = csort(G,P);
icalc
Enter row matrix of X-values  Tm
Enter row matrix of Y-values  Tw
Enter X probabilities  Pm
Enter Y probabilities  Pw
 Use array operations on matrices X, Y, PX, PY, t, u, and P
Gw = u>=t;
PGw = total(Gw.*P)
PGw = 0.5746
icalc4                  % Alternate using icalc4
Enter row matrix of X-values  x
Enter row matrix of Y-values  x
Enter row matrix of Z-values  x
Enter row matrix of W-values  x
Enter X probabilities  PJ
Enter Y probabilities  PB
Enter Z probabilities  PM
Enter W probabilities  PE
 Use array operations on matrices X, Y, Z, W, PX, PY, PZ, PW, t, u, v, w, and P
H = v+w >= t+u;
PH = total(H.*P)
PH = 0.5746
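The strategy is to convolve the two men's counts into a total Tm, the two women's counts into a total Tw, and then compare the independent totals. A pure-Python sketch of the same three-step computation (binom_pmf and convolve are helpers defined here):

```python
from math import comb

def binom_pmf(n, p):
    # pmf of binomial(n, p) as a list indexed by k -- helper defined here
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def convolve(pa, pb):
    # distribution of the sum of two independent integer-valued counts
    out = [0.0] * (len(pa) + len(pb) - 1)
    for i, p in enumerate(pa):
        for j, q in enumerate(pb):
            out[i + j] += p * q
    return out

men = convolve(binom_pmf(5, 0.82), binom_pmf(5, 0.87))    # Jim + Bill
women = convolve(binom_pmf(5, 0.80), binom_pmf(5, 0.85))  # Mary + Ellen
# P(women's total >= men's total) = sum_w P(Tw = w) * P(Tm <= w)
p_w_ge_m = sum(women[w] * sum(men[:w + 1]) for w in range(len(women)))
```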