
# Chapter 10. Functions of Random Variables

## 10.1. Functions of a Random Variable^{*}

**Introduction**

Frequently, we observe a value of some random variable, but are really interested
in a value derived from this by a function rule. If *X* is a random variable and *g* is
a reasonable function (technically, a *Borel function*), then *Z*=*g*(*X*) is a new random
variable which has the value *g*(*t*) for any *ω* such that *X*(*ω*)=*t*.
Thus *Z*(*ω*)=*g*(*X*(*ω*)).

### The problem; an approach

We consider, first, functions of a single random variable. A wide variety of functions are utilized in practice.

In a quality control check on a production line for ball bearings it may be easier to weigh the
balls than measure the diameters. If we can assume true spherical shape and *w*
is the weight, then diameter is *k**w*^{1/3}, where *k* is a factor depending upon the
formula for the volume of a sphere, the units of measurement, and the density of the steel.
Thus, if *X* is the weight of the sampled ball, the desired random variable is *D*=*k**X*^{1/3}.

The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at $20 each (regardless of the number of individual buyers). Additional tickets are available according to the following schedule:

11-20, $18 each

21-30, $16 each

31-50, $15 each

51-100, $13 each

If the number of purchasers is a random variable *X*, the total cost (in dollars) is
a random quantity *Z*=*g*(*X*) described by

*g*(*X*) = 200 + 18 *I*_{M1}(*X*)(*X* – 10) + (16 – 18) *I*_{M2}(*X*)(*X* – 20) + (15 – 16) *I*_{M3}(*X*)(*X* – 30) + (13 – 15) *I*_{M4}(*X*)(*X* – 50)

where *M*_{1} = [10, ∞), *M*_{2} = [20, ∞), *M*_{3} = [30, ∞), and *M*_{4} = [50, ∞).

The function rule is more complicated than in Example 10.1, but the essential problem is the same.
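Since the schedule arithmetic is easy to get wrong, here is a small Python sketch (not part of the original text; the function names are ours) that evaluates the indicator-function rule and checks it against a direct band-by-band tally of the price schedule:

```python
def g(x):
    """Total cost (dollars) for x purchasers, written with the indicator terms.

    Mirrors g(X) = 200 + 18*I[x>=10]*(x-10) + (16-18)*I[x>=20]*(x-20)
                 + (15-16)*I[x>=30]*(x-30) + (13-15)*I[x>=50]*(x-50).
    For x >= 0, I_[c,inf)(x)*(x-c) equals max(x - c, 0).
    """
    cost = 200.0                        # ten tickets at $20, regardless of buyers
    cost += 18 * max(x - 10, 0)         # tickets beyond 10 priced at $18 ...
    cost += (16 - 18) * max(x - 20, 0)  # ... corrected to $16 beyond 20
    cost += (15 - 16) * max(x - 30, 0)  # ... corrected to $15 beyond 30
    cost += (13 - 15) * max(x - 50, 0)  # ... corrected to $13 beyond 50
    return cost

def g_direct(x):
    """Direct tally from the schedule, for comparison."""
    total = 200.0
    for lo, hi, price in [(10, 20, 18), (20, 30, 16), (30, 50, 15), (50, 100, 13)]:
        total += price * max(min(x, hi) - lo, 0)
    return total

assert all(g(x) == g_direct(x) for x in range(1, 101))
print(g(30), g(50))   # 540.0 840.0
```

The two computations agree over the whole schedule, confirming that the correction terms encode the price breaks.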

**The problem**

If *X* is a random variable, then *Z*=*g*(*X*) is a new random variable.
Suppose we have the distribution for *X*. How can we determine *P*(*Z*∈*M*), the probability
*Z* takes a value in the set *M*?

**An approach to a solution**

We consider two equivalent approaches.

To find *P*(*X* ∈ *M*):

- *Mapping approach*. Simply find the amount of probability mass mapped into the set *M* by the random variable *X*. In the absolutely continuous case, calculate ∫_{M} *f*_{X}. In the discrete case, identify those values *t*_{i} of *X* which are in the set *M* and add the associated probabilities.
- *Discrete alternative*. Consider each value *t*_{i} of *X*. Select those which meet the defining conditions for *M* and add the associated probabilities. This is the approach we use in the MATLAB calculations. Note that it is not necessary to describe geometrically the set *M*; merely use the defining conditions.

To find *P*(*g*(*X*) ∈ *M*):

- *Mapping approach*. Determine the set *N* of all those *t* which are mapped into *M* by the function *g*. Now if *X*(*ω*) ∈ *N*, then *g*(*X*(*ω*)) ∈ *M*, and if *g*(*X*(*ω*)) ∈ *M*, then *X*(*ω*) ∈ *N*. Hence

  {*ω*: *g*(*X*(*ω*)) ∈ *M*} = {*ω*: *X*(*ω*) ∈ *N*}

  Since these are the same event, they must have the same probability. Once *N* is identified, determine *P*(*X* ∈ *N*) in the usual manner (see part a, above).
- *Discrete alternative*. For each possible value *t*_{i} of *X*, determine whether *g*(*t*_{i}) meets the defining condition for *M*. Select those *t*_{i} which do and add the associated probabilities.


*Remark*. The set *N* in the mapping approach is called the *inverse image*, *N* = *g*^{–1}(*M*).

Suppose *X* has values –2, 0, 1, 3, 6, with respective probabilities 0.2, 0.1, 0.2, 0.3, 0.2.

Consider *Z*=*g*(*X*)=(*X*+1)(*X*–4). Determine *P*(*Z*>0).

SOLUTION

*First solution*. The mapping approach

*g*(*t*)=(*t*+1)(*t*–4). *N*={*t*:*g*(*t*)>0} is the set of points to the left of –1 or
to the right of 4. The *X*-values –2 and 6 lie in this set. Hence

*P*(*g*(*X*) > 0) = *P*(*X* = –2) + *P*(*X* = 6) = 0.2 + 0.2 = 0.4

*Second solution*. The discrete alternative

| *X* | –2 | 0 | 1 | 3 | 6 |
|---|---|---|---|---|---|
| *P*(*X* = *t*) | 0.2 | 0.1 | 0.2 | 0.3 | 0.2 |
| *Z* | 6 | –4 | –6 | –4 | 14 |
| *Z* > 0 | 1 | 0 | 0 | 0 | 1 |

Picking out and adding the indicated probabilities, we have

*P*(*Z* > 0) = 0.2 + 0.2 = 0.4

In this case (and often for “hand calculations”) the mapping approach requires less calculation. However, for MATLAB calculations (as we show below), the discrete alternative is more readily implemented.
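The discrete alternative is easy to carry out in any array-capable language. Here is a rough Python equivalent of the MATLAB steps (list comprehensions standing in for MATLAB array operations; not part of the original text):

```python
# Distribution for X and the function g(t) = (t+1)(t-4)
X  = [-2, 0, 1, 3, 6]
PX = [0.2, 0.1, 0.2, 0.3, 0.2]

G = [(t + 1) * (t - 4) for t in X]      # array operation: values of Z = g(X)
M = [1 if z > 0 else 0 for z in G]      # relational operation: condition Z > 0
PM = sum(m * p for m, p in zip(M, PX))  # "dot product" of M and PX

print(G)              # [6, -4, -6, -4, 14]
print(round(PM, 2))   # 0.4
```

Only the defining condition `z > 0` is used; the set *N* is never described geometrically.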

Suppose *X*∼ uniform [–3,7]. Then *f*_{X}(*t*)=0.1,–3≤*t*≤7 (and zero
elsewhere). Let

*Z* = *g*(*X*) = (*X* + 1)(*X* – 4)

Determine *P*(*Z*>0).

SOLUTION

First we determine *N* = {*t*: *g*(*t*) > 0}. As in Example 10.3, *g*(*t*) = (*t* + 1)(*t* – 4) > 0
for *t* < –1 or *t* > 4. Because of the uniform distribution, the integral of the density
over any subinterval of [–3, 7] is 0.1 times the length of that subinterval. Thus, the
desired probability is

*P*(*g*(*X*) > 0) = 0.1[(–1 – (–3)) + (7 – 4)] = 0.5
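If MATLAB is not at hand, the same uniform-density calculation can be checked numerically. The following Python sketch (a midpoint-rule stand-in, not from the original text) integrates the density 0.1 over the set *N*:

```python
# X ~ uniform [-3, 7]: f_X(t) = 0.1 on the interval.
# Estimate P((X+1)(X-4) > 0) by summing 0.1 * dt over midpoints where g(t) > 0.
n, a, b = 100000, -3.0, 7.0
dt = (b - a) / n
p = 0.0
for k in range(n):
    t = a + (k + 0.5) * dt      # midpoint of the k-th subinterval
    if (t + 1) * (t - 4) > 0:   # defining condition for N
        p += 0.1 * dt

print(round(p, 4))   # 0.5, matching the exact calculation
```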

We consider, next, some important examples.

To show that if *X* ∼ N(*μ*, *σ*^{2}) then *Z* = (*X* – *μ*)/*σ* ∼ N(0, 1).

VERIFICATION

We wish to show that the density function for *Z* is *f*_{Z}(*v*) = (2*π*)^{–1/2}*e*^{–*v*²/2}.

Now *Z* ≤ *v* iff *X* ≤ *σv* + *μ*. Hence, for *M* = (–∞, *v*], the inverse image is *N* = (–∞, *σv* + *μ*], so that

*F*_{Z}(*v*) = *P*(*Z* ≤ *v*) = *P*(*Z* ∈ *M*) = *P*(*X* ∈ *N*) = *P*(*X* ≤ *σv* + *μ*) = *F*_{X}(*σv* + *μ*)

Since the density is the derivative of the distribution function,

*f*_{Z}(*v*) = *F*_{Z}^{'}(*v*) = *F*_{X}^{'}(*σv* + *μ*)·*σ* = *σ* *f*_{X}(*σv* + *μ*)

Thus

*f*_{Z}(*v*) = *σ* · (1/(*σ*√(2*π*))) exp(–(*σv* + *μ* – *μ*)²/(2*σ*²)) = (1/√(2*π*)) *e*^{–*v*²/2}

We conclude that *Z* ∼ N(0, 1).

Suppose *X* has distribution function *F*_{X}. If it is absolutely continuous, the corresponding density is *f*_{X}. Consider *Z* = *aX* + *b* (*a* ≠ 0). Here *g*(*t*) = *at* + *b*, an affine function (linear plus a constant). Determine the distribution function for *Z* (and the density in the absolutely continuous case).

SOLUTION

*F*_{Z}(*v*) = *P*(*Z* ≤ *v*) = *P*(*aX* + *b* ≤ *v*)

There are two cases:

- *a* > 0: *F*_{Z}(*v*) = *P*(*X* ≤ (*v* – *b*)/*a*) = *F*_{X}((*v* – *b*)/*a*)
- *a* < 0: *F*_{Z}(*v*) = *P*(*X* ≥ (*v* – *b*)/*a*), so that in the absolutely continuous case *F*_{Z}(*v*) = 1 – *F*_{X}((*v* – *b*)/*a*)

For the absolutely continuous case, *f*_{Z}(*v*) = *F*_{Z}^{'}(*v*), and by differentiation

*f*_{Z}(*v*) = (1/*a*) *f*_{X}((*v* – *b*)/*a*) for *a* > 0

*f*_{Z}(*v*) = –(1/*a*) *f*_{X}((*v* – *b*)/*a*) for *a* < 0

Since for *a* < 0, –*a* = |*a*|, the two cases may be combined into one formula:

*f*_{Z}(*v*) = (1/|*a*|) *f*_{X}((*v* – *b*)/*a*)
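As a numerical sanity check on the combined formula, the Python sketch below (with hypothetical test values *a* = –2, *b* = 1, *v* = 0.7; not part of the original text) compares *f*_{Z} from the formula with a finite-difference derivative of *F*_{Z} for a standard normal *X*:

```python
import math

# Standard normal density and CDF (via the error function), used as f_X and F_X
phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))

a, b, v = -2.0, 1.0, 0.7   # arbitrary affine map Z = aX + b with a < 0

# Combined formula: f_Z(v) = (1/|a|) f_X((v-b)/a)
f_formula = phi((v - b) / a) / abs(a)

# Numerical derivative of F_Z(v) = P(aX + b <= v) = 1 - Phi((v-b)/a) for a < 0
h = 1e-5
F = lambda w: 1 - Phi((w - b) / a)
f_numeric = (F(v + h) - F(v - h)) / (2 * h)

assert abs(f_formula - f_numeric) < 1e-6
print(round(f_formula, 6))
```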

Suppose *X* ∼ N(*μ*, *σ*^{2}). Show that *Z* = *aX* + *b* (*a* ≠ 0) is N(*aμ* + *b*, *a*^{2}*σ*^{2}).

VERIFICATION

Use of the result of Example 10.6 on affine functions shows that

*f*_{Z}(*v*) = (1/|*a*|) *f*_{X}((*v* – *b*)/*a*) = (1/(|*a*|*σ*√(2*π*))) exp(–(*v* – (*aμ* + *b*))²/(2*a*²*σ*²))

which is the density for N(*aμ* + *b*, *a*^{2}*σ*^{2}).

Suppose *X* ≥ 0 and *Z* = *g*(*X*) = *X*^{1/a} for *a* > 1. Since for *t* ≥ 0, *t*^{1/a}
is increasing, we have 0 ≤ *t*^{1/a} ≤ *v* iff 0 ≤ *t* ≤ *v*^{a}. Thus

*F*_{Z}(*v*) = *P*(*Z* ≤ *v*) = *P*(*X* ≤ *v*^{a}) = *F*_{X}(*v*^{a})

In the absolutely continuous case

*f*_{Z}(*v*) = *F*_{Z}^{'}(*v*) = *a* *v*^{a–1} *f*_{X}(*v*^{a})

Suppose *X* ∼ exponential (*λ*). Then *Z* = *X*^{1/a} ∼ Weibull (*a*, *λ*, 0).

According to the result of Example 10.8,

*F*_{Z}(*v*) = *F*_{X}(*v*^{a}) = 1 – *e*^{–*λv*^{a}}

which is the distribution function for *Z* ∼ Weibull (*a*, *λ*, 0).


If *X* is a random variable, a simple function approximation may be constructed (see
Distribution Approximations). We limit our discussion to the bounded case, in which the range
of *X* is limited to a bounded interval *I* = [*a*, *b*]. Suppose *I* is partitioned into
*n* subintervals by points *t*_{i}, 1 ≤ *i* ≤ *n* – 1, with *a* = *t*_{0} and *b* = *t*_{n}.
Let *M*_{i} = [*t*_{i–1}, *t*_{i}) be the *i*th subinterval, 1 ≤ *i* ≤ *n* – 1, and
*M*_{n} = [*t*_{n–1}, *t*_{n}]. Let *E*_{i} = *X*^{–1}(*M*_{i}) be the set of points mapped into
*M*_{i} by *X*. Then the *E*_{i} form a partition of the basic space *Ω*. For the given
subdivision, we form a simple random variable *X*_{s} as follows. In each subinterval, pick a
point *s*_{i}, *t*_{i–1} ≤ *s*_{i} < *t*_{i}. The simple random variable

*X*_{s} = ∑_{i=1}^{n} *s*_{i} *I*_{Ei}

approximates *X* to within the length of the largest subinterval *M*_{i}. Now
*I*_{Ei} = *I*_{Mi}(*X*), since *I*_{Ei}(*ω*) = 1 iff *X*(*ω*) ∈ *M*_{i} iff
*I*_{Mi}(*X*(*ω*)) = 1. We may thus write

*X*_{s} = ∑_{i=1}^{n} *s*_{i} *I*_{Mi}(*X*), a function of *X*

### Use of MATLAB on simple random variables

For simple random variables, we use the discrete alternative approach, since this may be
implemented easily with MATLAB. Suppose the distribution for *X* is
expressed in the row vectors **X** and **PX**.

- We perform *array operations* on vector **X** to obtain *G* = [*g*(*t*_{1}) *g*(*t*_{2}) ⋯ *g*(*t*_{n})].
- We use *relational* and *logical* operations on *G* to obtain a matrix *M* which has ones for those *t*_{i} (values of *X*) such that *g*(*t*_{i}) satisfies the desired condition (and zeros elsewhere).
- The zero-one matrix *M* is used to select the corresponding *P*(*X* = *t*_{i}) and sum them by taking the dot product of *M* and **PX**.

```matlab
X = -5:10;                     % Values of X
PX = ibinom(15,0.6,0:15);      % Probabilities for X
G = (X + 6).*(X - 1).*(X - 8); % Array operations on X matrix to get G = g(X)
M = (G > - 100)&(G < 130);     % Relational and logical operations on G
PM = M*PX'                     % Sum of probabilities for selected values
PM = 0.4800
disp([X;G;M;PX]')              % Display of various matrices (as columns)
   -5.0000   78.0000    1.0000    0.0000
   -4.0000  120.0000    1.0000    0.0000
   -3.0000  132.0000         0    0.0003
   -2.0000  120.0000    1.0000    0.0016
   -1.0000   90.0000    1.0000    0.0074
         0   48.0000    1.0000    0.0245
    1.0000         0    1.0000    0.0612
    2.0000  -48.0000    1.0000    0.1181
    3.0000  -90.0000    1.0000    0.1771
    4.0000 -120.0000         0    0.2066
    5.0000 -132.0000         0    0.1859
    6.0000 -120.0000         0    0.1268
    7.0000  -78.0000    1.0000    0.0634
    8.0000         0    1.0000    0.0219
    9.0000  120.0000    1.0000    0.0047
   10.0000  288.0000         0    0.0005
[Z,PZ] = csort(G,PX);          % Sorting and consolidating to obtain
disp([Z;PZ]')                  % the distribution for Z = g(X)
 -132.0000    0.1859
 -120.0000    0.3334
  -90.0000    0.1771
  -78.0000    0.0634
  -48.0000    0.1181
         0    0.0832
   48.0000    0.0245
   78.0000    0.0000
   90.0000    0.0074
  120.0000    0.0064
  132.0000    0.0003
  288.0000    0.0005
P1 = (G<-120)*PX'              % Further calculation using G, PX
P1 = 0.1859
p1 = (Z<-120)*PZ'              % Alternate using Z, PZ
p1 = 0.1859
```
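A rough Python equivalent of this MATLAB run (not part of the original text: `math.comb` stands in for `ibinom`, a dictionary plays the role of `csort`):

```python
import math

# ibinom(15, 0.6, 0:15) analogue: binomial(15, 0.6) probabilities
n, p = 15, 0.6
X  = list(range(-5, 11))                               # values of X
PX = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

G = [(x + 6) * (x - 1) * (x - 8) for x in X]           # G = g(X)
M = [(-100 < g < 130) for g in G]                      # relational/logical ops
PM = sum(px for px, m in zip(PX, M) if m)              # M * PX'
print(round(PM, 4))   # 0.48, matching the MATLAB value 0.4800

# csort analogue: sort distinct values of G and consolidate probabilities
dist = {}
for g, px in zip(G, PX):
    dist[g] = dist.get(g, 0.0) + px
Z, PZ = zip(*sorted(dist.items()))
p1 = sum(pz for z, pz in zip(Z, PZ) if z < -120)       # P(Z < -120)
print(round(p1, 4))   # 0.1859
```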

*X* = 10*I*_{A} + 18*I*_{B} + 10*I*_{C} with {*A*, *B*, *C*} independent and *P* = [0.6 0.3 0.5].

We calculate the distribution for *X*, then determine the distribution for

*Z* = *X*^{1/2} – *X* + 50

```matlab
c = [10 18 10 0];
pm = minprob(0.1*[6 3 5]);
canonic
 Enter row vector of coefficients  c
 Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
         0    0.1400
   10.0000    0.3500
   18.0000    0.0600
   20.0000    0.2100
   28.0000    0.1500
   38.0000    0.0900
G = sqrt(X) - X + 50;   % Formation of G matrix
[Z,PZ] = csort(G,PX);   % Sorts distinct values of g(X)
disp([Z;PZ]')           % consolidates probabilities
   18.1644    0.0900
   27.2915    0.1500
   34.4721    0.2100
   36.2426    0.0600
   43.1623    0.3500
   50.0000    0.1400
M = (Z < 20)|(Z >= 40)  % Direct use of Z distribution
M  =  1     0     0     0     1     1
PZM = M*PZ'
PZM = 0.5800
```

*Remark*. Note that with the m-function csort, we may name the output as desired.

```matlab
H = 2*X.^2 - 3*X + 1;
[W,PW] = csort(H,PX)
W  =     1   171   595   741  1485  2775
PW =  0.1400  0.3500  0.0600  0.2100  0.1500  0.0900
```

Suppose *X* has density function *f*_{X}(*t*) = (3*t*^{2} + 2*t*)/2 for
0 ≤ *t* ≤ 1. Then *F*_{X}(*t*) = (*t*^{3} + *t*^{2})/2 on that interval. Let
*Z* = *X*^{1/2}. We may use the approximation m-procedure
tappr to obtain an approximate discrete distribution. Then we work with the
approximating random variable as a simple random variable. Suppose we want
*P*(*Z* ≤ 0.8). Now *Z* ≤ 0.8 iff *X* ≤ 0.8^{2} = 0.64. The desired
probability may be calculated to be

*P*(*Z* ≤ 0.8) = *F*_{X}(0.64) = (0.64^{3} + 0.64^{2})/2 = 0.3359

Using the approximation procedure, we have

```matlab
tappr
Enter matrix [a b] of x-range endpoints  [0 1]
Enter number of x approximation points  200
Enter density as a function of t  (3*t.^2 + 2*t)/2
Use row matrices X and PX as in the simple case
G = X.^(1/2);
M = G <= 0.8;
PM = M*PX'
PM = 0.3359      % Agrees quite closely with the theoretical
```
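A Python stand-in for the tappr approximation (the midpoint grid is our construction, not the m-procedure itself):

```python
# tappr analogue: discretize f_X(t) = (3t^2 + 2t)/2 on [0, 1] at n midpoints
n = 200
dt = 1.0 / n
X  = [(k + 0.5) * dt for k in range(n)]
PX = [((3 * t * t + 2 * t) / 2) * dt for t in X]      # cell probabilities

G = [t ** 0.5 for t in X]                             # G = X^(1/2)
PM = sum(px for gval, px in zip(G, PX) if gval <= 0.8)  # P(Z <= 0.8)

exact = (0.64 ** 3 + 0.64 ** 2) / 2                   # F_X(0.64) = 0.335872
print(round(PM, 4), round(exact, 4))
```

As with tappr, accuracy improves with the number of approximation points.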

## 10.2. Function of Random Vectors^{*}

**Introduction**

The general mapping approach for a single random variable and the discrete alternative extends to functions of more than one variable. It is convenient to consider the case of two random variables, considered jointly. Extensions to more than two random variables are made similarly, although the details are more complicated.

### The general approach extended to a pair

Consider a pair {*X*,*Y*} having joint distribution on the plane. The approach is analogous
to that for a single random variable with distribution on the line.

To find *P*((*X*, *Y*) ∈ *Q*):

- *Mapping approach*. Simply find the amount of probability mass mapped into the set *Q* on the plane by the random vector (*X*, *Y*). In the absolutely continuous case, calculate ∫∫_{Q} *f*_{XY}. In the discrete case, identify those vector values (*t*_{i}, *u*_{j}) of (*X*, *Y*) which are in the set *Q* and add the associated probabilities.
- *Discrete alternative*. Consider each vector value (*t*_{i}, *u*_{j}) of (*X*, *Y*). Select those which meet the defining conditions for *Q* and add the associated probabilities. This is the approach we use in the MATLAB calculations. It does not require that we describe geometrically the region *Q*.

To find *P*(*g*(*X*, *Y*) ∈ *M*). Here *g* is real valued and *M* is a subset of the real line.

- *Mapping approach*. Determine the set *Q* of all those (*t*, *u*) which are mapped into *M* by the function *g*. Now

  {*ω*: *g*(*X*(*ω*), *Y*(*ω*)) ∈ *M*} = {*ω*: (*X*(*ω*), *Y*(*ω*)) ∈ *Q*}

  Since these are the same event, they must have the same probability. Once *Q* is identified on the plane, determine *P*((*X*, *Y*) ∈ *Q*) in the usual manner (see part a, above).
- *Discrete alternative*. For each possible vector value (*t*_{i}, *u*_{j}) of (*X*, *Y*), determine whether *g*(*t*_{i}, *u*_{j}) meets the defining condition for *M*. Select those which do and add the associated probabilities.

We illustrate the mapping approach in the absolutely continuous case. A key element in the
approach is finding the set *Q* on the plane such that *g*(*X*, *Y*) ∈ *M* iff (*X*, *Y*) ∈ *Q*. The desired probability is obtained by integrating *f*_{XY} over *Q*.

The pair {*X*, *Y*} has joint density *f*_{XY}(*t*, *u*) = (6/37)(*t* + 2*u*)
on the region bounded by *t* = 0, *t* = 2, *u* = 0, *u* = max{1, *t*} (see Figure 1).
Determine *P*(*Y* ≤ *X*) = *P*(*X* – *Y* ≥ 0). Here *g*(*t*, *u*) = *t* – *u* and
*M* = [0, ∞). Now *Q* = {(*t*, *u*): *t* – *u* ≥ 0} = {(*t*, *u*): *u* ≤ *t*}, which is the
region on the plane on or below the line *u* = *t*. Examination of the figure shows that for
this region, *f*_{XY} is different from zero on the triangle bounded by *t* = 2, *u* = 0, and
*u* = *t*. The desired probability is

*P*(*Y* ≤ *X*) = (6/37) ∫_{0}^{2} ∫_{0}^{t} (*t* + 2*u*) *du* *dt* = 32/37 ≈ 0.8649

Suppose the pair {*X*, *Y*} has joint density *f*_{XY}. Determine the density for

*Z* = *X* + *Y*

SOLUTION

For any fixed *v*, the region *Q*_{v} is the portion of the plane on or below the line
*u* = *v* – *t* (see Figure 10.2). Thus

*F*_{Z}(*v*) = *P*(*X* + *Y* ≤ *v*) = ∫_{–∞}^{∞} ∫_{–∞}^{v–t} *f*_{XY}(*t*, *u*) *du* *dt*

Differentiating with the aid of the fundamental theorem of calculus, we get

*f*_{Z}(*v*) = ∫_{–∞}^{∞} *f*_{XY}(*t*, *v* – *t*) *dt*

This integral expression is known as a *convolution integral*.

Suppose the pair {*X*, *Y*} has joint uniform density on the unit square 0 ≤ *t* ≤ 1,
0 ≤ *u* ≤ 1. Determine the density for *Z* = *X* + *Y*.

SOLUTION

*F*_{Z}(*v*) is the probability in the region *Q*_{v}: *u* ≤ *v* – *t*. Now
*F*_{Z}(*v*) = 1 – *P*(*Q*_{v}^{c}), where the complementary set *Q*_{v}^{c} is the set of points above the line.
As Figure 3 shows, for *v* ≤ 1, the part of *Q*_{v} which has probability mass is the lower
shaded triangular region on the figure, which has area (and hence probability) *v*^{2}/2.
For *v* > 1, the complementary region *Q*_{v}^{c} is the upper shaded region. It has area
(2 – *v*)^{2}/2, so that in this case *F*_{Z}(*v*) = 1 – (2 – *v*)^{2}/2. Thus,

*F*_{Z}(*v*) = *v*^{2}/2 for 0 ≤ *v* ≤ 1 and *F*_{Z}(*v*) = 1 – (2 – *v*)^{2}/2 for 1 < *v* ≤ 2

Differentiation shows that *Z* has the symmetric triangular distribution on [0, 2], since

*f*_{Z}(*v*) = *v* for 0 ≤ *v* ≤ 1 and *f*_{Z}(*v*) = 2 – *v* for 1 < *v* ≤ 2

With the use of indicator functions, these may be combined into a single expression

*f*_{Z}(*v*) = *I*_{[0,1]}(*v*) *v* + *I*_{(1,2]}(*v*)(2 – *v*)

ALTERNATE SOLUTION

Since *f*_{XY}(*t*, *u*) = *I*_{[0,1]}(*t*) *I*_{[0,1]}(*u*), we have

*f*_{Z}(*v*) = ∫ *f*_{XY}(*t*, *v* – *t*) *dt* = ∫ *I*_{[0,1]}(*t*) *I*_{[0,1]}(*v* – *t*) *dt*

Now 0 ≤ *v* – *t* ≤ 1 iff *v* – 1 ≤ *t* ≤ *v*, so that

*f*_{Z}(*v*) = ∫_{max(0, v–1)}^{min(1, v)} *dt*

Integration with respect to *t* gives the result above.
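As a quick numerical check of the convolution result (a Python sketch, not in the original text), approximate the integral by a midpoint sum and compare with the triangular density:

```python
def f_Z(v, n=200000):
    """Midpoint approximation of the convolution integral for uniform [0,1] factors."""
    dt = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt          # t ranges over [0,1], where I_[0,1](t) = 1
        if 0.0 <= v - t <= 1.0:     # I_[0,1](v - t)
            total += dt
    return total

def triangular(v):
    """Density derived in the example: v on [0,1], 2 - v on (1,2]."""
    if 0 <= v <= 1:
        return v
    if 1 < v <= 2:
        return 2 - v
    return 0.0

for v in (0.25, 0.5, 1.0, 1.5):
    assert abs(f_Z(v) - triangular(v)) < 1e-3
```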

**Independence of functions of independent random variables**

Suppose {*X*, *Y*} is an independent pair. Let *Z* = *g*(*X*), *W* = *h*(*Y*). Since

*Z*^{–1}(*M*) = *X*^{–1}[*g*^{–1}(*M*)] and *W*^{–1}(*N*) = *Y*^{–1}[*h*^{–1}(*N*)]

the pair {*Z*^{–1}(*M*), *W*^{–1}(*N*)} is independent for each pair {*M*, *N*}. Thus, the pair {*Z*, *W*} is independent.

If {*X*, *Y*} is an independent pair and *Z* = *g*(*X*), *W* = *h*(*Y*), then
the pair {*Z*, *W*} is independent. However, if *Z* = *g*(*X*, *Y*) and *W* = *h*(*X*, *Y*), then in
general {*Z*, *W*} is not independent. This is illustrated for simple random variables with
the aid of the m-procedure *jointzw* at the end of the next section.

Suppose {*X*, *Y*} is an independent pair with simple approximations *X*_{s} and
*Y*_{s} as described in Distribution Approximations.
As functions of *X* and *Y*, respectively, the pair {*X*_{s}, *Y*_{s}} is independent. Also
each pair {*X*, *Y*_{s}} and {*X*_{s}, *Y*} is independent.

### Use of MATLAB on pairs of simple random variables

In the single-variable case, we use array operations on the values of *X* to determine
a matrix of values of *g*(*X*). In the two-variable case, we must use array operations on
the calculating matrices *t* and *u* to
obtain a matrix *G* whose elements are *g*(*t*_{i}, *u*_{j}). To obtain the distribution for
*Z* = *g*(*X*, *Y*), we may use the m-function csort on *G* and the joint probability matrix *P*.
A first step, then, is the use of jcalc or icalc to set up the joint distribution and
the calculating matrices. This is illustrated in the following example.

```matlab
% file jdemo3.m
% data for joint simple distribution
X = [-4 -2 0 1 3];
Y = [0 1 2 4];
P = [0.0132    0.0198    0.0297    0.0209    0.0264;
     0.0372    0.0558    0.0837    0.0589    0.0744;
     0.0516    0.0774    0.1161    0.0817    0.1032;
     0.0180    0.0270    0.0405    0.0285    0.0360];

jdemo3                      % Call for data
jcalc                       % Set up of calculating matrices t, u.
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t.^2 - 3*u;             % Formation of G = [g(ti,uj)]
M = G >= 1;                 % Calculation using the XY distribution
PM = total(M.*P)            % Alternately, use total((G>=1).*P)
PM = 0.4665
[Z,PZ] = csort(G,P);
PM = (Z>=1)*PZ'             % Calculation using the Z distribution
PM = 0.4665
disp([Z;PZ]')               % Display of the Z distribution
  -12.0000    0.0297
  -11.0000    0.0209
   -8.0000    0.0198
   -6.0000    0.0837
   -5.0000    0.0589
   -3.0000    0.1425
   -2.0000    0.1375
         0    0.0405
    1.0000    0.1059
    3.0000    0.0744
    4.0000    0.0402
    6.0000    0.1032
    9.0000    0.0360
   10.0000    0.0372
   13.0000    0.0516
   16.0000    0.0180
```
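A Python analogue of this two-variable discrete alternative (plain loops in place of the calculating matrices *t* and *u*; note the "as on the plane" row order, with the largest *Y* value in the top row — this is our stand-in, not jcalc itself):

```python
# Joint distribution from jdemo3, entered "as on the plane"
X = [-4, -2, 0, 1, 3]
Y = [4, 2, 1, 0]                 # decreasing, matching the rows of P
P = [[0.0132, 0.0198, 0.0297, 0.0209, 0.0264],
     [0.0372, 0.0558, 0.0837, 0.0589, 0.0744],
     [0.0516, 0.0774, 0.1161, 0.0817, 0.1032],
     [0.0180, 0.0270, 0.0405, 0.0285, 0.0360]]

# Discrete alternative: evaluate g(t,u) = t^2 - 3u at each pair, select, add
PM = sum(P[i][j]
         for i, u in enumerate(Y)
         for j, t in enumerate(X)
         if t * t - 3 * u >= 1)
print(round(PM, 4))   # 0.4665, as in the MATLAB run
```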

We extend the example above by considering a function which has a composite definition.

Let *W* = *h*(*X*, *Y*) = *X* for *X* + *Y* ≥ 1 and *W* = *X*^{2} + *Y*^{2} for *X* + *Y* < 1.

```matlab
H = t.*(t+u>=1) + (t.^2 + u.^2).*(t+u<1);  % Specification of h(t,u)
[W,PW] = csort(H,P);                       % Distribution for W = h(X,Y)
disp([W;PW]')
   -2.0000    0.0198
         0    0.2700
    1.0000    0.1900
    3.0000    0.2400
    4.0000    0.0270
    5.0000    0.0774
    8.0000    0.0558
   16.0000    0.0180
   17.0000    0.0516
   20.0000    0.0372
   32.0000    0.0132
ddbn                  % Plot of distribution function
Enter row matrix of values  W
Enter row matrix of probabilities  PW
print                 % See Figure 10.4
```

**Joint distributions for two functions of** (*X*, *Y*)

In previous treatments, we use csort to obtain the *marginal* distribution for a
single function *Z* = *g*(*X*, *Y*). It is often desirable to have the *joint* distribution
for a pair *Z* = *g*(*X*, *Y*) and *W* = *h*(*X*, *Y*). As special cases, we may have *Z* = *X*
or *W* = *Y*. Suppose *Z* has values [*z*_{1} *z*_{2} ⋯ *z*_{c}] and *W* has values
[*w*_{1} *w*_{2} ⋯ *w*_{r}].

The joint distribution requires the probability of each pair, *P*(*W* = *w*_{i}, *Z* = *z*_{j}).
Each such pair of values corresponds to a set of pairs of *X* and *Y* values. To determine
the joint probability matrix **PZW** for (*Z*, *W*) arranged as on the plane, we assign to
each position (*i*, *j*) the probability *P*(*W* = *w*_{i}, *Z* = *z*_{j}), with values of *W* increasing
upward. Each pair of (*W*, *Z*) values corresponds to one or more pairs of (*Y*, *X*) values.
If we select and add the probabilities corresponding to the latter pairs, we have
*P*(*W* = *w*_{i}, *Z* = *z*_{j}). This may be accomplished as follows:

1. Set up calculation matrices *t* and *u* as with jcalc.
2. Use array arithmetic to determine the matrices of values *G* = [*g*(*t*, *u*)] and *H* = [*h*(*t*, *u*)].
3. Use csort to determine the *Z* and *W* value matrices and the **PZ** and **PW** marginal probability matrices.
4. For each pair (*w*_{i}, *z*_{j}), use the MATLAB function *find* to determine the positions *a* for which

   (H==W(i))&(G==Z(j))

5. Assign to the (*i*, *j*) position in the joint probability matrix **PZW** for (*Z*, *W*) the probability total(P(a)).

We first examine the basic calculations, which are then implemented in the m-procedure
*jointzw*.

```matlab
% file jdemo7.m
P = [0.061 0.030 0.060 0.027 0.009;
     0.015 0.001 0.048 0.058 0.013;
     0.040 0.054 0.012 0.004 0.013;
     0.032 0.029 0.026 0.023 0.039;
     0.058 0.040 0.061 0.053 0.018;
     0.050 0.052 0.060 0.001 0.013];
X = -2:2;
Y = -2:3;

jdemo7                  % Call for data in jdemo7.m
jcalc                   % Used to set up calculation matrices t, u
- - - - - - - - - - - -
H = u.^2                % Matrix of values for W = h(X,Y)
H =  9     9     9     9     9
     4     4     4     4     4
     1     1     1     1     1
     0     0     0     0     0
     1     1     1     1     1
     4     4     4     4     4
G = abs(t)              % Matrix of values for Z = g(X,Y)
G =  2     1     0     1     2
     2     1     0     1     2
     2     1     0     1     2
     2     1     0     1     2
     2     1     0     1     2
     2     1     0     1     2
[W,PW] = csort(H,P)     % Determination of marginal for W
W  =  0     1     4     9
PW =  0.1490    0.3530    0.3110    0.1870
[Z,PZ] = csort(G,P)     % Determination of marginal for Z
Z  =  0     1     2
PZ =  0.2670    0.3720    0.3610
r = W(3)                % Third value for W
r  =  4
s = Z(2)                % Second value for Z
s  =  1
```

To determine *P*(*W*=4,*Z*=1), we need to determine the (*t*,*u*) positions for
which this pair of (*W*,*Z*) values is taken on. By inspection, we find these to be (2,2), (6,2),
(2,4), and (6,4). Then *P*(*W*=4,*Z*=1) is the total probability at these positions.
This is 0.001 + 0.052 + 0.058 + 0.001 = 0.112. We put this probability in the joint
probability matrix **PZW** at the *W*=4,*Z*=1 position. This may be achieved by
MATLAB with the following operations.

```matlab
[i,j] = find((H==W(3))&(G==Z(2)));  % Location of (t,u) positions
disp([i j])                         % Optional display of positions
     2     2
     6     2
     2     4
     6     4
a = find((H==W(3))&(G==Z(2)));      % Location in more convenient form
P0 = zeros(size(P));                % Setup of zero matrix
P0(a) = P(a)                        % Display of designated probabilities in P
P0 =
         0         0         0         0         0
         0    0.0010         0    0.0580         0
         0         0         0         0         0
         0         0         0         0         0
         0         0         0         0         0
         0    0.0520         0    0.0010         0
PZW = zeros(length(W),length(Z))    % Initialization of PZW matrix
PZW(3,2) = total(P(a))              % Assignment to PZW matrix with
PZW =                               % W increasing downward
         0         0         0
         0         0         0
         0    0.1120         0
         0         0         0
```

```matlab
PZW = flipud(PZW)                   % Assignment with W increasing upward
PZW =
         0         0         0
         0    0.1120         0
         0         0         0
         0         0         0
```

The procedure *jointzw* carries out this operation for each possible pair of *W* and *Z* values (with the
`flipud`

operation coming only after all individual assignments are made).
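The same accumulation can be sketched in Python for the jdemo7 data (a dictionary keyed by (*w*, *z*) plays the role of the **PZW** matrix; this is our stand-in, not the m-procedure itself):

```python
# Collect P(W = w, Z = z) by summing P over all (t,u) positions
# with h(t,u) = u^2 = w and g(t,u) = |t| = z.
X = [-2, -1, 0, 1, 2]
Y = [3, 2, 1, 0, -1, -2]         # decreasing: P is entered "as on the plane"
P = [[0.061, 0.030, 0.060, 0.027, 0.009],
     [0.015, 0.001, 0.048, 0.058, 0.013],
     [0.040, 0.054, 0.012, 0.004, 0.013],
     [0.032, 0.029, 0.026, 0.023, 0.039],
     [0.058, 0.040, 0.061, 0.053, 0.018],
     [0.050, 0.052, 0.060, 0.001, 0.013]]

PZW = {}                         # keys (w, z); values P(W=w, Z=z)
for i, u in enumerate(Y):
    for j, t in enumerate(X):
        key = (u * u, abs(t))
        PZW[key] = PZW.get(key, 0.0) + P[i][j]

print(round(PZW[(4, 1)], 4))   # 0.112, the value found by inspection above
```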

*Z* = *g*(*X*, *Y*) = ||*X*| – *Y*| and *W* = *h*(*X*, *Y*) = |*XY*|

```matlab
% file jdemo3.m  data for joint simple distribution
X = [-4 -2 0 1 3];
Y = [0 1 2 4];
P = [0.0132    0.0198    0.0297    0.0209    0.0264;
     0.0372    0.0558    0.0837    0.0589    0.0744;
     0.0516    0.0774    0.1161    0.0817    0.1032;
     0.0180    0.0270    0.0405    0.0285    0.0360];

jdemo3          % Call for data
jointzw         % Call for m-program
Enter joint prob for (X,Y): P
Enter values for X: X
Enter values for Y: Y
Enter expression for g(t,u): abs(abs(t)-u)
Enter expression for h(t,u): abs(t.*u)
Use array operations on Z, W, PZ, PW, v, w, PZW
disp(PZW)
    0.0132         0         0         0         0
         0    0.0264         0         0         0
         0         0    0.0570         0         0
         0    0.0744         0         0         0
    0.0558         0         0    0.0725         0
         0         0    0.1032         0         0
         0    0.1363         0         0         0
    0.0817         0         0         0         0
    0.0405    0.1446    0.1107    0.0360    0.0477
EZ = total(v.*PZW)
EZ = 1.4398
ez = Z*PZ'          % Alternate, using marginal dbn
ez = 1.4398
EW = total(w.*PZW)
EW = 2.6075
ew = W*PW'          % Alternate, using marginal dbn
ew = 2.6075
M = v > w;          % P(Z>W)
PM = total(M.*PZW)
PM = 0.3390
```

As noted in the previous section, if {*X*, *Y*} is an independent pair and *Z* = *g*(*X*),
*W* = *h*(*Y*), then the pair {*Z*, *W*} is independent. However, if
*Z* = *g*(*X*, *Y*) and *W* = *h*(*X*, *Y*), then in general the pair {*Z*, *W*} is not independent.
We may illustrate this with the aid of the m-procedure *jointzw*.

```matlab
jdemo3
itest
Enter matrix of joint probabilities  P
The pair {X,Y} is independent        % The pair {X,Y} is independent
jointzw
Enter joint prob for (X,Y): P
Enter values for X: X
Enter values for Y: Y
Enter expression for g(t,u): t.^2 - 3*t    % Z = g(X)
Enter expression for h(t,u): abs(u) + 3    % W = h(Y)
Use array operations on Z, W, PZ, PW, v, w, PZW
itest
Enter matrix of joint probabilities  PZW
The pair {X,Y} is independent        % The pair {g(X),h(Y)} is independent
jdemo3                               % Refresh data
jointzw
Enter joint prob for (X,Y): P
Enter values for X: X
Enter values for Y: Y
Enter expression for g(t,u): t+u           % Z = g(X,Y)
Enter expression for h(t,u): t.*u          % W = h(X,Y)
Use array operations on Z, W, PZ, PW, v, w, PZW
```

```matlab
itest
Enter matrix of joint probabilities  PZW
The pair {X,Y} is NOT independent    % The pair {g(X,Y),h(X,Y)} is not indep
To see where the product rule fails, call for D   % Fails for all pairs
```

### Absolutely continuous case: analysis and approximation

As in the analysis of Joint Distributions, we may set up a simple approximation to the joint distribution and proceed as for simple random variables. In this section, we solve several examples analytically, then obtain simple approximations.

Suppose the pair {*X*, *Y*} has joint density *f*_{XY}. Let *Z* = *XY*. Determine *Q*_{v} such that

*P*(*Z* ≤ *v*) = *P*((*X*, *Y*) ∈ *Q*_{v})

SOLUTION (see Figure 10.5)

{*X*, *Y*} ∼ uniform on unit square, so that *f*_{XY} = 1 on 0 ≤ *t* ≤ 1, 0 ≤ *u* ≤ 1. Then (see Figure 10.6)

*P*(*XY* ≤ *v*) = *v* + ∫_{v}^{1} (*v*/*t*) *dt*

Integration shows

*F*_{Z}(*v*) = *P*(*XY* ≤ *v*) = *v*(1 – ln *v*), for 0 < *v* ≤ 1

For *v* = 0.5, *F*_{Z}(0.5) = 0.5(1 + ln 2) ≈ 0.8466.

```matlab
% Note that although f = 1, it must be expressed in terms of t, u.
tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  (u>=0)&(t>=0)
Use array operations on X, Y, PX, PY, t, u, and P
G = t.*u;
```

```matlab
[Z,PZ] = csort(G,P);
p = (Z<=0.5)*PZ'
p = 0.8465            % Theoretical value 0.8466, above
```
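A Python stand-in for the tuappr grid (midpoint cells, each carrying probability *d*²; our construction, not the m-procedure):

```python
import math

# Uniform pair on the unit square, approximated on an n-by-n midpoint grid
n = 500
d = 1.0 / n
mid = [(k + 0.5) * d for k in range(n)]
p = sum(d * d for t in mid for u in mid if t * u <= 0.5)   # P(XY <= 0.5)

exact = 0.5 * (1 - math.log(0.5))   # F_Z(v) = v(1 - ln v) at v = 0.5
print(round(p, 4), round(exact, 4))
```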

The pair {*X*, *Y*} has joint density *f*_{XY}(*t*, *u*) = (6/37)(*t* + 2*u*)
on the region bounded by *t* = 0, *t* = 2, *u* = 0, and *u* = max{1, *t*} (see Figure 7).
Let *Z* = *XY*. Determine *P*(*Z* ≤ 1).

ANALYTIC SOLUTION

Reference to Figure 10.7 shows that

*P*(*Z* ≤ 1) = (6/37) ∫_{0}^{1} ∫_{0}^{1} (*t* + 2*u*) *du* *dt* + (6/37) ∫_{1}^{2} ∫_{0}^{1/t} (*t* + 2*u*) *du* *dt* = 9/37 + 9/37 = 18/37 ≈ 0.4865

APPROXIMATE SOLUTION

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  300
Enter number of Y approximation points  300
Enter expression for joint density  (6/37)*(t + 2*u).*(u<=max(t,1))
Use array operations on X, Y, PX, PY, t, u, and P
Q = t.*u<=1;
PQ = total(Q.*P)
PQ = 0.4853           % Theoretical value 0.4865, above
G = t.*u;             % Alternate, using the distribution for Z
[Z,PZ] = csort(G,P);
PZ1 = (Z<=1)*PZ'
PZ1 = 0.4853
```

In the following example, the function *g* has a compound definition. That is, it has
a different rule for different parts of the plane.

The pair {*X*, *Y*} has joint density *f*_{XY}(*t*, *u*) = (2/3)(*t* + 2*u*)
on the unit square 0 ≤ *t* ≤ 1, 0 ≤ *u* ≤ 1.

*Z* = *Y* for *Y* ≤ *X*^{2} and *Z* = *X* + *Y* for *Y* > *X*^{2}. Determine *P*(*Z* ≤ 0.5).

ANALYTICAL SOLUTION

*P*(*Z* ≤ 1/2) = *P*((*X*, *Y*) ∈ *Q*), where *Q* = {(*t*, *u*): *u* ≤ *t*^{2}, *u* ≤ 1/2} ∪ {(*t*, *u*): *u* > *t*^{2}, *t* + *u* ≤ 1/2}.
Reference to Figure 10.8 shows that this is the part of the unit square for which
*u* ≤ min(max(*t*^{2}, 1/2 – *t*), 1/2). We may break up the integral into three parts. Let
1/2 – *t*_{1} = *t*_{1}^{2} and *t*_{2}^{2} = 1/2. Then

*P*(*Z* ≤ 1/2) = (2/3) ∫_{0}^{t1} ∫_{0}^{1/2–t} (*t* + 2*u*) *du* *dt* + (2/3) ∫_{t1}^{t2} ∫_{0}^{t²} (*t* + 2*u*) *du* *dt* + (2/3) ∫_{t2}^{1} ∫_{0}^{1/2} (*t* + 2*u*) *du* *dt* ≈ 0.2322

APPROXIMATE SOLUTION

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 1]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  200
Enter number of Y approximation points  200
Enter expression for joint density  (2/3)*(t + 2*u)
Use array operations on X, Y, PX, PY, t, u, and P
Q = u <= t.^2;
G = u.*Q + (t + u).*(1-Q);
prob = total((G<=1/2).*P)
prob = 0.2328         % Theoretical is 0.2322, above
```

The setup of the integrals involves careful attention to the geometry of the system. Once set up, the evaluation is elementary but tedious. On the other hand, the approximation proceeds in a straightforward manner from the normal description of the problem. The numerical result compares quite closely with the theoretical value and accuracy could be improved by taking more subdivision points.

## 10.3. The Quantile Function^{*}

### The Quantile Function

The quantile function for a probability distribution has many uses in both the theory
and application of probability. If *F* is a probability distribution function, the
quantile function may be used to “construct” a random variable having *F* as its
distributions function. This fact serves as the basis of a method of simulating the
“sampling” from an arbitrary distribution with the aid of a *random number generator*. Also, given any finite class

{*X*_{i}: 1 ≤ *i* ≤ *n*} of random variables, an *independent class* {*Y*_{i}: 1 ≤ *i* ≤ *n*} may be constructed, with each *X*_{i} and associated *Y*_{i} having the same (marginal) distribution. Quantile functions for simple random variables may be used to obtain an important Poisson approximation theorem (which we do not develop in this work). The quantile function is used to derive a number of useful special forms for mathematical expectation.

**General concept—properties, and examples**

If *F* is a probability distribution function, the associated quantile function *Q* is essentially
an inverse of *F*. The quantile function is defined on the unit
interval (0, 1). For *F* continuous and strictly increasing at *t*,
*Q*(*u*) = *t* iff *F*(*t*) = *u*. Thus, if *u* is a probability value, *t* = *Q*(*u*) is the value of
*t* for which *P*(*X* ≤ *t*) = *u*.

The m-function *norminv*, based on the MATLAB function *erfinv* (inverse error function),
calculates values of *Q* for the normal distribution.

The restriction to the continuous case is not essential. We consider a general definition which applies to any probability distribution function.

**Definition**: If *F* is a function having the properties of a probability
distribution function, then the *quantile function* for *F* is given by

*Q*(*u*) = inf{*t*: *F*(*t*) ≥ *u*}, for each *u*, 0 < *u* < 1

We note

- If *F*(*t**) ≥ *u**, then *t** ≥ *Q*(*u**).
- If *F*(*t**) < *u**, then *t** < *Q*(*u**).

Hence, we have the important property:

**(Q1)** *Q*(*u*) ≤ *t* iff *u* ≤ *F*(*t*).

The property (Q1) implies the following important property:

**(Q2)** If *U* ∼ uniform (0, 1), then *X* = *Q*(*U*) has distribution function
*F*_{X} = *F*.
To see this, note that *F*_{X}(*t*) = *P*[*Q*(*U*) ≤ *t*] = *P*[*U* ≤ *F*(*t*)] = *F*(*t*).

Property (Q2) implies that if *F* is any distribution function, with quantile function
*Q*, then the random variable *X* = *Q*(*U*), with *U* uniformly distributed on (0, 1),
has distribution function *F*.

Suppose {*X*_{i}: 1 ≤ *i* ≤ *n*} is an arbitrary class of random variables with
corresponding distribution functions {*F*_{i}: 1 ≤ *i* ≤ *n*}. Let {*Q*_{i}: 1 ≤ *i* ≤ *n*}
be the respective quantile functions. There is always
an independent class {*U*_{i}: 1 ≤ *i* ≤ *n*} iid uniform (0, 1) (marginals for
the joint uniform distribution on the unit hypercube with sides [0, 1]). Then
the random variables *Y*_{i} = *Q*_{i}(*U*_{i}), 1 ≤ *i* ≤ *n*, form an independent class
with the same marginals as the *X*_{i}.

Several other important properties of the quantile function may be established.

- *Q* is left-continuous, whereas *F* is right-continuous.
- If jumps are represented by vertical line segments, construction of the graph of *u* = *Q*(*t*) may be obtained by the following two-step procedure:

  1. Invert the entire figure (including axes), then
  2. Rotate the resulting figure 90 degrees counterclockwise.

  This is illustrated in Figure 10.9. If jumps are represented by vertical line segments, then jumps go into flat segments and flat segments go into vertical segments.
- If *X* is discrete with probability *p*_{i} at *t*_{i}, 1 ≤ *i* ≤ *n*, then *F* has jumps in the amount *p*_{i} at each *t*_{i} and is constant between. The quantile function is a left-continuous step function having value *t*_{i} on the interval (*b*_{i–1}, *b*_{i}], where *b*_{0} = 0 and *b*_{i} = ∑_{j=1}^{i} *p*_{j}. This may be stated

  *Q*(*u*) = *t*_{i} for *b*_{i–1} < *u* ≤ *b*_{i}
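The step-function formula is the basis of quantile calculations for simple random variables. A Python sketch (our stand-in, with `bisect` doing the table search and an illustrative simple distribution) implements it and checks property (Q1):

```python
import bisect

def quantile(u, T, B):
    """Left-continuous step quantile: Q(u) = t_i for b_{i-1} < u <= b_i.

    T lists the values t_i; B lists the cumulative probabilities b_1, ..., b_n.
    """
    return T[bisect.bisect_left(B, u)]

T  = [-2.3, -1.1, 3.3, 5.4, 7.1, 9.8]        # values t_i
PX = [0.18, 0.15, 0.23, 0.19, 0.13, 0.12]    # probabilities p_i
B, acc = [], 0.0
for p in PX:
    acc += p
    B.append(acc)

# Check property (Q1): Q(u) <= t iff u <= F(t), on a grid of u and t
F = lambda t: sum(p for ti, p in zip(T, PX) if ti <= t)
for k in range(1, 100):
    u = k / 100
    for t in [-3, -2.3, 0, 3.3, 5, 9.8, 10]:
        assert (quantile(u, T, B) <= t) == (u <= F(t))
```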

Suppose simple random variable *X* has distribution

Figure 1 shows a plot of the distribution function *F*_{X}. It is reflected in the
horizontal axis, then rotated counterclockwise to give the graph of *Q*(*u*) versus *u*.

We use the analytic characterization above in developing a number of m-functions and m-procedures.

**m-procedures for a simple random variable**

The basis for quantile function calculations for a simple random variable is
the formula above. This is implemented in the m-function *dquant*, which is used
as an element of several simulation procedures.
To plot the quantile function, we use *dquanplot* which employs the stairs function
and plots *X* vs the distribution function **FX**. The procedure *dsample* employs
dquant to obtain a “sample” from a population with simple distribution and to
calculate relative frequencies of the various values.

```matlab
X = [-2.3 -1.1 3.3 5.4 7.1 9.8];
PX = 0.01*[18 15 23 19 13 12];
dquanplot
Enter VALUES for X  X
Enter PROBABILITIES for X  PX    % See Figure 10.11 for plot of results
rand('seed',0)                   % Reset random number generator for reference
dsample
Enter row matrix of values  X
Enter row matrix of probabilities  PX
Sample size n  10000
```

```matlab
    Value      Prob    Rel freq
   -2.3000    0.1800    0.1805
   -1.1000    0.1500    0.1466
    3.3000    0.2300    0.2320
    5.4000    0.1900    0.1875
    7.1000    0.1300    0.1333
    9.8000    0.1200    0.1201
Sample average ex = 3.325
Population mean E[X] = 3.305
Sample variance = 16.32
Population variance Var[X] = 16.33
```

Sometimes it is desirable to know how many trials are required to reach a certain value, or
one of a set of values. A pair of m-procedures are available for simulation of that problem.
The first is called *targetset*. It calls for the population distribution and then
for the designation of a “target set” of possible values. The second procedure, *targetrun*,
calls for the number of repetitions of the experiment, and asks for the number of members of
the target set to be reached. After the runs are made, various statistics on the runs are
calculated and displayed.

```matlab
X = [-1.3 0.2 3.7 5.5 7.3];    % Population values
PX = [0.2 0.1 0.3 0.3 0.1];    % Population probabilities
E = [-1.3 3.7];                % Set of target states
targetset
Enter population VALUES  X
Enter population PROBABILITIES  PX
The set of population values is
   -1.3000    0.2000    3.7000    5.5000    7.3000
Enter the set of target values  E
Call for targetrun
```

rand('seed',0) % Seed set for possible comparison targetrun Enter the number of repetitions 1000 The target set is -1.3000 3.7000 Enter the number of target values to visit 2 The average completion time is 6.32 The standard deviation is 4.089 The minimum completion time is 2 The maximum completion time is 30 To view a detailed count, call for D. The first column shows the various completion times; the second column shows the numbers of trials yielding those times % Figure 10.6.4 shows the fraction of runs requiring t steps or less

**m-procedures for distribution functions**

A procedure *dfsetup* utilizes the distribution function to set up an
approximate simple distribution. The m-procedure *quanplot* is used to plot
the quantile function. This procedure is essentially the same as *dquanplot*, except
that the ordinary *plot* function is used in the continuous case, whereas the plotting
function *stairs* is used in the discrete case. The m-procedure *qsample* is used to obtain
a sample from the population. Since there are so many possible values, these are
not displayed as in the discrete case.
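A minimal sketch of the discretization step, assuming masses are placed at right endpoints with p_i = F(t_i) − F(t_{i−1}); the toolbox's exact rule may differ, and the name `dfsetup` is borrowed purely for illustration:

```python
def dfsetup(F, a, b, n):
    """Approximate a distribution by n point masses on [a, b]:
    mass F(t_i) - F(t_{i-1}) placed at the right endpoint t_i."""
    dx = (b - a) / n
    X = [a + (i + 1) * dx for i in range(n)]
    PX = []
    prev = F(a)
    for x in X:
        PX.append(F(x) - prev)
        prev = F(x)
    return X, PX

# Distribution function from the transcript below (jump of 0.2 at t = 0)
F = lambda t: 0.4 * (t + 1) if t < 0 else 0.6 + 0.4 * t
X, PX = dfsetup(F, -1, 1, 1000)
mean = sum(x * p for x, p in zip(X, PX))    # theoretical mean is 0
```

Because the increments telescope, the masses sum to F(b) − F(a) = 1, and jumps of F (such as the 0.2 jump at 0 here) are captured automatically by the interval containing the jump.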

```matlab
F = '0.4*(t + 1).*(t < 0) + (0.6 + 0.4*t).*(t >= 0)';    % String
dfsetup
Distribution function F is entered as a string variable,
either defined previously or upon call
Enter matrix [a b] of X-range endpoints  [-1 1]
Enter number of X approximation points  1000
Enter distribution function F as function of t  F
Distribution is in row matrices X and PX
quanplot
Enter row matrix of values  X
Enter row matrix of probabilities  PX
Probability increment h  0.01    % See Figure 10.13 for plot
qsample
Enter row matrix of X values  X
Enter row matrix of X probabilities  PX
Sample size n  1000
Sample average ex = -0.004146
Approximate population mean E(X) = -0.0004002    % Theoretical = 0
Sample variance vx = 0.25
Approximate population variance V(X) = 0.2664
```

**m-procedures for density functions**

An m-procedure *acsetup* is used to obtain the simple approximate
distribution. This is essentially the same as the procedure *tuappr*, except that
the density function is entered as a string variable. Then the procedures *quanplot* and
*qsample* are used as in the case of distribution functions.
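A corresponding sketch for the density case, assuming midpoint weights f(t_i)·dx renormalized to sum to one (again the toolbox's exact rule may differ; the name `acsetup` is borrowed for illustration):

```python
def acsetup(f, a, b, n):
    """Approximate an absolutely continuous distribution by n point masses:
    weights f(midpoint)*dx, renormalized to a probability distribution."""
    dx = (b - a) / n
    X = [a + (i + 0.5) * dx for i in range(n)]
    W = [f(x) * dx for x in X]
    s = sum(W)
    PX = [w / s for w in W]
    return X, PX

# Density from the transcript below: t^2 on [0,1), 1 - t/3 on [1,3]
f = lambda t: t**2 if t < 1 else 1 - t/3
X, PX = acsetup(f, 0, 3, 1000)
mean = sum(x * p for x, p in zip(X, PX))    # theoretical mean is 49/36
```

The renormalization step absorbs the small quadrature error in sum(W), so PX is an exact probability distribution even though each weight is only approximate.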

```matlab
acsetup
Density f is entered as a string variable,
either defined previously or upon call
Enter matrix [a b] of x-range endpoints  [0 3]
Enter number of x approximation points  1000
Enter density as a function of t  '(t.^2).*(t<1) + (1 - t/3).*(1<=t)'
Distribution is in row matrices X and PX
quanplot
Enter row matrix of values  X
Enter row matrix of probabilities  PX
Probability increment h  0.01    % See Figure 10.14 for plot
rand('seed',0)
qsample
Enter row matrix of values  X
Enter row matrix of probabilities  PX
Sample size n  1000
Sample average ex = 1.352
Approximate population mean E(X) = 1.361    % Theoretical = 49/36 = 1.3611
Sample variance vx = 0.3242
Approximate population variance V(X) = 0.3474    % Theoretical = 0.3474
```

## 10.4. Problems on Functions of Random Variables^{*}

Suppose *X* is a nonnegative, absolutely continuous random variable. Let
*Z*=*g*(*X*)=*C**e*^{–aX}, where *a*>0 and *C*>0. Then 0<*Z*≤*C*. Use properties
of the exponential and natural log function to show that

*Z*=*C**e*^{–aX}≤*v* iff *e*^{–aX}≤*v*/*C* iff –*a**X*≤ln(*v*/*C*)
iff *X*≥–ln(*v*/*C*)/*a*, so that

*F*_{Z}(*v*) = *P*(*Z*≤*v*) = *P*(*X*≥–ln(*v*/*C*)/*a*), for 0<*v*≤*C*.

Present value of future costs. Suppose money may be invested at an
annual rate *a*, compounded continuously. Then one dollar in hand now has a value
*e*^{ax} at the end of *x* years. Hence, one dollar spent *x* years in the future has a
*present value* *e*^{–ax}. Suppose a device put into operation has time to
failure (in years) *X*∼ exponential (*λ*). If the cost of replacement
at failure is *C* dollars, then the present value of the replacement is *Z*=*C**e*^{–aX}.
Suppose *λ*=1/10, *a*=0.07, and *C* = $1000.

Use the result of Exercise 2 to determine the probabilities *P*(*Z*≤700), *P*(*Z*≤500), and *P*(*Z*≤200).

Use a discrete approximation for the exponential density to approximate the probabilities in part (a). Truncate *X* at 1000 and use 10,000 approximation points.

```matlab
v = [700 500 200];
P = (v/1000).^(10/7)
P  =  0.6008    0.3715    0.1003
tappr
Enter matrix [a b] of x-range endpoints  [0 1000]
Enter number of x approximation points  10000
Enter density as a function of t  0.1*exp(-t/10)
Use row matrices X and PX as in the simple case
G = 1000*exp(-0.07*t);
PM1 = (G<=700)*PX'
PM1 = 0.6005
PM2 = (G<=500)*PX'
PM2 = 0.3716
PM3 = (G<=200)*PX'
PM3 = 0.1003
```
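In the exponential case, *P*(*X*≥*t*) = *e*^{–λt}, so the result of Exercise 2 gives the closed form *P*(*Z*≤*v*) = (*v*/*C*)^{λ/a} used in the transcript. A small Python check (illustrative, not toolbox code):

```python
import math

lam, a, C = 0.1, 0.07, 1000.0      # lambda = 1/10, a = 0.07, C = $1000

def p_z_le(v):
    """P(Z <= v) for Z = C*exp(-a*X) with X ~ exponential(lam):
    {Z <= v} = {X >= -ln(v/C)/a}, and P(X >= t) = exp(-lam*t)."""
    t = -math.log(v / C) / a
    return math.exp(-lam * t)      # algebraically equal to (v/C)**(lam/a)

probs = [p_z_le(v) for v in (700, 500, 200)]
```

The two routes through the formula agree to machine precision, confirming that the exponent is λ/a = 10/7.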

Optimal stocking of merchandise. A merchant is planning for the Christmas season. He intends to stock *m* units of a
certain item at a cost of *c* per unit. Experience indicates demand can be represented
by a random variable *D*∼ Poisson (*μ*). If units remain in stock at the
end of the season, they may be returned with recovery of *r* per unit. If demand
exceeds the number originally ordered, extra units may be ordered at a cost of
*s* each. Units are sold at a price *p* per unit.
If *Z*=*g*(*D*) is the gain from the sales, then

For *t*≤*m*: *g*(*t*) = (*p*–*r*)*t* + (*r*–*c*)*m*

For *t*>*m*: *g*(*t*) = (*p*–*s*)*t* + (*s*–*c*)*m*

Let *M*=(–∞,*m*]. Then

*g*(*t*) = *I*_{M}(*t*)[(*p*–*r*)*t* + (*r*–*c*)*m*] + *I*_{M^c}(*t*)[(*p*–*s*)*t* + (*s*–*c*)*m*]
= (*p*–*s*)*t* + (*s*–*c*)*m* + *I*_{M}(*t*)(*s*–*r*)(*t*–*m*)

Suppose *μ* = 50, *c* = 30, *p* = 50, *r* = 20, *s* = 40, *m* = 50.

Approximate
the Poisson random variable *D* by truncating at 100. Determine *P*(500≤*Z*≤1100).

```matlab
mu = 50;
D = 0:100;
c = 30; p = 50; r = 20; s = 40; m = 50;
PD = ipoisson(mu,D);
G = (p - s)*D + (s - c)*m + (s - r)*(D - m).*(D <= m);
M = (500<=G)&(G<=1100);
PM = M*PD'
PM = 0.9209
[Z,PZ] = csort(G,PD);    % Alternate: use dbn for Z
m = (500<=Z)&(Z<=1100);
pm = m*PZ'
pm = 0.9209
```
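The same computation can be sketched in plain Python; `poisson_pmf` is an illustrative helper standing in for ipoisson, with the pmf built iteratively to avoid large factorials:

```python
import math

def poisson_pmf(mu, kmax):
    """Poisson pmf values p(0), ..., p(kmax): p(k) = p(k-1) * mu / k."""
    p = [math.exp(-mu)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * mu / k)
    return p

# Parameters from the exercise
mu, c, price, r, s, m = 50, 30, 50, 20, 40, 50
PD = poisson_pmf(mu, 100)          # truncate demand D at 100

prob = 0.0
for d, pd in enumerate(PD):
    # g(d) = (p - s)d + (s - c)m + (s - r)(d - m) on {d <= m}
    g = (price - s) * d + (s - c) * m + (s - r) * (d - m) * (d <= m)
    if 500 <= g <= 1100:
        prob += pd
```

With these parameters g(d) = 30d − 500 for d ≤ 50 and 10d + 500 for d > 50, so the event {500 ≤ Z ≤ 1100} is {34 ≤ D ≤ 60}, and the sum reproduces PM = 0.9209.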

(See Example 2 from "Functions of a Random Variable") The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at $20 each (regardless of the number of individual buyers). Additional tickets are available according to the following schedule:

11-20, $18 each

21-30, $16 each

31-50, $15 each

51-100, $13 each

If the number of purchasers is a random variable *X*, the total cost (in dollars) is
a random quantity *Z*=*g*(*X*) described by

*g*(*X*) = 200 + 18 *I*_{M1}(*X*)(*X*–10) + (16–18) *I*_{M2}(*X*)(*X*–20) + (15–16) *I*_{M3}(*X*)(*X*–30) + (13–15) *I*_{M4}(*X*)(*X*–50)

where *M*1 = [10,∞), *M*2 = [20,∞), *M*3 = [30,∞), *M*4 = [50,∞).

Suppose *X*∼ Poisson (75). Approximate the Poisson distribution by truncating at 150.
Determine *P*(*Z*≥1000), *P*(*Z*≥1300), and *P*(900≤*Z*≤1400).

```matlab
X = 0:150;
PX = ipoisson(75,X);
G = 200 + 18*(X - 10).*(X>=10) + (16 - 18)*(X - 20).*(X>=20) + ...
    (15 - 16)*(X - 30).*(X>=30) + (13 - 15)*(X - 50).*(X>=50);
P1 = (G>=1000)*PX'
P1 = 0.9288
P2 = (G>=1300)*PX'
P2 = 0.1142
P3 = ((900<=G)&(G<=1400))*PX'
P3 = 0.9742
[Z,PZ] = csort(G,PX);    % Alternate: use dbn for Z
p1 = (Z>=1000)*PZ'
p1 = 0.9288
```
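A Python sketch of the same calculation, using `max(x - k, 0)` for the indicator products (the helper names are illustrative):

```python
import math

def poisson_pmf(mu, kmax):
    """Poisson pmf values p(0), ..., p(kmax), built iteratively."""
    p = [math.exp(-mu)]
    for k in range(1, kmax + 1):
        p.append(p[-1] * mu / k)
    return p

def cost(x):
    """Total ticket cost g(x): $200 base plus the marginal-rate corrections."""
    return (200 + 18 * max(x - 10, 0) - 2 * max(x - 20, 0)
            - 1 * max(x - 30, 0) - 2 * max(x - 50, 0))

PX = poisson_pmf(75, 150)          # X ~ Poisson(75), truncated at 150
p1 = sum(p for x, p in enumerate(PX) if cost(x) >= 1000)
p2 = sum(p for x, p in enumerate(PX) if cost(x) >= 1300)
p3 = sum(p for x, p in enumerate(PX) if 900 <= cost(x) <= 1400)
```

Writing g with `max` terms makes the marginal-rate structure explicit: each bracket of the price schedule contributes one correction term.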

(See Exercise 6 from "Problems on Random Vectors and Joint Distributions", and Exercise 1 from "Problems on Independent Classes of Random Variables") The pair {*X*,*Y*} has the joint distribution

(in m-file npr08_06.m):

Determine *P*(max{*X*,*Y*}≤4) and *P*(|*X*–*Y*|>3). Let *Z*=3*X*^{3}+3*X*^{2}*Y*–*Y*^{3}.

Determine *P*(*Z*<0) and *P*(–5<*Z*≤300).

```matlab
npr08_06
Data are in X, Y, P
jcalc
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
P1 = total((max(t,u)<=4).*P)
P1 = 0.4860
P2 = total((abs(t-u)>3).*P)
P2 = 0.4516
G = 3*t.^3 + 3*t.^2.*u - u.^3;
P3 = total((G<0).*P)
P3 = 0.5420
P4 = total(((-5<G)&(G<=300)).*P)
P4 = 0.3713
[Z,PZ] = csort(G,P);    % Alternate: use dbn for Z
p4 = ((-5<Z)&(Z<=300))*PZ'
p4 = 0.3713
```

(See Exercise 2 from "Problems on Independent Classes of Random Variables") The pair {*X*,*Y*} has the joint distribution (in m-file npr09_02.m):

Determine *P*({*X*+*Y*≥5}∪{*Y*≤2}) and *P*(*X*^{2}+*Y*^{2}≤10).

```matlab
npr09_02
Data are in X, Y, P
jcalc
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
M1 = (t+u>=5)|(u<=2);
P1 = total(M1.*P)
P1 = 0.7054
M2 = t.^2 + u.^2 <= 10;
P2 = total(M2.*P)
P2 = 0.3282
```

(See Exercise 7 from "Problems on Random Vectors and Joint Distributions", and Exercise 3 from "Problems on Independent Classes of Random Variables") The pair {*X*,*Y*} has the joint distribution

(in m-file npr08_07.m):

| *u* \ *t* | -3.1 | -0.5 | 1.2 | 2.4 | 3.7 | 4.9 |
|---|---|---|---|---|---|---|
| 7.5 | 0.0090 | 0.0396 | 0.0594 | 0.0216 | 0.0440 | 0.0203 |
| 4.1 | 0.0495 | 0 | 0.1089 | 0.0528 | 0.0363 | 0.0231 |
| -2.0 | 0.0405 | 0.1320 | 0.0891 | 0.0324 | 0.0297 | 0.0189 |
| -3.8 | 0.0510 | 0.0484 | 0.0726 | 0.0132 | 0 | 0.0077 |

Determine *P*(*X*^{2}–3*X*≤0) and *P*(*X*^{3}–3|*Y*|<3).

```matlab
npr08_07
Data are in X, Y, P
jcalc
Enter JOINT PROBABILITIES (as on the plane)  P
Enter row matrix of VALUES of X  X
Enter row matrix of VALUES of Y  Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
M1 = t.^2 - 3*t <= 0;
P1 = total(M1.*P)
P1 = 0.4500
M2 = t.^3 - 3*abs(u) < 3;
P2 = total(M2.*P)
P2 = 0.7876
```
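The same two probabilities can be computed directly from the table in plain Python, mirroring the jcalc computation (a sketch, not toolbox code):

```python
# Joint distribution from the table above; rows follow u = 7.5, 4.1, -2.0, -3.8
T = [-3.1, -0.5, 1.2, 2.4, 3.7, 4.9]
U = [7.5, 4.1, -2.0, -3.8]
P = [
    [0.0090, 0.0396, 0.0594, 0.0216, 0.0440, 0.0203],
    [0.0495, 0.0000, 0.1089, 0.0528, 0.0363, 0.0231],
    [0.0405, 0.1320, 0.0891, 0.0324, 0.0297, 0.0189],
    [0.0510, 0.0484, 0.0726, 0.0132, 0.0000, 0.0077],
]

# P(X^2 - 3X <= 0): the condition depends on t only (true for 0 <= t <= 3)
p1 = sum(P[i][j] for i in range(len(U)) for j in range(len(T))
         if T[j]**2 - 3*T[j] <= 0)

# P(X^3 - 3|Y| < 3): the condition depends on both coordinates
p2 = sum(P[i][j] for i in range(len(U)) for j in range(len(T))
         if T[j]**3 - 3*abs(U[i]) < 3)
```

Summing cell probabilities over the event region is exactly what `total(M.*P)` does with 0-1 masks in the transcript.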

For the pair in Exercise 8., let *Z*=*g*(*X*,*Y*)=3*X*^{2}+2*X**Y*–*Y*^{2}. Determine and plot the distribution function for *Z*.

```matlab
G = 3*t.^2 + 2*t.*u - u.^2;    % Determine g(X,Y)
[Z,PZ] = csort(G,P);           % Obtain dbn for Z = g(X,Y)
ddbn                           % Call for plotting m-procedure
Enter row matrix of VALUES  Z
Enter row matrix of PROBABILITIES  PZ    % Plot not reproduced here
```
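The consolidation step performed by csort can be sketched in Python: sum the probabilities attached to equal values of g(X,Y), then sort. This is an illustrative re-implementation, not the toolbox code:

```python
from collections import defaultdict

def csort(G, P):
    """Consolidate a value list and matching probability list into a
    distribution: sorted distinct values with summed probabilities."""
    acc = defaultdict(float)
    for g, p in zip(G, P):
        acc[g] += p
    Z = sorted(acc)
    PZ = [acc[z] for z in Z]
    return Z, PZ

Z, PZ = csort([2, 5, 2, 7, 5], [0.1, 0.2, 0.3, 0.15, 0.25])
# Z = [2, 5, 7]
```

Repeated values of g are the whole point: distinct (X, Y) pairs can map to the same Z value, and their probabilities must be pooled to get the distribution of Z.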

For the pair in Exercise 8, let *W* = *X* *I*_{M}(*X*,*Y*) + 2*Y* *I*_{M^c}(*X*,*Y*), where *M* = {(*t*,*u*): *t*+*u*≤4}. Determine and plot the distribution function for *W*.

```matlab
H = t.*(t+u<=4) + 2*u.*(t+u>4);
[W,PW] = csort(H,P);
ddbn
Enter row matrix of VALUES  W
Enter row matrix of PROBABILITIES  PW    % Plot not reproduced here
```

**For the distributions in Exercises 10-15 below**

Determine analytically the indicated probabilities.

Use a discrete approximation to calculate the same probabilities.

*f*_{XY}(*t*,*u*) = (3/88)(2*t* + 3*u*^{2}) for
0≤*t*≤2, 0≤*u*≤1+*t* (see Exercise 15 from "Problems on Random Vectors and Joint Distributions").

*Z* = *I*_{ [ 0 , 1 ] }(*X*) 4*X* + *I*_{ ( 1 , 2 ] }(*X*) (*X* + *Y*)

Determine *P*(*Z*≤2)

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 3]
Enter number of X approximation points  200
Enter number of Y approximation points  300
Enter expression for joint density  (3/88)*(2*t + 3*u.^2).*(u<=1+t)
Use array operations on X, Y, PX, PY, t, u, and P
G = 4*t.*(t<=1) + (t+u).*(t>1);
[Z,PZ] = csort(G,P);
PZ2 = (Z<=2)*PZ'
PZ2 = 0.1010    % Theoretical = 563/5632 = 0.1000
```
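The tuappr-style grid approximation can be sketched in Python (illustrative; the toolbox's exact discretization may differ slightly, so only close agreement is expected):

```python
def tuappr(f, a, b, c, d, nx, ny):
    """Grid approximation of a joint density: point masses at cell
    midpoints, renormalized so the masses sum to one."""
    dx, dy = (b - a) / nx, (d - c) / ny
    t = [a + (i + 0.5) * dx for i in range(nx)]
    u = [c + (j + 0.5) * dy for j in range(ny)]
    W = [[f(ti, uj) * dx * dy for ti in t] for uj in u]   # W[j][i]
    s = sum(sum(row) for row in W)
    P = [[w / s for w in row] for row in W]
    return t, u, P

# Density for Exercise 10: (3/88)(2t + 3u^2) on 0 <= u <= 1 + t
f = lambda ti, uj: (3/88) * (2*ti + 3*uj**2) if uj <= 1 + ti else 0.0
t, u, P = tuappr(f, 0, 2, 0, 3, 200, 300)

# Z = 4X on [0,1], X + Y on (1,2]; accumulate P(Z <= 2) over the grid
pz2 = sum(P[j][i] for j in range(len(u)) for i in range(len(t))
          if (4*t[i] if t[i] <= 1 else t[i] + u[j]) <= 2)
```

The result lands near the theoretical value 563/5632 ≈ 0.1000, matching the transcript's 0.1010 up to discretization error.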

*f*_{XY}(*t*,*u*) = (24/11)*t**u* for 0≤*t*≤2,
0≤*u*≤min{1,2–*t*} (see Exercise 17 from "Problems on Random Vectors and Joint Distributions").

*Z* = *I*_{M}(*X*,*Y*)(1/2)*X* + *I*_{M^c}(*X*,*Y*)*Y*^{2}, where *M* = {(*t*,*u*): *u*>*t*}

Determine *P*(*Z*≤1/4).

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 1]
Enter number of X approximation points  400
Enter number of Y approximation points  200
Enter expression for joint density  (24/11)*t.*u.*(u<=min(1,2-t))
Use array operations on X, Y, PX, PY, t, u, and P
G = 0.5*t.*(u>t) + u.^2.*(u<t);
[Z,PZ] = csort(G,P);
pp = (Z<=1/4)*PZ'
pp = 0.4844    % Theoretical = 85/176 = 0.4830
```

*f*_{XY}(*t*,*u*) = (3/23)(*t* + 2*u*) for 0≤*t*≤2,
0≤*u*≤max{2–*t*,*t*} (see Exercise 18 from "Problems on Random Vectors and Joint Distributions").

*Z* = *I*_{M}(*X*,*Y*)(*X*+*Y*) + *I*_{M^c}(*X*,*Y*) 2*Y*, where *M* = {(*t*,*u*): max(*t*,*u*)≤1}

Determine *P*(*Z*≤1).

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  300
Enter number of Y approximation points  300
Enter expression for joint density  (3/23)*(t + 2*u).*(u<=max(2-t,t))
Use array operations on X, Y, PX, PY, t, u, and P
M = max(t,u) <= 1;
G = M.*(t + u) + (1 - M)*2.*u;
p = total((G<=1).*P)
p = 0.1960    % Theoretical = 9/46 = 0.1957
```

*f*_{XY}(*t*,*u*) = (12/179)(3*t*^{2} + *u*), for
0≤*t*≤2, 0≤*u*≤min{2,3–*t*} (see Exercise 19 from "Problems on Random Vectors and Joint Distributions").

*Z* = *I*_{M}(*X*,*Y*)(*X*+*Y*) + *I*_{M^c}(*X*,*Y*) 2*Y*^{2}, where *M* = {(*t*,*u*): *t*≤1, *u*≥1}

Determine *P*(*Z*≤2).

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  300
Enter number of Y approximation points  300
Enter expression for joint density  (12/179)*(3*t.^2 + u).*(u<=min(2,3-t))
Use array operations on X, Y, PX, PY, t, u, and P
M = (t<=1)&(u>=1);
G = M.*(t + u) + (1 - M)*2.*u.^2;
p = total((G<=2).*P)
p = 0.6662    % Theoretical = 119/179 = 0.6648
```

*f*_{XY}(*t*,*u*) = (12/227)(3*t* + 2*t**u*), for
0≤*t*≤2, 0≤*u*≤min{1+*t*,2} (see Exercise 20 from "Problems on Random Variables and Joint Distributions")

Determine *P*(*Z*≤1).

```matlab
tuappr
Enter matrix [a b] of X-range endpoints  [0 2]
Enter matrix [c d] of Y-range endpoints  [0 2]
Enter number of X approximation points  400
Enter number of Y approximation points  400
Enter expression for joint density  (12/227)*(3*t + 2*t.*u).*(u<=min(1+t,2))
Use array operations on X, Y, PX, PY, t, u, and P
Q = (u<=1).*(t<=1) + (t>1).*(u>=2-t).*(u<=t);
P = total(Q.*P)
P = 0.5478    % Theoretical = 124/227 = 0.5463
```

The class {*X*,*Y*,*Z*} is independent.

*X*=–2*I*_{A}+*I*_{B}+3*I*_{C}. Minterm probabilities for {*A*,*B*,*C*} are (in the usual order) 0.001 × [255 25 375 45 108 12 162 18].

*Y*=*I*_{D}+3*I*_{E}+*I*_{F}–3. The class {*D*,*E*,*F*} is independent, with *P*(*D*) = 0.32, *P*(*E*) = 0.56, *P*(*F*) = 0.40.

*Z* has distribution

| Value | -1.3 | 1.2 | 2.7 | 3.4 | 5.8 |
|---|---|---|---|---|---|
| Probability | 0.12 | 0.24 | 0.43 | 0.13 | 0.08 |

Determine *P*(*X*^{2}+3*X**Y*^{2}>3*Z*).

```matlab
% file npr10_16.m    Data for Exercise 16.
cx  = [-2 1 3 0];
pmx = 0.001*[255 25 375 45 108 12 162 18];
cy  = [1 3 1 -3];
pmy = minprob(0.01*[32 56 40]);
Z   = [-1.3 1.2 2.7 3.4 5.8];
PZ  = 0.01*[12 24 43 13 8];
disp('Data are in cx, pmx, cy, pmy, Z, PZ')

npr10_16    % Call for data
Data are in cx, pmx, cy, pmy, Z, PZ
[X,PX] = canonicf(cx,pmx);
[Y,PY] = canonicf(cy,pmy);
icalc3
Enter row matrix of X-values  X
Enter row matrix of Y-values  Y
Enter row matrix of Z-values  Z
Enter X probabilities  PX
Enter Y probabilities  PY
Enter Z probabilities  PZ
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
M = t.^2 + 3*t.*u.^2 > 3*v;
PM = total(M.*P)
PM = 0.3587
```
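The minterm-probability construction used by minprob can be sketched in Python (an illustrative re-implementation, assuming the usual order in which the last event's indicator varies fastest):

```python
from itertools import product

def minprob(p):
    """Minterm probabilities for a class of independent events, in the
    usual order: minterm 0 is 'all complements', the last event's
    indicator varies fastest (000, 001, 010, 011, ...)."""
    out = []
    for bits in product([0, 1], repeat=len(p)):
        prob = 1.0
        for b, pk in zip(bits, p):
            prob *= pk if b else 1 - pk   # independence: multiply factors
        out.append(prob)
    return out

# Probabilities of D, E, F from the exercise
pmy = minprob([0.32, 0.56, 0.40])
# pmy[0] = P(D^c E^c F^c) = 0.68 * 0.44 * 0.60
```

Independence is what justifies the product form: each minterm probability is the product of P or 1 − P over the events, and the 2^n minterm probabilities always sum to one.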

The simple random variable *X* has distribution

| Value | -3.1 | -0.5 | 1.2 | 2.4 | 3.7 | 4.9 |
|---|---|---|---|---|---|---|
| Probability | 0.15 | 0.22 | 0.33 | 0.12 | 0.11 | 0.07 |

Plot the distribution function *F*_{X} and the quantile function *Q*_{X}. Take a random sample of size
*n*=10,000. Compare the relative frequency for each value with the probability that value is taken on.

```matlab
X = [-3.1 -0.5 1.2 2.4 3.7 4.9];
PX = 0.01*[15 22 33 12 11 7];
ddbn
Enter row matrix of VALUES  X
Enter row matrix of PROBABILITIES  PX    % Plot not reproduced here
dquanplot
Enter VALUES for X  X
Enter PROBABILITIES for X  PX    % Plot not reproduced here
rand('seed',0)    % Reset random number generator
dsample           % for comparison purposes
Enter row matrix of VALUES  X
Enter row matrix of PROBABILITIES  PX
Sample size n  10000

    Value      Prob    Rel freq
  -3.1000    0.1500    0.1490
  -0.5000    0.2200    0.2164
   1.2000    0.3300    0.3340
   2.4000    0.1200    0.1184
   3.7000    0.1100    0.1070
   4.9000    0.0700    0.0752
Sample average ex = 0.8792
Population mean E[X] = 0.859
Sample variance vx = 5.146
Population variance Var[X] = 5.112
```