Linear Algebra by Stephen H. Friedberg: PDF free download

[Front matter: cataloging data for Friedberg, Insel, and Spence, Linear Algebra (includes indexes; ISBN); cover photo credit Gunter Lepkowski, Bauhaus-Archiv, Berlin; copyright notice stating that no part of the book may be reproduced, in any form or by any means, without permission in writing from the publisher; list of the publisher's international offices, including Sydney and Singapore.]

Contents (partial): Vector Spaces; Linear Combinations and Systems of Linear Equations; Linear Dependence and Linear Independence; Bases and Dimension; Maximal Linearly Independent Subsets; Invertibility and Isomorphisms; The Change of Coordinate Matrix; Dual Spaces; Systems of Linear Equations—Theoretical Aspects; Determinants of Order n; Properties of Determinants; Matrix Limits and Markov Chains; Canonical Forms.

The Rational Canonical Form. Appendices: Complex Numbers; Polynomials. Answers to Selected Exercises. In addition, linear algebra continues to be of great importance in modern treatments of geometry and analysis.

The primary purpose of this fourth edition of Linear Algebra is to present a careful treatment of the principal topics of linear algebra and to illustrate the power of the subject through a variety of applications. Our major thrust emphasizes the symbiotic relationship between linear transformations and matrices. Although the only formal prerequisite for this book is a one-year course in calculus, it requires the mathematical sophistication of typical junior and senior mathematics majors.

The core material (vector spaces, linear transformations and matrices, systems of linear equations, determinants, diagonalization, and inner product spaces) is found in Chapters 1 through 5 and the opening sections of Chapter 6. Chapters 6 and 7, on inner product spaces and canonical forms, are completely independent and may be studied in either order. Applications appear throughout the book; these applications are not central to the mathematical development, however, and may be excluded at the discretion of the instructor. We have attempted to make it possible for many of the important topics of linear algebra to be covered in a one-semester course.

This goal has led us to develop the major topics with fewer preliminaries than in a traditional approach. Our treatment of the Jordan canonical form, for instance, does not require any theory of polynomials. The resulting economy permits us to cover the core material of the book (omitting many of the optional sections and a detailed discussion of determinants) in a one-semester four-hour course for students who have had some prior exposure to linear algebra.

Chapter 1 of the book presents the basic theory of vector spaces: subspaces, linear combinations, linear dependence and independence, bases, and dimension. Linear transformations and their relationship to matrices are the subject of Chapter 2. We discuss the null space and range of a linear transformation, matrix representations of a linear transformation, isomorphisms, and change of coordinates.

The application of vector space theory and linear transformations to systems of linear equations is found in Chapter 3. We have chosen to defer this important subject so that it can be presented as a consequence of the preceding material. This approach allows the familiar topic of linear systems to illuminate the abstract theory and permits us to avoid messy matrix computations in the presentation of Chapters 1 and 2.

There are occasional examples in these chapters, however, where we solve systems of linear equations. Of course, these examples are not a part of the theoretical development. The necessary background is contained in Section 1. Determinants, the subject of Chapter 4, are of much less importance than they once were. In a short course (less than one year), we prefer to treat determinants lightly so that more time may be devoted to the material in Chapters 5 through 7.

Consequently we have presented two alternatives in Chapter 4: a complete development of the theory and a briefer summary of the facts about determinants that are needed in the remainder of the book, together with an optional section. Chapter 5 discusses eigenvalues, eigenvectors, and diagonalization.

One of the most important applications of this material occurs in computing matrix limits. We have therefore included an optional section on matrix limits and Markov chains in this chapter even though the most general statement of some of the results requires a knowledge of the Jordan canonical form. Inner product spaces are the subject of Chapter 6. The basic mathematical theory (inner products; the Gram-Schmidt process; orthogonal complements; the adjoint of an operator; normal, self-adjoint, orthogonal, and unitary operators; orthogonal projections; and the spectral theorem) is contained in the early sections of Chapter 6.

Canonical forms are treated in Chapter 7. Appendix E on polynomials is used primarily in Chapters 5 and 7. We prefer to cite particular results from the appendices as needed rather than to discuss the appendices independently.

The following diagram illustrates the dependencies among the various chapters. [Chapter dependency diagram: Chapter 1 leads to Chapter 2, which leads to Chapter 3 and then to the later chapters and sections.]

Our approach is to treat this material as a generalization of our characterization of normal and self-adjoint operators. The organization of the text is essentially the same as in the third edition. Further improvements include revised proofs of some theorems, additional examples, new exercises, and literally hundreds of minor editorial changes.

We are especially indebted to Jane M. Day (San Jose State University) for her extensive and detailed comments on the fourth edition manuscript. We encourage comments, which can be sent to us by e-mail or ordinary post. Our web site and e-mail addresses are listed below.

1 Vector Spaces

In this section the geometry of vectors is discussed. This geometry is derived from physical experiments that test the manner in which two vectors interact.

For example, a swimmer swimming upstream at the rate of 2 miles per hour against a current of 1 mile per hour does not progress at the rate of 3 miles per hour. For in this instance the motions of the swimmer and current oppose each other, and the rate of progress of the swimmer is only 1 mile per hour upstream. The magnitude of a velocity without regard for the direction of motion is called its speed.

This resultant vector is called the sum of the original vectors, and the rule for their combination is called the parallelogram law. See Figure 1. The sum of two vectors x and y that act at the same point P is the vector beginning at P that is represented by the diagonal of the parallelogram having x and y as adjacent sides.

The addition of vectors can be described algebraically with the use of analytic geometry. In the plane containing x and y, introduce a coordinate system with P at the origin. Let (a1, a2) denote the endpoint of x and (b1, b2) denote the endpoint of y. Then, as Figure 1. shows, the endpoint of x + y has coordinates (a1 + b1, a2 + b2). Henceforth, when a reference is made to the coordinates of the endpoint of a vector, the vector should be assumed to emanate from the origin.

Moreover, since a vector beginning at the origin is completely determined by its endpoint, we sometimes refer to the point x rather than the endpoint of the vector x if x is a vector emanating from the origin. Besides addition, there is a second natural operation on vectors, in which a vector is magnified or contracted. This operation, called scalar multiplication, consists of multiplying the vector by a real number. Arguments similar to the preceding ones show that these eight properties, as well as the geometric interpretations of vector addition and scalar multiplication, are true also for vectors acting in space rather than in a plane.
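As a concrete illustration of the coordinate description above, the following short Python sketch (not part of the text; the endpoints and the scalar are arbitrary) adds two plane vectors componentwise and forms a scalar multiple.

```python
# Coordinate-wise vector addition and scalar multiplication in the plane.
def add(x, y):
    """Endpoint of the sum x + y, given the endpoints of x and y."""
    return tuple(a + b for a, b in zip(x, y))

def scale(t, x):
    """Endpoint of the scalar multiple tx."""
    return tuple(t * a for a in x)

x = (3, 1)            # endpoint (a1, a2) of x
y = (1, 4)            # endpoint (b1, b2) of y
print(add(x, y))      # (4, 5): the endpoint of x + y is (a1 + b1, a2 + b2)
print(scale(2, x))    # (6, 2): the endpoint of 2x is (2a1, 2a2)
```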

These results can be used to write equations of lines and planes in space. Let O denote the origin of a coordinate system in space, and let u and v denote the vectors that begin at O and end at A and B, respectively.

Conversely, the endpoint of every vector of the form tw that begins at A lies on the line joining A and B. Now let A, B, and C denote any three noncollinear points in space. These points determine a unique plane, and its equation can be found by use of our previous observations about vectors. Let u and v denote vectors beginning at A and ending at B and C, respectively. The endpoint of su is the point of intersection of the line through A and B with the line through S.

A similar procedure locates the endpoint of tv.

Determine whether the vectors emanating from the origin and terminating at the following pairs of points are parallel. Find the equations of the lines through the following pairs of points in space. Find the equations of the planes containing the following points in space. Justify your answer. Prove that if the vector x emanates from the origin of the Euclidean plane and terminates at the point with coordinates (a1, a2), then the vector tx that emanates from the origin terminates at the point with coordinates (ta1, ta2).

Prove that the diagonals of a parallelogram bisect each other.

In the remainder of this section we introduce several important examples of vector spaces that are studied throughout this text. Observe that in describing a vector space, it is necessary to specify not only the vectors but also the operations of addition and scalar multiplication.

An object of the form (a1, a2, ..., an), where the entries a1, a2, ..., an are elements of F, is called an n-tuple with entries from F. Two n-tuples (a1, a2, ..., an) and (b1, b2, ..., bn) are called equal if ai = bi for i = 1, 2, ..., n. The set of all n-tuples with entries from F, with componentwise addition and scalar multiplication, is a vector space denoted by Fn. Thus R3 is a vector space over R. Similarly, C2 is a vector space over C.

Since a 1-tuple whose only entry is from F can be regarded as an element of F, we usually write F rather than F1 for the vector space of 1-tuples with entry from F. An m × n matrix with entries from F is a rectangular array having m rows and n columns, whose entry in row i and column j is written aij. The entries ai1, ai2, ..., ain compose the ith row of the matrix, and the entries a1j, a2j, ..., amj compose the jth column. The rows of the preceding matrix are regarded as vectors in Fn, and the columns are regarded as vectors in Fm. In addition, if the number of rows and columns of a matrix are equal, the matrix is called square.
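The entrywise operations on matrices, and the identification of rows and columns with vectors in Fn and Fm, can be seen in a small NumPy sketch (illustrative only; the 2 × 3 matrices and the scalar below are made up).

```python
import numpy as np

# A 2 x 3 matrix: m = 2 rows, n = 3 columns, entries from R.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0]])

print(A + B)      # matrix addition is performed entry by entry
print(3 * A)      # scalar multiplication multiplies every entry by the scalar

print(A[0, :])    # the first row, regarded as a vector in R^3 (n = 3)
print(A[:, 1])    # the second column, regarded as a vector in R^2 (m = 2)
```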

Note that these are the familiar operations of addition and scalar multiplication for functions used in algebra and calculus. With these operations V is a vector space. Corollary 1. The vector 0 described in (VS 3) is unique. Corollary 2. The vector y described in (VS 4) is unique. The next result contains some of the elementary properties of scalar multiplication. The proof of (c) is similar to the proof of (a).

Label the following statements as true or false. Perform the indicated operations. Wildlife Management, 25, reports the following number of trout having crossed beaver dams in Sagehen Creek. At the end of May, a furniture store had the following inventory. To prepare for its June sale, the store decided to double its inventory on each of the items listed in the preceding table. How many suites were sold during the June sale? Prove Corollaries 1 and 2 of Theorem 1.

Prove that V is a vector space over F. V is called the zero vector space. Let V denote the set of ordered pairs of real numbers. Is V a vector space over R with these operations? Is V a vector space over F with these operations?

Prove that, with these operations, V is a vector space over R. See Appendix C. The appropriate notion of substructure for vector spaces is introduced in this section.

Note that V itself and the set consisting of only the zero vector are subspaces of any vector space V; the latter is called the zero subspace of V. Fortunately it is not necessary to verify all of the vector space properties to prove that a subset is a subspace.

Thus a subset W of a vector space V is a subspace of V if and only if the following four properties hold. (1) W is closed under addition. (2) W is closed under scalar multiplication. (3) W has a zero vector. (4) Each vector in W has an additive inverse in W.

The next theorem shows that the zero vector of W must be the same as the zero vector of V and that property (4) is redundant. Let V be a vector space and W a subset of V. So condition (a) holds. Conversely, if conditions (a), (b), and (c) hold, the discussion preceding this theorem shows that W is a subspace of V if the additive inverse of each vector in W lies in W. Hence W is a subspace of V.

The preceding theorem provides a simple method for determining whether or not a given subset of a vector space is a subspace. Normally, it is this result that is used to prove that a subset is, in fact, a subspace. For example, a matrix A is called symmetric if it equals its transpose; clearly, a symmetric matrix must be square. Let W denote the set of all symmetric matrices of a given size with entries from F. The zero matrix is equal to its transpose and hence belongs to W.
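A quick numerical check of the closure properties discussed here; this is only a sketch (the two symmetric matrices and the scalar are arbitrary), not part of the text.

```python
import numpy as np

def is_symmetric(M):
    """A matrix is symmetric when it equals its transpose."""
    return np.array_equal(M, M.T)

A = np.array([[1, 2], [2, 5]])
B = np.array([[0, 7], [7, 3]])

print(is_symmetric(A), is_symmetric(B))  # True True
print(is_symmetric(A + B))               # True: a sum of symmetric matrices is symmetric
print(is_symmetric(4 * A))               # True: a scalar multiple of a symmetric matrix is symmetric
print(is_symmetric(np.zeros((2, 2))))    # True: the zero matrix belongs to W
```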

See Exercise 3. Using this fact, we show that the set of symmetric matrices is closed under addition and scalar multiplication. The examples that follow provide further illustrations of the concept of a subspace.

Example 1. Let n be a nonnegative integer, and let Pn(F) consist of all polynomials in P(F) having degree less than or equal to n.

Moreover, the sum of two polynomials with degrees less than or equal to n is another polynomial of degree less than or equal to n, and the product of a scalar and a polynomial of degree less than or equal to n is a polynomial of degree less than or equal to n. So Pn(F) is closed under addition and scalar multiplication.

It therefore follows from the preceding theorem that Pn(F) is a subspace of P(F). Moreover, the sum of two continuous functions is continuous, and the product of a real number and a continuous function is continuous, so the continuous real-valued functions form a subspace as well. Clearly the zero matrix is a diagonal matrix because all of its entries are 0. Any intersection of subspaces of a vector space V is a subspace of V.

Let C be a collection of subspaces of V, and let W denote the intersection of the subspaces in C. If x and y are vectors in W, then x and y are contained in each subspace in C. Having shown that the intersection of subspaces of a vector space V is a subspace of V, it is natural to consider whether or not the union of subspaces of V is a subspace of V. It is easily seen that the union of subspaces must contain the zero vector and be closed under scalar multiplication, but in general the union of subspaces of V need not be closed under addition.
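The standard counterexample (not one of the text's numbered examples) can be checked directly: in R2 the x-axis and the y-axis are subspaces, but their union is not closed under addition.

```python
# W1 = the x-axis and W2 = the y-axis in R^2, described by membership tests.
def in_W1(v):
    return v[1] == 0

def in_W2(v):
    return v[0] == 0

def in_union(v):
    return in_W1(v) or in_W2(v)

x = (1, 0)                       # x lies in W1
y = (0, 1)                       # y lies in W2
s = (x[0] + y[0], x[1] + y[1])   # their sum is (1, 1)

print(in_union(x), in_union(y))  # True True
print(in_union(s))               # False: the union is not closed under addition
```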

In fact, it can be readily shown that the union of two subspaces of V is a subspace of V if and only if one of the subspaces contains the other (see the exercises). There is, however, a natural way to combine two subspaces W1 and W2 to obtain a subspace that contains both W1 and W2.

This idea is explored in the exercises. Determine the transpose of each of the matrices that follow. In addition, if the matrix is square, compute its trace. Prove that diagonal matrices are symmetric matrices. Justify your answers. Let W1, W3, and W4 be as in Exercise 8. Let W1 and W2 be subspaces of a vector space V. Clearly, a skew-symmetric matrix is square. Compare this exercise with Exercise . An important special case occurs when A is the origin.

This is proved as Theorem 1. Let V be a vector space and S a nonempty subset of V. A vector v in V is called a linear combination of vectors of S if there exist a finite number of vectors u1, u2, ..., un in S and scalars a1, a2, ..., an in F such that v = a1u1 + a2u2 + ... + anun. In this case we also say that v is a linear combination of u1, u2, ..., un. Thus the zero vector is a linear combination of any nonempty subset of V. The question of whether a given vector is a linear combination of certain others often reduces to the problem of solving a system of linear equations.

In Chapter 3, we discuss a general method for using matrices to solve any system of linear equations. To solve system (1), we replace it by another system with the same solutions, but which is easier to solve.

The procedure to be used expresses some of the unknowns in terms of others by eliminating certain unknowns from all the equations except one.

This need not happen in general. The procedure just illustrated uses three types of operations to simplify the original system: (1) interchanging the order of any two equations in the system; (2) multiplying any equation in the system by a nonzero constant; (3) adding a constant multiple of any equation to another equation in the system. Note that we employed these operations to obtain a system of equations with certain convenient properties. Once such a system has been obtained, it is easy to solve for some of the unknowns in terms of the others, as in the preceding example.
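Here is a small sketch of the three operations in action on an invented 3 × 3 system (this is not one of the book's examples); each row operation below is one of the operations listed above.

```python
import numpy as np

# Augmented matrix [A | b] of the invented system
#    x + 2y +  z = 8
#   2x +  y -  z = 1
#   -x +  y + 3z = 10
M = np.array([[ 1.0, 2.0,  1.0,  8.0],
              [ 2.0, 1.0, -1.0,  1.0],
              [-1.0, 1.0,  3.0, 10.0]])

M[1] = M[1] - 2 * M[0]   # add (-2) times equation 1 to equation 2
M[2] = M[2] + M[0]       # add 1 times equation 1 to equation 3
M[1] = M[1] / -3.0       # multiply equation 2 by the nonzero constant -1/3
M[2] = M[2] - 3 * M[1]   # add (-3) times equation 2 to equation 3

print(M)                                    # the simplified (triangular) system
print(np.linalg.solve(M[:, :3], M[:, 3]))   # its solution: x = 1, y = 2, z = 3
```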

See Example 2. We return to the study of systems of linear equations in Chapter 3. We discuss there the theoretical basis for this method of solving systems of linear equations and further simplify the procedure by use of matrices.

We now name such a set of linear combinations. Let S be a nonempty subset of a vector space V. The span of S, denoted span(S), is the set consisting of all linear combinations of the vectors in S. In this case, the span of the set is a subspace of R3. This fact is true in general. The span of any subset S of a vector space V is a subspace of V. Moreover, any subspace of V that contains S must also contain the span of S.

Then there exist vectors u1, u2, ... in S and corresponding scalars expressing the vector as a linear combination. Thus span(S) is a subspace of V. Now let W denote any subspace of V that contains S. In this case, we also say that the vectors of S generate or span V. So any linear combination of these matrices has equal diagonal entries. It is natural to seek a subset of W that generates W and is as small as possible. In the next section we explore the circumstances under which a vector can be removed from a generating set to obtain a smaller generating set.
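In coordinates, deciding whether a vector lies in the span of a finite set amounts to asking whether a linear system has a solution. A minimal sketch with NumPy (the vectors here are invented for illustration):

```python
import numpy as np

def in_span(v, S, tol=1e-10):
    """True if v is a linear combination of the vectors in S."""
    A = np.column_stack(S)                          # vectors of S as columns
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)  # best candidate coefficients
    return np.allclose(A @ coeffs, v, atol=tol)     # does an exact combination exist?

S = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
print(in_span(np.array([2.0, 3.0, 1.0]), S))   # True:  2(1,1,0) + 1(0,1,1)
print(in_span(np.array([1.0, 0.0, 1.0]), S))   # False: no combination works
```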

Solve the following systems of linear equations by the method introduced in this section. In each part, determine whether the given vector is in the span of S. Show that the vectors (1, 1, 0), (1, 0, 1), and (0, 1, 1) generate F3. In Fn, let ej denote the vector whose jth coordinate is 1 and whose other coordinates are 0. Interpret this result geometrically in R3.

Let S1 and S2 be subsets of a vector space V. Let V be a vector space and S a subset of V with the property that whenever v1, v2, ..., vn are in S and a1v1 + a2v2 + ... + anvn = 0, then a1 = a2 = ... = an = 0. Prove that every vector in the span of S can be uniquely written as a linear combination of vectors of S. Let W be a subspace of a vector space V. Indeed, the smaller that S is, the fewer computations that are required to represent vectors in W. The search for this subset is related to the question of whether or not some vector in S is a linear combination of the other vectors in S.

The reader should verify that no such solution exists. This does not, however, answer our question of whether some vector in S is a linear combination of the other vectors in S.

A subset S of a vector space V is called linearly dependent if there exist a finite number of distinct vectors u1, u2, ..., un in S and scalars a1, a2, ..., an, not all zero, such that a1u1 + a2u2 + ... + anun = 0. In this case we also say that the vectors of S are linearly dependent. For any vectors u1, u2, ..., un, we have a1u1 + a2u2 + ... + anun = 0 if a1 = a2 = ... = an = 0; we call this the trivial representation of 0 as a linear combination of u1, u2, ..., un. Thus, for a set to be linearly dependent, there must exist a nontrivial representation of 0 as a linear combination of vectors in the set. We show that S is linearly dependent and then express one of the vectors in S as a linear combination of the other vectors in S.

A subset S of a vector space that is not linearly dependent is called linearly independent. As before, we also say that the vectors of S are linearly independent.

The following facts about linearly independent sets are true in any vector space. The empty set is linearly independent, for linearly dependent sets must be nonempty. A set consisting of a single nonzero vector is linearly independent. A set is linearly independent if and only if the only representations of 0 as linear combinations of its vectors are trivial representations.

This technique is illustrated in the examples that follow. Equating the corresponding coordinates of the vectors on the left and the right sides of this equation, we obtain the following system of linear equations.
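Numerically, the homogeneous system that arises in such computations has only the trivial solution exactly when the matrix whose columns are the given vectors has rank equal to the number of vectors. A small sketch (with made-up vectors, not those of the text's example):

```python
import numpy as np

def independent(vectors):
    """True when the only representation of 0 is the trivial one."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([np.array([1, 0, 0]),
                   np.array([1, 1, 0]),
                   np.array([1, 1, 1])]))   # True
print(independent([np.array([1, 2, 3]),
                   np.array([2, 4, 6])]))   # False: the second is twice the first
```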

Let V be a vector space, and let S1 and S2 be subsets of V with S1 contained in S2. If S1 is linearly dependent, then S2 is linearly dependent. If S2 is linearly independent, then S1 is linearly independent. Earlier in this section, we remarked that the issue of whether S is the smallest generating set for its span is related to the question of whether some vector in S is a linear combination of the other vectors in S. Thus the issue of whether S is the smallest generating set for its span is related to the question of whether S is linearly dependent.

This equation implies that u3 (or alternatively, u1 or u2) is a linear combination of the other vectors in S. More generally, suppose that S is any linearly dependent set containing two or more vectors. Then some vector of S can be written as a linear combination of the other vectors of S, so the span of S is generated by a proper subset of S. It follows that if no proper subset of S generates the span of S, then S must be linearly independent.

Another way to view the preceding statement is given in Theorem 1. Let S be a linearly independent subset of a vector space V, and let v be a vector in V that is not in S. Since v is a linear combination of u2, ... Then there exist vectors v1, v2, ... Linearly independent generating sets are investigated in detail in Section 1. Recall from Example 3 in Section 1.

Find a linearly independent set that generates this subspace. Give an example of three linearly dependent vectors in R3 such that none of the three is a multiple of another. How many vectors are there in span(S)? Prove Theorem 1. A linearly independent generating set for W possesses a very useful property—every vector in W can be expressed in one and only one way as a linear combination of the vectors in the set.

This property is proved below in Theorem 1. It is this property that makes linearly independent generating sets the building blocks of vector spaces.

We call this basis the standard basis for Pn(F). The proof of the converse is an exercise. Thus v determines a unique n-tuple of scalars a1, a2, ..., an. This fact suggests that V is like the vector space Fn, where n is the number of vectors in the basis for V. We see in Section 2. Otherwise S contains a nonzero vector u1. Continue, if possible, choosing vectors u2, ... By Theorem 1. This method is illustrated in the next example. It can be shown that S generates R3.

We can select a basis for R3 that is a subset of S by the technique used in proving Theorem 1. (Replacement Theorem) Let V be a vector space that is generated by a set G containing exactly n vectors, and let L be a linearly independent subset of V containing exactly m vectors. Then m is at most n, and there exists a subset H of G containing exactly n - m vectors such that the union of L and H generates V.

The proof is by mathematical induction on m. By the corollary to Theorem 1. Thus there exist scalars a1, a2, ... Moreover, some bi, say b1, is nonzero, for otherwise we obtain the same contradiction. This completes the induction. Then every basis for V contains the same number of vectors. The unique number of vectors in each basis for V is called the dimension of V. The following results are consequences of Examples 1 through 4. This set is, in fact, a basis for P(F). In Section 1. Let V be a vector space with dimension n. Corollary 1 implies that H contains exactly n vectors.
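The technique of extracting a basis from a generating set, used above, can be mimicked numerically: walk through the generating set and keep a vector exactly when it increases the rank of the collection kept so far. The vectors below are invented for illustration.

```python
import numpy as np

def select_basis(generators):
    """Choose, in order, a linearly independent subset with the same span."""
    basis = []
    for v in generators:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis.append(v)   # v is not in the span of the vectors kept so far
    return basis

S = [np.array([1, 0, 0]), np.array([2, 0, 0]),   # the second is a multiple of the first
     np.array([0, 1, 0]), np.array([1, 1, 0]),   # the fourth is the sum of the first and third
     np.array([0, 0, 1])]
for b in select_basis(S):
    print(b)   # keeps (1,0,0), (0,1,0), (0,0,1): a basis for R^3
```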

Since a subset of G contains n vectors, G must contain at least n vectors. Since L is also linearly independent, L is a basis for V. Example 13. It follows from Example 4 of Section 1.

It follows from Example 4 of Section 1. This procedure also can be used to extend a linearly independent set to a basis, as (c) of Corollary 2 asserts is possible. For this reason, we summarize here the main results of this section in order to put them into better perspective. A basis for a vector space V is a linearly independent subset of V that generates V. Thus if the dimension of V is n, every basis for V contains exactly n vectors.

Moreover, every linearly independent subset of V contains no more than n vectors and can be extended to a basis for V by including appropriately chosen vectors. Also, each generating set for V contains at least n vectors and can be reduced to a basis for V by excluding appropriately chosen vectors. The Venn diagram in Figure 1. illustrates the relationship among these sets. [Figure: Venn diagram showing the bases as the intersection of the linearly independent sets and the generating sets.] Continue choosing vectors x1, x2, ...

Let S be a basis for W. Because S is a linearly independent subset of V, Corollary 2 of the replacement theorem guarantees that S can be extended to a basis for V.

Since R2 has dimension 2, subspaces of R2 can be of dimensions 0, 1, or 2 only. Any subspace of R2 having dimension 1 consists of all scalar multiples of some nonzero vector in R2 (Exercise 11 of Section 1.). Similarly, the subspaces of R3 must have dimensions 0, 1, 2, or 3. Interpreting these possibilities geometrically, we see that a subspace of dimension zero must be the origin of Euclidean 3-space, a subspace of dimension 1 is a line through the origin, a subspace of dimension 2 is a plane through the origin, and a subspace of dimension 3 is Euclidean 3-space itself.

The Lagrange Interpolation Formula. Corollary 2 of the replacement theorem can be applied to obtain a useful formula. Given distinct scalars c0, c1, ..., cn, the polynomials f0(x), f1(x), ..., fn(x), where fi(x) is the product of the factors (x - ck)/(ci - ck) taken over all k different from i, are the associated Lagrange polynomials. Note that each fi(x) is a polynomial of degree n and hence is in Pn(F). Since fi(cj) = 1 when j = i and fi(cj) = 0 when j is different from i, every polynomial g in Pn(F) can be written as g = g(c0)f0 + g(c1)f1 + ... + g(cn)fn. This representation is called the Lagrange interpolation formula. A vector space cannot have more than one basis. Then S1 cannot contain more vectors than S2. Determine which of the following sets are bases for R3.

Determine which of the following sets are bases for P2(R). Let W denote the subspace of R5 consisting of all the vectors having coordinates that sum to zero. Find the unique representation of an arbitrary vector (a1, a2, a3, a4) in F4 as a linear combination of u1, u2, u3, and u4. In each part, use the Lagrange interpolation formula to construct the polynomial of smallest degree whose graph contains the following points.
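A sketch of the Lagrange interpolation formula described above (the interpolation points used here are invented): each fi takes the value 1 at ci and 0 at the other cj, so the combination with coefficients yi passes through every given point.

```python
def lagrange_interpolate(points, x):
    """Value at x of the lowest-degree polynomial through the given (c_i, y_i) points."""
    total = 0.0
    for i, (ci, yi) in enumerate(points):
        term = yi
        for j, (cj, _) in enumerate(points):
            if j != i:
                term *= (x - cj) / (ci - cj)   # the Lagrange polynomial f_i evaluated at x
        total += term
    return total

pts = [(0, 1), (1, 3), (2, 7)]        # the interpolating polynomial is x^2 + x + 1
for c, y in pts:
    assert lagrange_interpolate(pts, c) == y   # the graph contains every given point
print(lagrange_interpolate(pts, 3))   # 13.0 = 3^2 + 3 + 1
```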

Let u and v be distinct vectors of a vector space V. Let u, v, and w be distinct vectors of a vector space V. Find a basis for this subspace.

What are the dimensions of W1 and W2? Find a basis for W. What is the dimension of W? Find a basis for the vector space in Example 5 of Section 1. Complete the proof of Theorem 1. Let f(x) be a polynomial of degree n in Pn(R).

If V and W are vector spaces over F of dimensions m and n, determine the dimension of Z. See Examples 11 and . Our principal goal here is to prove that every vector space has a basis. There is no obvious way to construct a basis for this space, and yet it follows from the results of this section that such a basis does exist. Instead, a more general result called the maximal principle is needed. Before stating this principle, we need to introduce some terminology. Let F be a family of sets. A member M of F is called maximal with respect to set inclusion if M is contained in no member of F other than M itself.

This family F is called the power set of S. The set S is easily seen to be a maximal element of F. Then S and T are both maximal elements of F. Then F has no maximal element. Maximal Principle. Let F be a family of sets. If, for each chain C contained in F, there exists a member of F that contains each member of C, then F contains a maximal member. Let S be a subset of a vector space V. A maximal linearly independent subset of S is a subset B of S satisfying both of the following conditions: (a) B is linearly independent; (b) the only linearly independent subset of S that contains B is B itself. For a treatment of set theory using the Maximal Principle, see John L. In this case, however, any subset of S consisting of two polynomials is easily shown to be a maximal linearly independent subset of S.

Thus maximal linearly independent subsets of a set need not be unique. Our next result shows that the converse of this statement is also true. Let V be a vector space and S a subset that generates V. Since Theorem 1. Thus a subset of a vector space is a basis if and only if it is a maximal linearly independent subset of the vector space. Therefore we can accomplish our goal of proving that every vector space has a basis by showing that every vector space contains a maximal linearly independent subset.

This result follows immediately from the next theorem. Let S be a linearly independent subset of a vector space V. There exists a maximal linearly independent subset of V that contains S. Let F denote the family of all linearly independent subsets of V that contain S.

In order to show that F contains a maximal element, we must show that if C is a chain in F, then there exists a member U of F that contains each member of C. We claim that U , the union of the members of C, is the desired set. Thus we need only prove that U is linearly independent. But since C is a chain, one of these sets, say Ak , contains all the others. It follows that U is linearly independent. The maximal principle implies that F has a maximal element. This element is easily seen to be a maximal linearly independent subset of V that contains S.

Every vector space has a basis. It can be shown, analogously to Corollary 1 of the replacement theorem, that every basis for a given vector space has the same cardinality. Sets have the same cardinality if there is a one-to-one and onto mapping between them. See, for example, N. Jacobson, Lectures in Abstract Algebra (Van Nostrand Company, New York). The exercises extend other results from Section 1. See Exercise 21 in Section 1.

Prove that any basis for W is a subset of a basis for V. Prove the following generalization of Theorem 1. Hint: Apply the maximal principle to the family of all linearly independent subsets of S2 that contain S1, and proceed as in the proof of Theorem 1.

Prove the following generalization of the replacement theorem.

In Chapter 2 we turn to functions from one vector space to another that preserve the vector space operations. These special functions are called linear transformations, and they abound in both pure and applied mathematics.

Later we use these transformations to study rigid motions in Rn (Section 6.). In the remaining chapters, we see further examples of linear transformations occurring in both the physical and the social sciences. Many of these transformations are studied in more detail in later sections.

Recall that a function T with domain V and codomain W is denoted by T: V → W. See Appendix B. Let V and W be vector spaces over F.

See Exercises 38 and . We often simply call T linear. See Exercise 7. T is linear if and only if, for x1, x2, ..., xn in V and a1, a2, ..., an in F, T(a1x1 + a2x2 + ... + anxn) = a1T(x1) + a2T(x2) + ... + anT(xn). So T is linear. The main reason for this is that most of the important geometrical transformations are linear.

We leave the proofs of linearity to the reader. See Figure 2. T is called the projection on the x-axis. Then T is a linear transformation by Exercise 3 of Section 1.
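A numerical spot-check of linearity for two geometric maps of the kind mentioned here, the projection on the x-axis and a rotation of the plane (the test vectors and the scalar are arbitrary); a map T is linear exactly when T(ax + y) = aT(x) + T(y) for all x, y and scalars a.

```python
import numpy as np

def projection_x(v):
    """Projection of the plane onto the x-axis."""
    return np.array([v[0], 0.0])

def rotation(theta):
    """Rotation of the plane through the angle theta about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return lambda v: R @ v

def looks_linear(T, x, y, a):
    return np.allclose(T(a * x + y), a * T(x) + T(y))

x, y, a = np.array([2.0, -1.0]), np.array([0.5, 3.0]), 4.0
print(looks_linear(projection_x, x, y, a))         # True
print(looks_linear(rotation(np.pi / 6), x, y, a))  # True
```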

Two simple but important examples are the identity transformation IV: V → V, defined by IV(x) = x for all x in V, and the zero transformation T0: V → W, defined by T0(x) = 0 for all x in V. It is clear that both of these transformations are linear. We often write I instead of IV. We now turn our attention to two very important sets associated with linear transformations: the range and null space.

Let T: V → W be linear. The null space (or kernel) of T, denoted N(T), is the set of all vectors x in V such that T(x) = 0; the range (or image) of T, denoted R(T), is the subset of W consisting of all images T(x) of vectors x in V. The determination of these sets allows us to examine more closely the intrinsic properties of a linear transformation. The next result shows that this is true in general. Theorem 2. To clarify the notation, we use the symbols 0V and 0W to denote the zero vectors of V and W, respectively.

With this accomplished, a basis for the range is easy to discover using the technique of Example 6 of Section 1.

It should be noted that Theorem 2. The next example illustrates the usefulness of Theorem 2. The null space and range are so important that we attach special names to their respective dimensions: if N(T) and R(T) are finite-dimensional, the nullity of T and the rank of T are defined to be the dimensions of N(T) and R(T), respectively. In other words, the more vectors that are carried into 0, the smaller the range. The same heuristic reasoning tells us that the larger the rank, the smaller the nullity. This balance between rank and nullity is made precise in the next theorem, appropriately called the dimension theorem.
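The dimension theorem asserts that if V is finite-dimensional and T is linear on V, then nullity(T) + rank(T) = dim(V). For a transformation given by multiplication by a matrix, this is easy to check numerically; the matrix below is an arbitrary illustration, not an example from the text.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],    # a multiple of the first row, so the rank drops
              [0.0, 1.0, 1.0, 0.0]])

n = A.shape[1]                        # dimension of the domain R^4
rank = np.linalg.matrix_rank(A)       # dimension of the range
nullity = n - rank                    # dimension of the null space
print(rank, nullity, rank + nullity == n)   # 2 2 True
```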

First we prove that S generates R(T). Using Theorem 2. Now we prove that S is linearly independent. Hence S is linearly independent. Interestingly, for a linear transformation, both of these concepts are intimately connected to the rank and nullity of the transformation. This is demonstrated in the next two theorems. This means that T is one-to-one.

The reader should observe that Theorem 2. Surprisingly, the conditions of one-to-one and onto are equivalent in an important special case: if V and W are vector spaces of equal finite dimension and T: V → W is linear, then the following are equivalent: (a) T is one-to-one; (b) T is onto; (c) rank(T) = dim(V). Now, with the use of Theorem 2. See Exercises 15, 16, and . The linearity of T in Theorems 2. The next two examples make use of the preceding theorems in determining whether a given linear transformation is one-to-one or onto.

We conclude from Theorem 2. Hence Theorem 2. Example 13 illustrates the use of this result. Clearly T is linear and one-to-one.

This technique is exploited more fully later. One of the most important properties of a linear transformation is that it is completely determined by its action on a basis. This result, which follows from the next theorem and corollary, is used frequently throughout the book. Then compute the nullity and rank of T, and verify the dimension theorem. Finally, use the appropriate theorems in this section to determine whether T is one-to-one or onto.

Recall Example 4, Section 1. Prove properties 1, 2, 3, and 4 on page Prove that the transformations in Examples 2 and 3 are linear. For each of the following parts, state why T is not linear.

What is T(2, 3)? Is T one-to-one? What is T(8, 11)? Prove that S is linearly independent if and only if T(S) is linearly independent. Recall that T is linear. Prove that T is onto, but not one-to-one. Let V and W be vector spaces with subspaces V1 and W1, respectively. Let V be the vector space of sequences described in Example 5 of Section 1.

T and U are called the left shift and right shift operators on V, respectively. Describe geometrically the possibilities for the null space of T. Hint: Use Exercise . Describe T if W1 is the zero subspace. Warning: Do not assume that W is T-invariant or that T is a projection unless explicitly stated.

If W is T-invariant, prove that TW is linear. Suppose that W is T-invariant. Prove Theorem 2. Prove the following generalization of Theorem 2. By Exercise 34, there exists a linear transformation. Let V be a vector space and W be a subspace of V. Compare the method of solving (b) with the method of deriving the same result as outlined in Exercise 35 of Section 1.

In fact, we develop a one-to-one correspondence between matrices and linear transformations that allows us to utilize properties of one to study properties of the other. Now that we have the concept of ordered basis, we can identify abstract vectors in an n-dimensional vector space with n-tuples. We study this transformation in Section 2.

To make this more explicit, we need some preliminary discussion about the addition and scalar multiplication of linear transformations. We are fortunate, however, to have the result that both sums and scalar multiples of linear transformations are also linear. In Section 2. So (a) is proved, and the proof of (b) is similar. L(V, W) is a vector space. Complete the proof of part (b) of Theorem 2. Prove part (b) of Theorem 2. Prove that T is linear. Recall by Exercise 38 of Section 2. By Theorem 2.

Suppose that W is a T-invariant subspace of V (see the exercises of Section 2.). Let V and W be vector spaces, and let S be a subset of V. Prove the following statements. The question now arises as to how the matrix representation of a composite of linear transformations is related to the matrix representation of each of the associated linear transformations.

Let V be a vector space. A more general result holds for linear transformations that have domains unequal to their codomains. See Exercise 8. Therefore (AB)t = BtAt; the transpose of a product is the product of the transposes in the opposite order. We illustrate Theorem 2. When the context is clear, we sometimes omit the subscript n from In. See Exercise 5.
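A quick numerical check of two of the matrix identities discussed in this chapter, (AB)t = BtAt and A(BC) = (AB)C; the matrices are randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

print(np.allclose((A @ B).T, B.T @ A.T))      # True: the transpose of a product reverses the order
print(np.allclose(A @ (B @ C), (A @ B) @ C))  # True: matrix multiplication is associative
```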

To see why, assume that the cancellation law is valid. The proof of (b) is left as an exercise. See Exercise 6. It follows (see Exercise 14) from Theorem 2. It utilizes both the matrix representation of a linear transformation and matrix multiplication in order to evaluate the transformation at any given vector.

Identifying column vectors as matrices and using Theorem 2. This transformation is probably the most important tool for transferring properties about transformations to analogous properties about matrices and vice versa. For example, we use it to prove that matrix multiplication is associative. For an m × n matrix A with entries from F, the mapping LA defined by LA(x) = Ax, the matrix product of A and the column vector x in Fn, is a transformation from Fn to Fm; we call LA a left-multiplication transformation. These properties are all quite natural and so are easy to remember. The fact that LA is linear follows immediately from Theorem 2.

The proof of the converse is trivial. The uniqueness of C follows from (b). We now use left-multiplication transformations to establish the associativity of matrix multiplication. Using (e) of Theorem 2. So from (b) of Theorem 2. The proof above, however, provides a prototype of many of the arguments that utilize the relationships between linear transformations and matrices.

Applications. A large and varied collection of interesting applications arises in connection with special matrices called incidence matrices.

An incidence matrix is a square matrix in which all the entries are either zero or one and, for convenience, all the diagonal entries are zero.

If we have a relationship on a set of n objects that we denote by 1, 2, ..., n, then we define the associated incidence matrix A by setting Aij = 1 if i is related to j, and Aij = 0 otherwise. To make things concrete, suppose that we have four people, each of whom owns a communication device. We obtain an interesting interpretation of the entries of A2.
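The interpretation in question is that the (i, j) entry of A2 counts the two-step connections from i to j, that is, the number of third parties k with Aik = 1 and Akj = 1. The particular 4 × 4 incidence matrix below is an invented illustration, not the one from the text.

```python
import numpy as np

# A[i, j] = 1 means person i can send a message to person j (diagonal entries are 0).
A = np.array([[0, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0]])

A2 = A @ A
print(A2)
# (A2)[i, j] is the number of people k through whom i can relay a message to j in two
# steps, since each such k contributes A[i, k] * A[k, j] = 1 to the (i, j) entry.
print(A2[0, 2])   # person 1 can reach person 3 in two steps through exactly this many relays
```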