
Introduction to Hilbert Spaces



  1. Introduction to Hilbert Spaces. Dr. Md. Asaduzzaman, Professor, Department of Mathematics, University of Rajshahi. Email: md_asaduzzaman@hotmail.com

  2. Prerequisite concepts: Fields, Vector spaces, Subspaces, Linearly dependent and independent sets of vectors, Basis and dimension of vector spaces, Linear maps, Linear functionals, Metric spaces, Interior points, Limit points, Closure of a subset of a metric space, Open and Closed subsets of metric spaces, Bounded sets, Compact sets, Dense sets, Convergent and Cauchy sequences, Continuous mappings, Homeomorphisms.

  3. Inner Product Spaces: Let X be a vector space over the field K (where K is the real or complex field). A function ⟨·,·⟩: X×X → K is said to be an inner product on X if it satisfies the following conditions for all x, y, z ∈ X and all scalars α, β ∈ K: (i) ⟨x,x⟩ ≥ 0, and ⟨x,x⟩ = 0 iff x = 0; (ii) ⟨x,y⟩ equals the complex conjugate of ⟨y,x⟩ (conjugate symmetry); (iii) ⟨αx+βy, z⟩ = α⟨x,z⟩ + β⟨y,z⟩ (linearity in the first argument). A vector space over the real or complex field is called a linear space, and a linear space with an inner product is called an inner product space or a pre-Hilbert space.

  4. How to define an inner product on a finite dimensional linear space: Let X be an n-dimensional linear space over the complex field. Let S = {e1, e2, …, en} be a basis of X and let A = (aij) be a positive definite Hermitian matrix of order n. Define ⟨·,·⟩ on S×S using the entries of A, by ⟨ei, ej⟩ = aji, and extend this function to X×X linearly in the first argument and conjugate linearly in the second argument; then ⟨·,·⟩ defines an inner product on X.

  5. Thus if x, y ∈ X, then there exist scalars ξi, ηi (1 ≤ i ≤ n) such that x = ξ1e1 + … + ξnen and y = η1e1 + … + ηnen. Then ⟨x,y⟩ = η*Aξ, where ξ denotes the column vector [ξ1, ξ2, …, ξn]t and η* is the conjugate transpose of the column vector [η1, η2, …, ηn]t. In particular, when we consider the real field, the positive definite Hermitian matrix A is replaced by a positive definite symmetric matrix A and η* is replaced by ηt, the row vector [η1, η2, …, ηn]. Thus, in this case, ⟨x,y⟩ = ηtAξ.
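
As a numerical illustration (a minimal sketch assuming NumPy; the matrix A below is an arbitrary positive definite Hermitian choice, not one taken from the slides), the formula ⟨x,y⟩ = η*Aξ can be checked to be conjugate symmetric and positive:

```python
import numpy as np

# Hypothetical positive definite Hermitian matrix A (illustrative choice only).
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

def inner(x, y, A=A):
    """Inner product <x, y> = y* A x (conjugate linear in the second argument)."""
    return np.conj(y) @ A @ x

x = np.array([1.0 + 2.0j, -1.0j])
y = np.array([0.5, 2.0 - 1.0j])

# Conjugate symmetry: <x, y> equals the conjugate of <y, x>.
print(np.isclose(inner(x, y), np.conj(inner(y, x))))               # True
# Positivity: <x, x> is real and positive for x != 0.
print(inner(x, x).real > 0, np.isclose(inner(x, x).imag, 0.0))     # True True
```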

  6. Remark: The inner product depends on (i) the choice of the basis {ei}, (ii) the arrangement of the basis, and (iii) the choice of the matrix A. Example: Let X = ℝn, let ei be the i-th column vector of the identity matrix In (so {e1, …, en} is the standard basis of ℝn), and let A be the diagonal matrix with positive diagonal elements d1, d2, …, dn, i.e., A = diag(d1, d2, …, dn). Then ⟨x,y⟩ = d1ξ1η1 + d2ξ2η2 + … + dnξnηn. In particular, if A = In then we have the usual inner product ⟨x,y⟩ = ξ1η1 + ξ2η2 + … + ξnηn.

  7. Example: Let X = ℝ2 and fix a basis e1, e2 of ℝ2 together with a positive definite symmetric matrix A of order 2. For x, y ∈ X, let ξ and η be the coordinate vectors of x and y with respect to the basis e1, e2 respectively. Then ⟨x,y⟩ = ηtAξ defines an inner product in X.

  8. Normed Linear Spaces. Definition: Let X be a linear space over the field K. Then a function ║·║: X → ℝ is called a norm on X if it satisfies the following properties for all x, y ∈ X and all scalars α: (i) ║x║ ≥ 0; (ii) ║x║ = 0 iff x = 0; (iii) ║αx║ = |α| ║x║; (iv) ║x+y║ ≤ ║x║ + ║y║. A linear space with a norm is called a normed linear space.

  9. Metric Induced by a Norm. For any normed linear space we have the following theorem: Theorem: Suppose X is a normed linear space. Then for any x, y, z ∈ X we have ║x-y║ ≤ ║x-z║ + ║y-z║. Thus in a normed linear space X, if we define d(x,y) = ║x-y║, then d satisfies all the conditions of a metric. This metric is called the metric induced by the norm. It is clear that different norms on the same linear space induce different metrics.

  10. An inner product defines a norm and a norm defines a metric on a linear space. An inner product ⟨·,·⟩ on a linear space X defines a norm on X given by ║x║² = ⟨x,x⟩. A norm ║·║ on a linear space X defines a metric on X given by d(x,y) = ║x-y║. An inner product space is therefore a metric space under the metric induced by its inner product. But not every norm on a linear space can be obtained from an inner product, and not every metric on a linear space can be obtained from a norm.

  11. The metric d induced by a norm ║·║ on a normed linear space X satisfies (i) d(x,y) = d(x-y, 0), and (ii) d(αx, 0) = |α| d(x,0) for every scalar α. Hence the metric induced by a norm on a non-trivial linear space cannot be bounded, so a bounded metric cannot be obtained from a norm. For example, the discrete metric d defined on X by d(x,y) = 0 if x = y and d(x,y) = 1 if x ≠ y is bounded.

  12. Also, given any metric d on X, the metrics d1 and d2 defined by d1(x,y) = d(x,y)/(1 + d(x,y)) and d2(x,y) = min{1, d(x,y)} are examples of bounded metrics. Hence d1 and d2 cannot be obtained from any norm on X. If every Cauchy sequence in an inner product space converges (in the metric induced by its inner product), then the space is called a Hilbert space. Every finite dimensional inner product space is a Hilbert space. It can be shown that there are inner product spaces which are not Hilbert spaces.
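
A short sketch (assuming NumPy, and assuming the two standard bounded metrics d/(1+d) and min{1,d} written above) showing that both remain below 1 however far apart the points are, while the induced metric grows without bound:

```python
import numpy as np

def d(x, y):
    """Metric induced by the Euclidean norm."""
    return np.linalg.norm(x - y)

def d1(x, y):
    return d(x, y) / (1.0 + d(x, y))

def d2(x, y):
    return min(1.0, d(x, y))

x = np.zeros(3)
for scale in (1.0, 10.0, 1e6):
    y = scale * np.ones(3)
    # d grows without bound, while d1 and d2 never exceed 1.
    print(d(x, y), d1(x, y), d2(x, y))
```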

  13. A norm induced by an inner product satisfies the parallelogram equality ║x+y║² + ║x-y║² = 2║x║² + 2║y║². Geometrical interpretation: if x and y denote two adjacent sides of a parallelogram, then x+y and x-y represent the two diagonal vectors, and the norm measures their lengths. [Figure: a parallelogram with sides x, y and diagonals x+y, x-y.]
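
A quick numerical check of the parallelogram equality for the Euclidean norm (a sketch assuming NumPy; the vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)

lhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
rhs = 2 * (np.linalg.norm(x)**2 + np.linalg.norm(y)**2)
print(np.isclose(lhs, rhs))   # True: the Euclidean norm comes from an inner product
```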

  14. However, a norm on a linear space may not satisfy the parallelogram equality. If a norm satisfies the parallelogram equality then it must be induced by an inner product. Now the question arises: if a norm satisfies the parallelogram identity, how can we determine the corresponding inner product? The following theorem answers this question.

  15. Theorem: A norm ║·║ on a linear space X is induced by an inner product ⟨·,·⟩ on X if and only if it satisfies the parallelogram identity ║x+y║² + ║x-y║² = 2║x║² + 2║y║². If it does, the inner product ⟨·,·⟩ is given by the polarization identity: ⟨x,y⟩ = ¼(║x+y║² - ║x-y║²) in the real case, and ⟨x,y⟩ = ¼(║x+y║² - ║x-y║² + i║x+iy║² - i║x-iy║²) in the complex case.
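
A sketch (assuming NumPy, real case) showing the polarization identity recovering the usual dot product from the Euclidean norm:

```python
import numpy as np

def norm(x):
    return np.linalg.norm(x)

def inner_from_norm(x, y):
    """Real polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4."""
    return (norm(x + y)**2 - norm(x - y)**2) / 4.0

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
print(np.isclose(inner_from_norm(x, y), np.dot(x, y)))   # True
```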

  16. The following examples show that not all norms are induced by an inner product and not all inner product spaces are Hilbert spaces. (1) The linear space ℓp (p ≠ 2) of sequences x = (ξ1, ξ2, …) with Σ|ξi|^p < ∞, equipped with the norm given by ║x║ = (Σ|ξi|^p)^(1/p), is not an inner product space and hence not a Hilbert space. For, if x = (-1,-1,0,0,0,…) and y = (-1,1,0,0,0,…), then ║x║ = ║y║ = 2^(1/p) while ║x+y║ = ║x-y║ = 2. Hence ║x+y║² + ║x-y║² = 8 ≠ 4·2^(2/p) = 2(║x║² + ║y║²), so the parallelogram equality fails whenever p ≠ 2.
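
A numerical sketch (assuming NumPy) of the same counterexample vectors under several p-norms; only p = 2 satisfies the parallelogram equality:

```python
import numpy as np

x = np.array([-1.0, -1.0, 0.0])
y = np.array([-1.0,  1.0, 0.0])

for p in (1, 2, np.inf):
    lhs = np.linalg.norm(x + y, p)**2 + np.linalg.norm(x - y, p)**2
    rhs = 2 * (np.linalg.norm(x, p)**2 + np.linalg.norm(y, p)**2)
    print(p, lhs, rhs, np.isclose(lhs, rhs))
# Only p = 2 satisfies the parallelogram equality; the 1-norm and the sup-norm do not,
# so neither can be induced by an inner product.
```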

  17. (2) The linear space C[a,b] of continuous real valued functions on [a,b], equipped with the norm given by ║x║ = max over t in [a,b] of |x(t)|, is not an inner product space and hence not a Hilbert space, since this norm does not satisfy the parallelogram equality. [Figure: graphs of functions on the interval [a,b].]

  18. (3) The linear space C[a,b] of continuous real valued functions on [a,b], equipped with the norm given by ║x║ = (∫ab |x(t)|² dt)^(1/2), is an inner product space, since this norm is induced by the inner product ⟨x,y⟩ = ∫ab x(t)y(t) dt. But this inner product space is not complete and hence it is not a Hilbert space.

  19. We now give some examples of infinite dimensional Hilbert spaces. (1) The linear space L²[a,b] of square integrable functions on [a,b], equipped with the induced norm given by ║x║ = (∫ab |x(t)|² dt)^(1/2), is a Hilbert space. (2) The linear space ℓ² of all square summable sequences x = (ξ1, ξ2, …), equipped with the induced norm given by ║x║ = (Σ|ξi|²)^(1/2), is a Hilbert space.

  20. Theorem (Cauchy–Schwarz inequality): For any two vectors x and y in an inner product space, |⟨x,y⟩| ≤ ║x║║y║, and equality holds iff x and y are linearly dependent. Remark: From the Schwarz inequality we see that |⟨x,y⟩|/(║x║║y║) ≤ 1 holds for all non-zero vectors x and y. This relation motivates us to define the angle between two nonzero vectors x and y (in a real inner product space) as θ = cos⁻¹(⟨x,y⟩ / (║x║║y║)).
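
A small numerical check (assuming NumPy; the vectors are arbitrary) of the Cauchy–Schwarz inequality and the induced angle in ℝⁿ with the usual inner product:

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4), rng.standard_normal(4)

# Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y||
print(abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y))   # True

# Angle between nonzero vectors (real inner product space)
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
print(theta)   # a value in [0, pi]
```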

  21. The angle θ between x and y satisfies θ = 0 iff x = λy for some scalar λ > 0, and θ = π/2 iff ⟨x,y⟩ = 0. However, the angle between two nonzero vectors is not invariant under change of inner product. For example, any two distinct standard basis vectors ei and ej in ℝn are orthogonal with respect to the usual inner product, but with respect to another inner product they may not be orthogonal. On the other hand, if two vectors are parallel with respect to one inner product, they are parallel with respect to every inner product. If two vectors x and y are such that ⟨x,y⟩ = 0, then we say that the vectors x and y are orthogonal.
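
A sketch (assuming NumPy; the matrix A is a hypothetical positive definite choice) showing that the standard basis vectors of ℝ² are orthogonal for the usual inner product but not for the inner product ⟨x,y⟩ = ηtAξ:

```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Usual inner product: e1 and e2 are orthogonal.
print(np.dot(e1, e2))          # 0.0

# Hypothetical alternative inner product <x, y>_A = y^T A x with A positive definite.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(e2 @ A @ e1)             # 1.0 -- not orthogonal with respect to <.,.>_A
```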

  22. A subset E of a vector space is said to be convex iff for all x, y ∈ E and 0 ≤ t ≤ 1 we have tx + (1-t)y ∈ E. Theorem: Every nonempty closed convex subset E of a Hilbert space H contains a unique element of smallest norm. Proof: Let d = inf{║x║ : x ∈ E}; then there is a sequence (xn) in E such that ║xn║ → d. Since E is convex, (xn + xm)/2 ∈ E, and hence ║xn + xm║ ≥ 2d. By the parallelogram equality, ║xn - xm║² = 2║xn║² + 2║xm║² - ║xn + xm║² ≤ 2║xn║² + 2║xm║² - 4d² → 0 as m, n → ∞.

  23. Hence (xn) is a Cauchy sequence in H. Since H is complete and E is closed, (xn) converges in E. Hence there exists an x0 ∈ E such that xn → x0. Since the map x ↦ ║x║ is continuous, ║x0║ = lim ║xn║ = d. Hence x0 is an element of E of smallest norm. Let y ∈ E be such that ║y║ = d. Then (x0 + y)/2 ∈ E, since E is convex, so ║x0 + y║ ≥ 2d. By the parallelogram equality, ║x0 - y║² = 2║x0║² + 2║y║² - ║x0 + y║² ≤ 2d² + 2d² - 4d² = 0. Hence y = x0, which proves uniqueness. More generally, if E is a closed convex subset of the Hilbert space H and x is an arbitrary point of H, there is a unique point of E closest to x.

  24. Definition: If x and y in H are such that ⟨x,y⟩ = 0, then we say that x is orthogonal to y. We define x⊥ = {y ∈ H : ⟨x,y⟩ = 0}, and for any subset M of H, M⊥ = {y ∈ H : ⟨x,y⟩ = 0 for all x ∈ M}. Theorem: Let M be a subset of H. Then M⊥ is a closed subspace of H. Proof: Since ⟨x,0⟩ = 0 for all x ∈ M, 0 ∈ M⊥. Let y, z ∈ M⊥. Then ⟨x,y⟩ = 0 and ⟨x,z⟩ = 0 for all x ∈ M. Hence for any scalars α and β, ⟨x, αy + βz⟩ = 0 for all x ∈ M. Hence αy + βz ∈ M⊥. Thus M⊥ is a subspace of H.

  25. Next, since for a fixed x ∈ M the map fx: H → K defined by fx(y) = ⟨x,y⟩ is continuous and {0} is a closed subset of K, ker fx = x⊥ is a closed subset of H. Since M⊥ is the intersection of the sets x⊥ over x ∈ M and each x⊥ is closed, therefore M⊥ is closed. Theorem (Projection Theorem): If M is a closed subspace of H, then H = M ⊕ M⊥. Proof: Let x ∈ H. Then the set x + M = {x + m : m ∈ M} is closed and convex. Hence x + M contains a unique element of smallest norm. Let this element be z. Then ║z║ ≤ ║z + m║ for all m ∈ M, and hence ║z║ ≤ ║z - αm║ for all m ∈ M and every scalar α. … (1)

  26. Put Qx = z and Px = x - z. We now show that z ∈ M⊥. From (1) we have ║z║² ≤ ║z - αm║² for all m ∈ M and for all scalars α. Thus for any non-zero m ∈ M, if we put α = ⟨z,m⟩/║m║² in the inequality, then ║z - αm║² = ║z║² - |⟨z,m⟩|²/║m║², and we see that 0 ≤ -|⟨z,m⟩|²/║m║². Hence ⟨z,m⟩ = 0 for all m ∈ M.

  27. Thus z ∈ M⊥. Hence x = Px + Qx with Px = x - z ∈ M and Qx = z ∈ M⊥, so H = M + M⊥. Since y ∈ M ∩ M⊥ implies ⟨y,y⟩ = 0 and hence y = 0, we have M ∩ M⊥ = {0}. Thus H = M ⊕ M⊥. Definition: M⊥ is called the orthogonal complement of M. Corollary: If M is a closed subspace of H, then (M⊥)⊥ = M; and if F is any subset of H, then (F⊥)⊥ is the smallest closed subspace of H containing F.
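
A finite dimensional sketch of the decomposition H = M ⊕ M⊥ (assuming NumPy; M is a hypothetical two dimensional subspace of ℝ⁴ spanned by the columns of B):

```python
import numpy as np

rng = np.random.default_rng(3)
# M = column space of B, a hypothetical 2-dimensional subspace of R^4.
B = rng.standard_normal((4, 2))
x = rng.standard_normal(4)

# Orthogonal projection onto M via least squares: Px = B (B^T B)^{-1} B^T x.
coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
Px = B @ coeffs
Qx = x - Px

print(np.allclose(x, Px + Qx))          # x = Px + Qx
print(np.allclose(B.T @ Qx, 0.0))       # Qx is orthogonal to every vector in M
```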

  28. Theorem: Let M be a closed subspace of H. Then there exists a unique pair of maps P: H → M and Q: H → M⊥ such that (1) x = Px + Qx for all x ∈ H. These mappings have the following further properties: (2) Px = x, Qx = 0 for all x ∈ M, and Px = 0, Qx = x for all x ∈ M⊥. (3) ║x║² = ║Px║² + ║Qx║². (4) ║x - Px║ ≤ ║x - m║ for all m ∈ M, with equality iff m = Px. (5) P and Q are linear mappings.

  29. Proof: Since M is a closed subspace of H, every x ∈ H can be expressed uniquely as x = m + n, where m ∈ M and n ∈ M⊥. Define Px = m and Qx = n. Then P and Q satisfy (1). Now let x ∈ M; then x = x + 0 with x ∈ M and 0 ∈ M⊥, so Px = x and Qx = 0. Similarly, let x ∈ M⊥; then x = 0 + x, so Px = 0 and Qx = x. Thus P and Q are well defined maps and satisfy (1) and (2).

  30. To prove (3): Let x ∈ H. Then x = Px + Qx, for a unique Px ∈ M and a unique Qx ∈ M⊥. Hence ║x║² = ⟨Px + Qx, Px + Qx⟩ = ║Px║² + ║Qx║², since ⟨Px,Qx⟩ = ⟨Qx,Px⟩ = 0. To prove (4): Let x ∈ H and m ∈ M. Then x - m = (Px - m) + Qx, where Px - m ∈ M and Qx ∈ M⊥. Hence ║x - m║² = ║Px - m║² + ║Qx║² ≥ ║Qx║² = ║x - Px║² for all m ∈ M. The equality holds iff m = Px. Hence ║x - Px║ ≤ ║x - m║ for all m ∈ M.

  31. To prove (5): Let x, y ∈ H and let α, β be scalars. Then by (1), αx + βy = P(αx + βy) + Q(αx + βy), and also αx + βy = α(Px + Qx) + β(Py + Qy) = (αPx + βPy) + (αQx + βQy). Hence P(αx + βy) - (αPx + βPy) = (αQx + βQy) - Q(αx + βy). Since the left hand side is in M and the right hand side is in M⊥, both are 0. Hence P(αx + βy) = αPx + βPy and Q(αx + βy) = αQx + βQy.

  32. Finally, to prove the uniqueness of P and Q: Let P′: H → M and Q′: H → M⊥ be maps such that x = P′x + Q′x for all x ∈ H. Then for every x ∈ H, Px + Qx = P′x + Q′x, so Px - P′x = Q′x - Qx, where Px - P′x ∈ M and Q′x - Qx ∈ M⊥. Since M ∩ M⊥ = {0}, we have Px - P′x = 0 and Q′x - Qx = 0. Hence P′ = P and Q′ = Q.

  33. Definition: Px and Qx are respectively called the projections of x on M and M⊥. From the above theorem we see that, among the points of M, Px has the smallest distance from x. We call ║x - Px║ the distance of x from M. The map P: H → M, x ↦ Px, is called a projection mapping.

  34. Remark: The projection theorem shows that every closed linear subspace M of H has at least one complementary closed linear subspace, namely M⊥. One may note that in some Banach spaces a closed subspace may fail to have a complementary closed linear subspace; for instance, the closed subspace c0 of the Banach space ℓ∞ is not complemented in ℓ∞. To each closed subspace M of H there is associated a projection P of H with range space M and null space M⊥.

  35. The projection theorem provides a characterization of closed subspaces of a Hilbert space in terms of orthogonality: a subspace M of a Hilbert space H is closed in H if and only if M = (M⊥)⊥. The projection theorem also provides a characterization of sets in Hilbert spaces whose span is dense in H: let S be a non-empty subset of a Hilbert space H; then span S is dense in H if and only if S⊥ = {0}.

  36. Theorem: If M is a closed subspace of H, x ∈ H, and M′ is the space spanned by M and x, then M′ is closed. Corollary: Every finite dimensional subspace of H is closed. Proof: We know that every finite subset of a metric space is closed. Hence {0} is a closed subspace of H. Let F be a finite dimensional subspace of H and let {x1, x2, …, xn} be a basis of F. Let F1 be the subspace of H generated by x1; by the above theorem, F1 is closed. Let F2 be the subspace of H generated by F1 and x2; by the above theorem, F2 is closed. Proceeding in this way we can show that Fn is a closed subspace of H. Since F is generated by {x1, x2, …, xn}, we have F = Fn, and hence F is closed.

  37. Theorem: For any fixed y ∈ H, the mapping x ↦ ⟨x,y⟩ is a continuous linear functional on H. Proof: Clearly the map is a linear functional on H. To prove continuity, let ε > 0 be given. If y = 0 the map is identically zero and hence continuous, so assume y ≠ 0 and choose δ = ε/║y║. Then for any x1, x2 ∈ H, |⟨x1,y⟩ - ⟨x2,y⟩| = |⟨x1 - x2, y⟩| ≤ ║x1 - x2║ ║y║ < ε whenever ║x1 - x2║ < δ. Hence the map is a continuous linear functional on H.

  38. Representations of Continuous Linear Functionals (Riesz representation theorem): Let f be a continuous linear functional on the Hilbert space H; then there is a unique y ∈ H such that f(x) = ⟨x,y⟩ for all x ∈ H. Proof: If f(x) = 0 for all x, we take y = 0. Otherwise, define N = {x ∈ H : f(x) = 0}. Then clearly N is a subspace of H. Since f is continuous and {0} is a closed subset of the scalar field, N = f⁻¹({0}) is closed. Since f(x) ≠ 0 for some x, there exists an x ∈ H such that x ∉ N. Thus H does not consist of N alone. Hence there exists a z ∈ N⊥ with z ≠ 0. Put u = f(x)z - f(z)x. Since f(u) = f(x)f(z) - f(z)f(x) = 0, we have u ∈ N.

  39. Thus ⟨u,z⟩ = 0, i.e., f(x)⟨z,z⟩ - f(z)⟨x,z⟩ = 0. Hence f(x) = f(z)⟨x,z⟩/║z║². Put y = λz, where λ is the complex conjugate of f(z)/║z║². Then we have f(x) = ⟨x,y⟩ for all x ∈ H. To prove the uniqueness of y: suppose there are y1, y2 ∈ H such that f(x) = ⟨x,y1⟩ = ⟨x,y2⟩ for all x ∈ H. Then ⟨x, y1 - y2⟩ = 0 for all x ∈ H. In particular, ⟨y1 - y2, y1 - y2⟩ = 0. Hence y1 = y2.
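
A finite dimensional illustration of the Riesz representation (assuming NumPy, real case; the functional f below is a hypothetical example): in ℝⁿ with the usual inner product, the representing vector can be read off by applying f to the standard basis vectors.

```python
import numpy as np

# A hypothetical continuous linear functional on R^3.
def f(x):
    return 2.0 * x[0] - x[1] + 0.5 * x[2]

# Representing vector: y_i = f(e_i) for the usual (real) inner product.
y = np.array([f(e) for e in np.eye(3)])

x = np.array([1.0, -4.0, 2.0])
print(np.isclose(f(x), np.dot(x, y)))   # True: f(x) = <x, y>
```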

  40. Definition: A set of vectors {eα} in H is said to be an orthonormal set if ⟨eα, eβ⟩ = 0 whenever α ≠ β and ║eα║ = 1 for every α. Definition: Let S be a nonempty subset of H. We define [S] to be the set of all finite linear combinations of vectors from S, which is a subspace of H. Clearly [S] contains S and it is the smallest subspace of H that contains S. We call it the subspace of H generated by S. Theorem: Let {e1, e2, …, en} be an orthonormal set of vectors from H and let M be the subspace generated by {e1, e2, …, en}. Then for any x ∈ M, (1) x = ⟨x,e1⟩e1 + ⟨x,e2⟩e2 + … + ⟨x,en⟩en, and (2) ║x║² = |⟨x,e1⟩|² + |⟨x,e2⟩|² + … + |⟨x,en⟩|².

  41. Proof: Let x ∈ M; then there exist scalars α1, α2, …, αn such that x = α1e1 + α2e2 + … + αnen. Taking the inner product of both sides with ej, we have ⟨x,ej⟩ = α1⟨e1,ej⟩ + … + αn⟨en,ej⟩ = αj. Thus for each j, αj = ⟨x,ej⟩, and hence x = ⟨x,e1⟩e1 + … + ⟨x,en⟩en, which proves (1). Again, by (1) and the orthonormality of {e1, …, en}, ║x║² = ⟨x,x⟩ = |⟨x,e1⟩|² + … + |⟨x,en⟩|², which proves (2).
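
A numerical sketch (assuming NumPy; the orthonormal pair below is a hypothetical choice) of the expansion x = Σ⟨x,ei⟩ei and the identity ║x║² = Σ|⟨x,ei⟩|² for a vector in the span:

```python
import numpy as np

# An orthonormal pair in R^3 (hypothetical choice).
e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

# Any x in span{e1, e2}:
x = 3.0 * e1 - 2.0 * e2

c1, c2 = np.dot(x, e1), np.dot(x, e2)             # Fourier coefficients <x, e_i>
print(np.allclose(x, c1 * e1 + c2 * e2))          # x = sum <x, e_i> e_i
print(np.isclose(np.dot(x, x), c1**2 + c2**2))    # ||x||^2 = sum |<x, e_i>|^2
```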

  42. Theorem: An orthonormal set of vectors in a Hilbert space is linearly independent. Proof: Let S be an orthonormal set in H. Assume first that S is finite. Writing S = {e1, e2, …, en}, consider the equation α1e1 + α2e2 + … + αnen = 0, where the αi are scalars. Taking the inner product of both sides with ej (1 ≤ j ≤ n), we have αj = 0. Hence S is linearly independent. An arbitrary set (finite or infinite) is said to be linearly independent if every non-empty finite subset of it is linearly independent. Hence it follows that our assertion is also valid for the case when S is infinite.

  43. Advantages of orthonormal sets over arbitrary linearly independent sets: (1) The determination of the unknown coefficients in an expansion is simple: they are the inner products ⟨x,ei⟩. (2) The norm of a vector can be determined directly from these coefficients. (3) If we wish to add one more term ⟨x,en+1⟩en+1 to ⟨x,e1⟩e1 + … + ⟨x,en⟩en in order to obtain ⟨x,e1⟩e1 + … + ⟨x,en+1⟩en+1, we need to determine only one more coefficient, since the other coefficients remain unchanged.

  44. Definition: Let {eα : α ∈ A} be an orthonormal set in H. Let x ∈ H and consider the function x̂: A → K defined by x̂(α) = ⟨x, eα⟩. Then the numbers ⟨x, eα⟩ are called the Fourier coefficients of x with respect to the orthonormal set {eα}. Theorem: If {e1, e2, …, en} is an orthonormal set in H and x ∈ H, then ⟨x,e1⟩e1 + ⟨x,e2⟩e2 + … + ⟨x,en⟩en is the projection of x on the subspace generated by {e1, e2, …, en}.

  45. Proof: Let M be the subspace of H generated by {e1, e2, …, en}. Then M is a closed (finite dimensional) subspace of H. Hence H = M ⊕ M⊥. Since x ∈ H, there exist a unique m ∈ M and a unique w ∈ M⊥ such that x = m + w. Since m ∈ M, m = ⟨m,e1⟩e1 + … + ⟨m,en⟩en. Now, for each i, ⟨x,ei⟩ = ⟨m + w, ei⟩ = ⟨m,ei⟩ + ⟨w,ei⟩ = ⟨m,ei⟩, since w ∈ M⊥. Hence m = ⟨x,e1⟩e1 + … + ⟨x,en⟩en is the projection of x on the subspace generated by {e1, e2, …, en}.

  46. Theorem: Let {e1, e2, …, en} be a finite orthonormal set in a Hilbert space H. Then for any x in H we have (a) |⟨x,e1⟩|² + … + |⟨x,en⟩|² ≤ ║x║² (Bessel inequality); (b) x - (⟨x,e1⟩e1 + … + ⟨x,en⟩en) is orthogonal to each ej. Proof: We have 0 ≤ ║x - (⟨x,e1⟩e1 + … + ⟨x,en⟩en)║² = ║x║² - (|⟨x,e1⟩|² + … + |⟨x,en⟩|²), which gives (a).
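
A numerical sketch of (a) and (b) (assuming NumPy; the orthonormal set and the vector are arbitrary choices, with the orthonormal set smaller than the dimension):

```python
import numpy as np

# A finite orthonormal set {e1, e2} in R^4 (smaller than the dimension).
e1 = np.array([1.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0])

x = np.array([1.0, 2.0, 3.0, 4.0])
coeffs = [np.dot(x, e) for e in (e1, e2)]

# (a) Bessel: sum |<x, e_i>|^2 <= ||x||^2
print(sum(c**2 for c in coeffs) <= np.dot(x, x))     # True: 5 <= 30

# (b) the residual x - sum <x, e_i> e_i is orthogonal to each e_j
r = x - (coeffs[0] * e1 + coeffs[1] * e2)
print(np.dot(r, e1), np.dot(r, e2))                  # 0.0 0.0
```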

  47. (b) follows by taking the inner product of x - (⟨x,e1⟩e1 + … + ⟨x,en⟩en) with each ej. The inequality (a) can be interpreted as: “The sum of the squares of the components of a vector in various perpendicular directions does not exceed the square of the length of the vector itself.” The relation (b) means: “If we subtract from a vector its components in several perpendicular directions, then the resultant has no component left in any of these directions.” Hence the resultant vector is perpendicular to each of these perpendicular directions. The Bessel inequality for n = 1 is essentially the Cauchy–Schwarz inequality.

  48. Theorem (Bessel's inequality): If {eα : α ∈ A} is an orthonormal set in H, then Σα |⟨x,eα⟩|² ≤ ║x║² for every x ∈ H. Proof: Let {e1, e2, …, en} be any finite subset of {eα}. Then it is a finite orthonormal set in H, and by the finite Bessel inequality we have |⟨x,e1⟩|² + … + |⟨x,en⟩|² ≤ ║x║². …… (1) Taking the supremum of the left hand side of (1) over all finite subsets of {eα}, we obtain Σα |⟨x,eα⟩|² ≤ ║x║².

  49. Keeping in view the usefulness and convenience of orthonormal sequences over linearly independent sequences, one is interested in generating orthonormal sequences from linearly independent sequences. This is done by a constructive procedure known as the Gram (1883) – Schmidt (1907) process. Let {x1, x2, …} be a (finite or countably infinite) linearly independent set of vectors in an inner product space X. The problem is to convert this set into an orthonormal set {e1, e2, …} such that span{e1, …, ek} = span{x1, …, xk} for each k.

  50. Step 1. Normalize x1, which is necessarily nonzero, so as to obtain e1 = x1/║x1║. Step 2. Write v2 = x2 - ⟨x2,e1⟩e1, so that ⟨v2,e1⟩ = ⟨x2,e1⟩ - ⟨x2,e1⟩⟨e1,e1⟩ = 0. Also, v2 ≠ 0, since {x1, x2} is linearly independent. We can take e2 by normalizing v2; namely e2 = v2/║v2║, i.e., e2 is a unit vector orthogonal to e1 and span{e1, e2} = span{x1, x2}.
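
A compact sketch of the Gram–Schmidt process just described (assuming NumPy, real case; the input vectors are an arbitrary linearly independent set, and the helper name gram_schmidt is ours):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (real case)."""
    basis = []
    for x in vectors:
        # Subtract the components of x in the directions already constructed.
        w = x - sum(np.dot(x, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))   # normalize; w != 0 by independence
    return basis

x1, x2, x3 = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
e = gram_schmidt([x1, x2, x3])
# The resulting vectors are pairwise orthogonal and have unit norm.
print(np.allclose([[np.dot(a, b) for b in e] for a in e], np.eye(3)))   # True
```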
