As mentioned, we will use tensor products to construct correlation matrices with specific features. We first recall the basic notations and facts from multilinear algebra. Let $(V, \langle \cdot, \cdot \rangle_V)$ and $(W, \langle \cdot, \cdot \rangle_W)$ be two finite dimensional Hilbert spaces. The inner product $\langle \cdot, \cdot \rangle_{V \otimes W}$ on the tensor product $V \otimes W$ is defined by $\langle v \otimes w, v' \otimes w' \rangle_{V \otimes W} \coloneqq \langle v, v' \rangle_V \langle w, w' \rangle_W$ for $v, v' \in V$ and $w, w' \in W$. Given two linear maps $A\colon V \to V$ and $B\colon W \to W$, we denote by $A \otimes B\colon V \otimes W \to V \otimes W$ the linear map defined by $(A \otimes B)(v \otimes w) \coloneqq Av \otimes Bw$ for $v \in V$ and $w \in W$. Given eigenvectors $v$ of $A$ and $w$ of $B$ with respective eigenvalues $\alpha$ and $\beta$, we have that $v \otimes w$ is an eigenvector of $A \otimes B$ for the eigenvalue $\alpha\beta$.

Let $e_1, \dots, e_n$ be the standard basis of $\mathbb{R}^n$, $f_1, \dots, f_m$ be the standard basis of $\mathbb{R}^m$, and $d_1, \dots, d_{nm}$ be the standard basis of $\mathbb{R}^{nm}$. We identify $\mathbb{R}^{nm}$ with the tensor product $\mathbb{R}^n \otimes \mathbb{R}^m$ by putting $e_i \otimes f_j \mapsto d_{m(i-1)+j}$ for $1 \le i \le n$ and $1 \le j \le m$. By fixing a basis on a finite dimensional vector space, we have a one-to-one correspondence between linear maps and matrices. Hence, given an $n \times n$ matrix $A = (a_{ij})$ and an $m \times m$ matrix $B = (b_{ij})$, we can identify the tensor product of $A$ and $B$ (or, more precisely, the tensor product of the corresponding linear maps) with an $nm \times nm$ matrix $C = A \otimes B$, where, for $C = (c_{ij})$, we have $c_{m(i-1)+k,\, m(j-1)+l} = a_{ij} b_{kl}$. Considering the tensor product of correlation matrices under this notation, we have the following.

It is obvious that $C$ is a correlation matrix. Write $C = (c_{ij})$ and $C' = (c'_{ij})$. Then
$$(n+1)n\,c = \sum_{i \neq j} c_{ij} = \sum_{i \neq j} c'_{ij} = n(n-1)\,c'$$
and
$$(n+1)n\,(c^2 + \sigma^2) = \sum_{i \neq j} c_{ij}^2 = \sum_{i \neq j} (c'_{ij})^2 = n(n-1)\big((c')^2 + (\sigma')^2\big).$$
Now, let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $C'$ with $\lambda_1 \ge \lambda_j$, $1 \le j \le n$. By the assumptions on $C'$, we have $\lambda_1 > \lambda_j$, $2 \le j \le n$, and hence, by the properties of correlation matrices, $\lambda_1 > 1$. Putting $\lambda_{n+1} \coloneqq 1$, we have by the structure of $C$ that $\lambda_1, \dots, \lambda_{n+1}$ are the eigenvalues of $C$ with $\lambda_1 > \lambda_j$, $2 \le j \le n+1$. Now, let $v_1, \dots, v_{n+1}$ be a respective orthonormal eigenbasis, and denote by $w_j = \langle v_j, \delta_{n+1} \rangle^2$, $1 \le j \le n+1$, the corresponding weights. By the structure of $C$ and the assumption that $\lambda_j \neq 1$ for $1 \le j \le n$, we find $v_{n+1} = \pm(0, \dots, 0, 1)$ and that the $(n+1)$-th entry of $v_j$ is zero for $1 \le j \le n$. We denote by $\tilde{v}_j \in \mathbb{R}^n$ the projection of $v_j$ onto its first $n$ components, and put $\tilde{w}_j = \langle \tilde{v}_j, \delta_n \rangle^2$ for $1 \le j \le n$. Then $\tilde{v}_1, \dots, \tilde{v}_n$ is an orthonormal eigenbasis for $C'$ with respect to its eigenvalues $\lambda_1, \dots, \lambda_n$, and hence $\tilde{w}_1 < \max_{1 \le j \le n} \tilde{w}_j$. Choose $v_3, \dots, v_n$ such that $v', \tilde{v}, v_3, \dots, v_n$ is an orthonormal basis.
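The facts above can be checked numerically. The following is a small sketch using NumPy, whose `np.kron` follows exactly the indexing convention $c_{m(i-1)+k,\, m(j-1)+l} = a_{ij} b_{kl}$ (with 0-based indices in code); the helper `random_correlation` is an illustrative construction, not taken from the text. It verifies the entry formula, the eigenvalue product rule for $A \otimes B$, and that the tensor product of two correlation matrices is again a correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_correlation(n):
    """Sample an n x n correlation matrix as the Gram matrix of unit vectors."""
    X = rng.normal(size=(n, n + 2))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T

A = random_correlation(3)   # n x n
B = random_correlation(4)   # m x m
C = np.kron(A, B)           # nm x nm
m = B.shape[0]

# Entry convention (0-based): C[m*i + k, m*j + l] == A[i, j] * B[k, l].
assert np.isclose(C[m * 1 + 2, m * 0 + 3], A[1, 0] * B[2, 3])

# Eigenpairs multiply: if A v = alpha v and B w = beta w,
# then (A ⊗ B)(v ⊗ w) = (alpha * beta)(v ⊗ w).
alphas, V = np.linalg.eigh(A)
betas, W = np.linalg.eigh(B)
vw = np.kron(V[:, 0], W[:, 0])
assert np.allclose(C @ vw, alphas[0] * betas[0] * vw)

# C is again a correlation matrix: symmetric, unit diagonal, positive semidefinite.
assert np.allclose(C, C.T)
assert np.allclose(np.diag(C), 1.0)
assert np.linalg.eigvalsh(C).min() > -1e-12
```

Since the $nm$ eigenvalues of $A \otimes B$ are exactly the products $\alpha_i \beta_j$, this construction gives direct control over the spectrum of the resulting correlation matrix, which is what makes it useful for building examples with prescribed features.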