Complex Factor Analysis and Extensions
Author: | Alle-Jan van der Veen, Ahmad Mouri Sardarabadi |
Contributors: | Astronomy |
Language: | English |
Year of publication: | 2018 |
Subject: |
signal denoising
unknown arbitrary diagonal noise covariance, covariance matrices, block diagonal covariance matrix, sparse covariance matrix, data covariance matrix, noise covariance matrix, noise covariance matrix structure, noise covariance parameters, multiple data covariance matrices, matrix inversion, block matrix, eigenvalue decomposition, eigenvalues and eigenvectors, eigendecomposition of a matrix, subspace estimation, uncalibrated array, array signal processing, array signal processing algorithms, signal processing algorithms, factor analysis, complex factor analysis, general factor analysis decomposition, covariance matching, nonlinear weighted least squares formulation, least squares approximations, gradient methods, gradient descent method, Newton method, Gauss-Newton, maximum-likelihood based algorithms, convergence of numerical methods, approximation algorithms, computational modeling, data models, mathematical model, signal processing, electrical and electronic engineering |
Source: | IEEE Transactions on Signal Processing, 66(4), 954-967 |
ISSN: | 1053-587X |
Description: | Many subspace-based array signal processing algorithms assume that the noise is spatially white. In this case, the noise covariance matrix is a multiple of the identity and the eigenvectors of the data covariance matrix are not affected by it. If the noise covariance is an unknown arbitrary diagonal (e.g., for an uncalibrated array), the eigenvalue decomposition leads to incorrect subspace estimates and it has to be replaced by a more general “factor analysis” decomposition (FA), which then reveals all relevant information. We consider this data model and several extensions where the noise covariance matrix has a more general structure, such as banded, sparse, block diagonal, and cases where we have multiple data covariance matrices that share the same noise covariance matrix. Starting from a nonlinear weighted least squares formulation, we propose new estimation algorithms for both classical FA and its extensions. The optimization is based on Gauss–Newton gradient descent. Generally, this leads to an iteration involving the inversion of a very large matrix. Using the structure of the problem, we show how this can be reduced to the inversion of a matrix with dimension equal to the number of unknown noise covariance parameters. This results in new algorithms that have faster numerical convergence and lower complexity compared to several maximum-likelihood based algorithms that could be considered state of the art. The new algorithms scale well to large dimensions and can replace eigenvalue decompositions in many applications even if the noise can be assumed to be white. |
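The core point of the abstract, that a plain eigendecomposition gives a biased signal subspace when the noise covariance D is an unknown diagonal, while the factor analysis model R = A Aᴴ + D recovers it, can be sketched numerically. The following is a minimal NumPy illustration using a simple alternating refinement; it is a stand-in for intuition only, not the Gauss–Newton algorithm of the paper, and all dimensions and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 8, 2  # illustrative array size and number of sources (not from the paper)

# Data covariance: low-rank signal part plus arbitrary diagonal noise,
#   R = A A^H + D   (the "factor analysis" data model)
A = (rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))) / np.sqrt(2)
d_true = rng.uniform(0.5, 3.0, p)          # strongly unequal noise powers
R = A @ A.conj().T + np.diag(d_true)

def subspace_error(U, A):
    """sin of the largest principal angle between span(U) and span(A)."""
    Qa, _ = np.linalg.qr(A)
    s = np.linalg.svd(U.conj().T @ Qa, compute_uv=False)
    return float(np.sqrt(max(0.0, 1.0 - s.min() ** 2)))

# 1) Plain eigendecomposition: the dominant eigenvectors are biased
#    because the noise covariance is not a multiple of the identity.
_, V = np.linalg.eigh(R)
err_evd = subspace_error(V[:, -q:], A)

# 2) Simple alternating FA refinement (an illustrative stand-in for the
#    Gauss-Newton scheme in the paper): whiten by the current noise
#    estimate, re-estimate the subspace, then update the diagonal.
d_hat = np.full(p, np.real(np.trace(R)) / p)
for _ in range(100):
    Dm12 = np.diag(1.0 / np.sqrt(d_hat))
    w, V = np.linalg.eigh(Dm12 @ R @ Dm12)
    Vq = V[:, -q:]
    # low-rank signal estimate in the whitened domain, mapped back
    S = Vq @ np.diag(np.maximum(w[-q:] - 1.0, 0.0)) @ Vq.conj().T
    R0 = np.diag(np.sqrt(d_hat)) @ S @ np.diag(np.sqrt(d_hat))
    d_hat = np.clip(np.real(np.diag(R - R0)), 1e-6, None)

# span(A) corresponds to span(D^{1/2} Vq) in the original domain
U_fa, _ = np.linalg.qr(np.diag(np.sqrt(d_hat)) @ Vq)
err_fa = subspace_error(U_fa, A)
print(f"EVD subspace error: {err_evd:.3f}, FA subspace error: {err_fa:.3f}")
```

In this noise-free-covariance setting the FA fixed point reproduces R exactly, so the FA subspace error drops far below the eigendecomposition's, mirroring the abstract's claim that the EVD must be replaced by an FA decomposition for uncalibrated arrays.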
Database: | OpenAIRE |
External link: |