Showing 1 - 10 of 89
for search: '"Makoto Aoshima"'
Author:
Tsutomu T. Takeuchi, Kazuyoshi Yata, Kento Egashira, Makoto Aoshima, Aki Ishii, Suchetha Cooray, Kouichiro Nakanishi, Kotaro Kohno, Kai T. Kono
Published in:
The Astrophysical Journal Supplement Series, Vol 271, Iss 2, p 44 (2024)
In astronomy, if we denote the dimension of data as d and the number of samples as n, we often find a case with n ≪ d. Traditionally, such a situation is regarded as ill-posed, and there was no choice but to discard most of the information in data …
External link:
https://doaj.org/article/4c6dc03948c849bfb68a832fb2be502d
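A minimal numerical illustration of the n ≪ d regime described in this abstract (all sizes below are assumed for the example): with fewer samples than dimensions, the sample covariance matrix is rank-deficient and cannot be inverted, which is why the setting is traditionally called ill-posed.

```python
import numpy as np

# Minimal HDLSS illustration: n samples in d dimensions with n << d.
# The values of n and d are arbitrary; any n < d gives the same conclusion.
rng = np.random.default_rng(0)
n, d = 10, 1000
X = rng.standard_normal((n, d))

# Sample covariance matrix (d x d) estimated from only n observations.
S = np.cov(X, rowvar=False)

# Its rank is at most n - 1, so it is singular and cannot be inverted.
print(S.shape)                   # (1000, 1000)
print(np.linalg.matrix_rank(S))  # at most 9
```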
Author:
Makoto Aoshima, Zakkula Govindarajulu
Published in:
International Journal of Mathematics and Mathematical Sciences, Vol 29, Iss 3, Pp 143-153 (2002)
We consider the problem of constructing a fixed-width confidence interval for a lognormal mean. We give a Birnbaum and Healy type two-stage procedure to construct such a confidence interval. We discuss some asymptotic properties of the procedure. …
External link:
https://doaj.org/article/336b8968602c4065a26a784098902d04
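The two-stage idea mentioned in this abstract can be sketched generically: take a pilot sample to estimate the variance, then choose the total sample size so that the resulting interval has the prescribed width. The sketch below is a plain Stein-type two-stage procedure applied on the log scale, with assumed numerical settings; it is not the Birnbaum and Healy type construction from the paper.

```python
import numpy as np
from scipy import stats

def two_stage_interval(sample_stage1, draw_more, half_width, alpha=0.05):
    """Generic Stein-type two-stage fixed-width CI for a normal mean.

    sample_stage1 : pilot observations (stage one)
    draw_more(k)  : callable returning k additional observations
    half_width    : desired half-width of the confidence interval
    """
    m = len(sample_stage1)
    s2 = np.var(sample_stage1, ddof=1)           # pilot variance estimate
    t = stats.t.ppf(1 - alpha / 2, df=m - 1)     # Student-t critical value
    # Total sample size needed to achieve the prescribed half-width.
    n_total = max(m, int(np.ceil((t * np.sqrt(s2) / half_width) ** 2)))
    extra = draw_more(n_total - m) if n_total > m else np.array([])
    data = np.concatenate([sample_stage1, extra])
    mean = data.mean()
    return mean - half_width, mean + half_width

# Usage on simulated lognormal data, working on the log scale.
rng = np.random.default_rng(1)
pilot = np.log(rng.lognormal(mean=1.0, sigma=0.5, size=20))
more = lambda k: np.log(rng.lognormal(mean=1.0, sigma=0.5, size=k))
print(two_stage_interval(pilot, more, half_width=0.1))
```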
Published in:
Japanese Journal of Statistics and Data Science. 4:821-840
While distance-weighted discrimination (DWD) was proposed to improve the support vector machine in high-dimensional settings, it is known that DWD is quite sensitive to imbalanced sample sizes. In this paper, we study asymptotic properties …
Published in:
Annals of the Institute of Statistical Mathematics. 72(5):1257-1286
In this paper, we study asymptotic properties of nonlinear support vector machines (SVM) in high-dimension, low-sample-size settings. We propose a bias-corrected SVM (BC-SVM) which is robust against imbalanced data in a general framework. In particular, …
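The imbalance sensitivity addressed above can be demonstrated with an off-the-shelf kernel SVM; the sketch below uses scikit-learn's class weighting as a simple counterweight and is not the bias-corrected SVM (BC-SVM) proposed in the paper. The data sizes and mean shift are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Simulated HDLSS two-class data with imbalanced sample sizes (assumed sizes).
rng = np.random.default_rng(2)
d, n1, n2 = 500, 5, 40
X = np.vstack([rng.standard_normal((n1, d)) + 0.2,   # small class, shifted mean
               rng.standard_normal((n2, d))])          # large class
y = np.array([1] * n1 + [0] * n2)

X_test = np.vstack([rng.standard_normal((50, d)) + 0.2,
                    rng.standard_normal((50, d))])
y_test = np.array([1] * 50 + [0] * 50)

# An unweighted SVM tends to favour the larger class in imbalanced settings,
# while class weighting partially compensates for the imbalance.
for weight in (None, "balanced"):
    clf = SVC(kernel="rbf", class_weight=weight).fit(X, y)
    print(weight, (clf.predict(X_test) == y_test).mean())
```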
Published in:
Annals of the Institute of Statistical Mathematics. 73:599-622
We consider hypothesis testing for high-dimensional covariance structures in which the covariance matrix is (i) a scaled identity matrix, (ii) a diagonal matrix, or (iii) an intraclass covariance matrix. Our purpose is to systematically establish a nonparametric …
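The three covariance structures listed above can be written out explicitly; the sketch below only constructs small instances of each (the dimension and parameter values are arbitrary assumptions) and does not implement the tests themselves.

```python
import numpy as np

d = 4           # dimension (arbitrary)
sigma2 = 2.0    # common variance (arbitrary)
rho = 0.3       # intraclass correlation (arbitrary)

# (i) scaled identity: Sigma = sigma^2 * I_d
scaled_identity = sigma2 * np.eye(d)

# (ii) diagonal: arbitrary positive variances on the diagonal
diagonal = np.diag([1.0, 2.0, 0.5, 3.0])

# (iii) intraclass: equal variances with a common correlation rho
intraclass = sigma2 * ((1 - rho) * np.eye(d) + rho * np.ones((d, d)))

for name, S in [("scaled identity", scaled_identity),
                ("diagonal", diagonal),
                ("intraclass", intraclass)]:
    print(name, "\n", S)
```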
Published in:
Ouyou toukeigaku. 49:109-125
Author:
Kazuyoshi Yata, Makoto Aoshima
Published in:
Scandinavian Journal of Statistics. 47:899-921
In this article, we consider clustering based on principal component analysis (PCA) for high-dimensional mixture models. We present theoretical reasons why PCA is effective for clustering high-dimensional data. First, we derive a geometric representation …
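As a toy version of PCA-based clustering of a high-dimensional mixture, the sketch below projects simulated two-component data onto the first principal component and clusters the scores. The data sizes and mean shift are assumptions, and the sketch does not reproduce the paper's estimators or theory.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Two-component high-dimensional mixture (all sizes and shifts are assumptions).
rng = np.random.default_rng(3)
d, n = 1000, 40
mu = np.zeros(d)
mu[:100] = 2.0                            # mean shift in the first 100 coordinates
X = np.vstack([rng.standard_normal((n // 2, d)) + mu,
               rng.standard_normal((n // 2, d))])
labels_true = np.array([0] * (n // 2) + [1] * (n // 2))

# Project onto the first principal component, then cluster the PC scores.
scores = PCA(n_components=1).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Clustering accuracy up to label switching.
acc = max((labels == labels_true).mean(), (labels != labels_true).mean())
print(acc)
```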
Published in:
Japanese Journal of Statistics and Data Science. 5:717-718
A Correction to this paper has been published: 10.1007/s42081-021-00135-x
Published in:
Journal of Multivariate Analysis. 185
In this paper, we consider clustering based on kernel principal component analysis (KPCA) for high-dimension, low-sample-size (HDLSS) data. We give theoretical reasons why the Gaussian kernel is effective for clustering high-dimensional data. …
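A Gaussian-kernel KPCA step followed by clustering can be sketched with scikit-learn; the bandwidth choice and the data model below (classes differing in scale) are assumptions for illustration, not the setting analysed in the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

# Simulated two-class HDLSS data (sizes and scale difference are assumptions).
rng = np.random.default_rng(4)
d, n = 1000, 40
X = np.vstack([rng.standard_normal((n // 2, d)) * 1.0,
               rng.standard_normal((n // 2, d)) * 1.6])  # classes differ in scale
labels_true = np.array([0] * (n // 2) + [1] * (n // 2))

# Gaussian (RBF) kernel PCA; gamma = 1/d matches the usual 1/n_features heuristic.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0 / d)
Z = kpca.fit_transform(X)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
acc = max((labels == labels_true).mean(), (labels != labels_true).mean())
print(acc)
```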
Published in:
Journal of Statistical Planning and Inference. 202:99-111
We consider the equality test of high-dimensional covariance matrices under the strongly spiked eigenvalue (SSE) model. We find the difference between covariance matrices by dividing the high-dimensional eigenspace into the first eigenspace and the others. …
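The idea of treating the first eigenspace separately from the rest can be illustrated naively: estimate the leading eigenvalue and eigenvector of each sample covariance and compare that part and the remainder separately. The sketch below, with assumed sizes and a single assumed spike, shows only this decomposition and is not the test statistic developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 500, 30

def split_spectrum(S):
    """Split a covariance matrix into its first eigencomponent and the rest."""
    vals, vecs = np.linalg.eigh(S)
    lead_val, lead_vec = vals[-1], vecs[:, -1]           # strongly spiked part
    rest = S - lead_val * np.outer(lead_vec, lead_vec)   # remaining spectrum
    return lead_val, lead_vec, rest

# Two samples sharing one strong spike along the first coordinate (assumed model).
spike = np.zeros(d)
spike[0] = np.sqrt(200.0)
X1 = rng.standard_normal((n, d)) + np.outer(rng.standard_normal(n), spike)
X2 = rng.standard_normal((n, d)) + np.outer(rng.standard_normal(n), spike)

l1, v1, R1 = split_spectrum(np.cov(X1, rowvar=False))
l2, v2, R2 = split_spectrum(np.cov(X2, rowvar=False))

# Compare the leading eigenspaces and the residual parts separately.
print(abs(v1 @ v2))                     # close to 1 if leading directions agree
print(np.linalg.norm(R1 - R2, "fro"))   # discrepancy outside the first eigenspace
```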