Showing 1 - 10
of 65
for search: '"Grey Ballard"'
Author:
Hussam Al Daas, Grey Ballard, Paul Cazeaux, Eric Hallman, Agnieszka Międlar, Mirjeta Pasha, Tim W. Reid, Arvind K. Saibaba
Published in:
SIAM Journal on Scientific Computing. 45:A74-A95
The Tensor-Train (TT) format is a highly compact low-rank representation for high-dimensional tensors. TT is particularly useful when representing approximations to the solutions of certain types of parametrized partial differential equations. For ma…
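The TT format mentioned in this abstract can be illustrated with the standard TT-SVD construction, which compresses a d-way tensor into a "train" of 3-way cores via successive truncated SVDs. The sketch below is a minimal NumPy illustration of that general idea, not the paper's method; the function names and the toy tensor are made up for the example.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Sketch of TT-SVD: factor a d-way tensor into a train of
    3-way cores using one truncated SVD per mode boundary."""
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Fold the remaining factor for the next mode's SVD.
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor (for checking)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=1)
    return full.reshape([c.shape[1] for c in cores])

T = np.random.default_rng(0).standard_normal((4, 5, 6))
cores = tt_svd(T, max_rank=6)  # ranks large enough for an exact fit here
assert np.allclose(tt_to_full(cores), T)
```

With `max_rank` capped below the exact TT ranks, the same routine yields a compressed low-rank approximation instead of an exact factorization.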
Published in:
Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures
SPAA '23-ACM Symposium on Parallelism in Algorithms and Architectures
SPAA '23-ACM Symposium on Parallelism in Algorithms and Architectures, Jun 2023, Orlando, United States. ⟨10.1145/3558481.3591072⟩
International audience; In this paper, we focus on the parallel communication cost of multiplying a matrix with its transpose, known as a symmetric rank-k update (SYRK). SYRK requires half the computation of general matrix multiplication because of t…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6ebc6955e7f8dba293cb3ba31ce5caf2
https://inria.hal.science/hal-04076513
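The SYRK operation this abstract refers to is easy to demonstrate in NumPy. The sketch below only illustrates the operation and its symmetry (the source of the factor-of-two flop savings); it is not the paper's parallel algorithm, and the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))

# Symmetric rank-k update: G = A @ A.T. The result is symmetric,
# so only one triangle needs computing and storing, which is why
# SYRK costs roughly half the flops of a general matrix product
# of the same dimensions.
G = A @ A.T

assert np.allclose(G, G.T)   # symmetry holds
assert G.shape == (100, 100)
```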
Published in:
Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures.
Author:
Ramakrishnan Kannan, Michael A. Matheson, Koby Hayashi, Grey Ballard, Haesun Park, Srinivas Eswar
Published in:
ACM Transactions on Mathematical Software. 47:1-37
We consider the problem of low-rank approximation of massive dense nonnegative tensor data, for example, to discover latent patterns in video and imaging applications. As the size of data sets grows, single workstations are hitting bottlenecks in bot…
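The nonnegative low-rank approximation problem described here has a simple matrix analogue: nonnegative matrix factorization via Lee-Seung multiplicative updates. The sketch below is that matrix analogue only, with made-up sizes and a loose convergence check; the paper itself concerns distributed nonnegative *tensor* factorization.

```python
import numpy as np

def nmf_mu(X, r, iters=200, eps=1e-9):
    """Sketch of NMF with multiplicative updates: find nonnegative
    W (m x r) and H (r x n) so that W @ H approximates X >= 0."""
    rng = np.random.default_rng(2)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # Updates keep entries nonnegative and reduce ||X - WH||_F.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-2 nonnegative data should be fit closely.
rng = np.random.default_rng(3)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf_mu(X, r=2)
assert (W >= 0).all() and (H >= 0).all()
assert np.linalg.norm(X - W @ H) / np.linalg.norm(X) < 0.5
```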
Published in:
2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS).
Author:
Grey Ballard, Sarah Parsons
Published in:
2021 IEEE/ACM Ninth Workshop on Education for High Performance Computing (EduHPC).
Published in:
ICPP Workshops
Matrix multiplication is one of the bottleneck computations for training the weights within deep neural networks. To speed up the training phase, we propose to use faster algorithms for matrix multiplication known as Arbitrary Precision Approximating…
Published in:
ICPP
Tucker decomposition is a low-rank tensor approximation that generalizes a truncated matrix singular value decomposition (SVD). Existing parallel software has shown that Tucker decomposition is particularly effective at compressing terabyte-sized mul…
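The Tucker decomposition named in this abstract can be computed, in its simplest form, by the truncated higher-order SVD (HOSVD): one truncated SVD per mode gives the factor matrices, and contracting them against the tensor gives the core. The NumPy sketch below illustrates that textbook construction for a 3-way tensor, not the paper's parallel implementation; all names and sizes are invented for the example.

```python
import numpy as np

def hosvd3(T, ranks):
    """Sketch of truncated HOSVD for a 3-way tensor: returns a Tucker
    core plus one orthonormal factor matrix per mode."""
    Us = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: bring mode k to the front, flatten the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        Us.append(U[:, :r])
    # Core = T contracted with each factor's transpose.
    core = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
    return core, Us

T = np.random.default_rng(1).standard_normal((4, 5, 6))
core, (U1, U2, U3) = hosvd3(T, (4, 5, 6))   # full ranks: exact
approx = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
assert np.allclose(approx, T)
```

Choosing `ranks` smaller than the mode sizes produces a compressed approximation, which is the compression use case the abstract describes.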
Published in:
Journal of Pure and Applied Algebra. 223:3205-3224
This is the second in a series of papers on rank decompositions of the matrix multiplication tensor. We present new rank-23 decompositions for the 3 × 3 matrix multiplication tensor M⟨3⟩. All our decompositions have symmetry groups that incl…
Published in:
Brain Connectivity. 9:95-112
There is a growing interest in using so-called dynamic functional connectivity, as the conventional static brain connectivity models are being questioned. Brain network analyses yield complex network data that are difficult to analyze and interpret.