Showing 1 - 10 of 84 for search: '"Ganahl, Martin"'
Author:
Menczer, Andor, van Damme, Maarten, Rask, Alan, Huntington, Lee, Hammond, Jeff, Xantheas, Sotiris S., Ganahl, Martin, Legeza, Örs
We report cutting-edge performance results for a hybrid CPU-multiGPU implementation of the spin-adapted ab initio Density Matrix Renormalization Group (DMRG) method on current state-of-the-art NVIDIA DGX-H100 architectures. We evaluate the performance …
External link:
http://arxiv.org/abs/2407.07411
Author:
Ganahl, Martin, Beall, Jackson, Hauru, Markus, Lewis, Adam G. M., Yoo, Jae Hyeon, Zou, Yijian, Vidal, Guifre
Google's Tensor Processing Units (TPUs) are integrated circuits specifically built to accelerate and scale up machine learning workloads. They can perform fast distributed matrix multiplications and can therefore be repurposed for other computationally intensive …
External link:
http://arxiv.org/abs/2204.05693
Author:
Shillito, Ross, Petrescu, Alexandru, Cohen, Joachim, Beall, Jackson, Hauru, Markus, Ganahl, Martin, Lewis, Adam G. M., Vidal, Guifre, Blais, Alexandre
Qubit measurement and control in circuit QED rely on microwave drives, with higher drive amplitudes ideally leading to faster processes. However, degradation in qubit coherence time and readout fidelity has been observed even under moderate drive amplitudes …
External link:
http://arxiv.org/abs/2203.11235
Author:
Pederson, Ryan, Kozlowski, John, Song, Ruyi, Beall, Jackson, Ganahl, Martin, Hauru, Markus, Lewis, Adam G. M., Yao, Yi, Mallick, Shrestha Basu, Blum, Volker, Vidal, Guifre
We demonstrate the use of Google's cloud-based Tensor Processing Units (TPUs) to accelerate and scale up conventional (cubic-scaling) density functional theory (DFT) calculations. Utilizing 512 TPU cores, we accomplish the largest such DFT computation …
External link:
http://arxiv.org/abs/2202.01255
Author:
Lewis, Adam G. M., Beall, Jackson, Ganahl, Martin, Hauru, Markus, Mallick, Shrestha Basu, Vidal, Guifre
We have repurposed Google Tensor Processing Units (TPUs), application-specific chips developed for machine learning, into large-scale dense linear algebra supercomputers. The TPUs' fast inter-core interconnects (ICIs), physically two-dimensional networks …
External link:
http://arxiv.org/abs/2112.09017
Tensor Processing Units (TPUs) were developed by Google exclusively to support large-scale machine learning tasks. TPUs can, however, also be used to accelerate and scale up other computationally demanding tasks. In this paper we repurpose TPUs for t…
External link:
http://arxiv.org/abs/2111.10466
Author:
Morningstar, Alan, Hauru, Markus, Beall, Jackson, Ganahl, Martin, Lewis, Adam G. M., Khemani, Vedika, Vidal, Guifre
Published in:
PRX Quantum 3, 020331 (2022)
Tensor Processing Units (TPUs) are specialized hardware accelerators developed by Google to support large-scale machine-learning tasks, but they can also be leveraged to accelerate and scale other linear-algebra-intensive computations. In this paper …
External link:
http://arxiv.org/abs/2111.08044
Author:
Gustafson, Erik, Holzman, Burt, Kowalkowski, James, Lamm, Henry, Li, Andy C. Y., Perdue, Gabriel, Boixo, Sergio, Isakov, Sergei, Martin, Orion, Thomson, Ross, Heidweiller, Catherine Vollgraff, Beall, Jackson, Ganahl, Martin, Vidal, Guifre, Peters, Evan
Simulating quantum field theories on a quantum computer is one of the most exciting fundamental physics applications of quantum information science. Dynamical time evolution of quantum fields is a challenge that is beyond the capabilities of classical …
External link:
http://arxiv.org/abs/2110.07482
Author:
Hibat-Allah, Mohamed, Ganahl, Martin, Hayward, Lauren E., Melko, Roger G., Carrasquilla, Juan
Published in:
Phys. Rev. Research 2, 023358 (2020)
A core technology that has emerged from the artificial intelligence revolution is the recurrent neural network (RNN). Its unique sequence-based architecture provides a tractable likelihood estimate with stable training paradigms, a combination that h…
External link:
http://arxiv.org/abs/2002.02973
We use TensorNetwork [C. Roberts et al., arXiv:1905.01330], a recently developed API for performing tensor network contractions using accelerated backends such as TensorFlow, to implement an optimization algorithm for the Multi-scale Entanglement Renormalization Ansatz (MERA) …
External link:
http://arxiv.org/abs/1906.12030