Showing 1 - 10 of 32 for search: '"Hauru, Markus"'
Author:
Ganahl, Martin, Beall, Jackson, Hauru, Markus, Lewis, Adam G. M., Yoo, Jae Hyeon, Zou, Yijian, Vidal, Guifre
Google's Tensor Processing Units (TPUs) are integrated circuits specifically built to accelerate and scale up machine learning workloads. They can perform fast distributed matrix multiplications and therefore be repurposed for other computationally …
External link:
http://arxiv.org/abs/2204.05693
Author:
Shillito, Ross, Petrescu, Alexandru, Cohen, Joachim, Beall, Jackson, Hauru, Markus, Ganahl, Martin, Lewis, Adam G. M., Vidal, Guifre, Blais, Alexandre
Qubit measurement and control in circuit QED rely on microwave drives, with higher drive amplitudes ideally leading to faster processes. However, degradation in qubit coherence time and readout fidelity has been observed even under moderate drive …
External link:
http://arxiv.org/abs/2203.11235
Author:
Pederson, Ryan, Kozlowski, John, Song, Ruyi, Beall, Jackson, Ganahl, Martin, Hauru, Markus, Lewis, Adam G. M., Yao, Yi, Mallick, Shrestha Basu, Blum, Volker, Vidal, Guifre
We demonstrate the use of Google's cloud-based Tensor Processing Units (TPUs) to accelerate and scale up conventional (cubic-scaling) density functional theory (DFT) calculations. Utilizing 512 TPU cores, we accomplish the largest such DFT …
External link:
http://arxiv.org/abs/2202.01255
Author:
Lewis, Adam G. M., Beall, Jackson, Ganahl, Martin, Hauru, Markus, Mallick, Shrestha Basu, Vidal, Guifre
We have repurposed Google Tensor Processing Units (TPUs), application-specific chips developed for machine learning, into large-scale dense linear algebra supercomputers. The TPUs' fast inter-core interconnects (ICIs), physically two-dimensional …
External link:
http://arxiv.org/abs/2112.09017
Tensor Processing Units (TPUs) were developed by Google exclusively to support large-scale machine learning tasks. TPUs can, however, also be used to accelerate and scale up other computationally demanding tasks. In this paper we repurpose TPUs for …
External link:
http://arxiv.org/abs/2111.10466
Author:
Morningstar, Alan, Hauru, Markus, Beall, Jackson, Ganahl, Martin, Lewis, Adam G. M., Khemani, Vedika, Vidal, Guifre
Published in:
PRX Quantum 3, 020331 (2022)
Tensor Processing Units (TPUs) are specialized hardware accelerators developed by Google to support large-scale machine-learning tasks, but they can also be leveraged to accelerate and scale other linear-algebra-intensive computations. In this paper …
External link:
http://arxiv.org/abs/2111.08044
Published in:
SciPost Phys. 10, 040 (2021)
Several tensor networks are built of isometric tensors, i.e., tensors satisfying $W^\dagger W = \mathrm{I}$. Prominent examples include matrix product states (MPS) in canonical form, the multiscale entanglement renormalization ansatz (MERA), and …
External link:
http://arxiv.org/abs/2007.03638
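As an aside on the isometry condition quoted in the abstract above, the following minimal NumPy sketch (illustrative only, not code from the paper; shapes and names are arbitrary assumptions) checks $W^\dagger W = \mathrm{I}$ for an isometry obtained from a QR decomposition:

```python
import numpy as np

# Illustrative sketch: verify the isometry condition W^dagger W = I
# that holds for the constituent tensors of, e.g., MPS in canonical
# form or MERA. The 4x2 shape here is an arbitrary assumption.

rng = np.random.default_rng(0)

# QR decomposition of a random 4x2 matrix yields W with orthonormal
# columns, i.e. an isometry from a 2-dimensional space into a
# 4-dimensional one.
A = rng.standard_normal((4, 2))
W, _ = np.linalg.qr(A)

# Isometry condition: W^dagger W equals the 2x2 identity, while
# W W^dagger is only a rank-2 projector, not the 4x4 identity.
print(np.allclose(W.conj().T @ W, np.eye(2)))  # prints True
```

Note the asymmetry: a non-square isometry is a one-sided inverse only, which is what distinguishes it from a unitary.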
Author:
Van Acoleyen, Karel, Hallam, Andrew, Bal, Matthias, Hauru, Markus, Haegeman, Jutho, Verstraete, Frank
Published in:
Phys. Rev. B 102, 165131 (2020)
The multiscale entanglement renormalization ansatz (MERA) provides a constructive algorithm for realizing wavefunctions that are inherently scale invariant. Unlike conformally invariant partition functions, however, the finite bond dimension $\chi$ of …
External link:
http://arxiv.org/abs/1912.10572
Author:
Hauru, Markus, Vidal, Guifre
Published in:
Phys. Rev. A 98, 042316 (2018)
Given two states $|\psi\rangle$ and $|\phi\rangle$ of a quantum many-body system, one may use the overlap or fidelity $|\langle\psi|\phi\rangle|$ to quantify how similar they are. To further resolve the similarity of $|\psi\rangle$ and $|\phi\rangle$ …
External link:
http://arxiv.org/abs/1807.01640
Published in:
Phys. Rev. B 97, 045111 (2018)
We introduce an efficient algorithm for reducing bond dimensions in an arbitrary tensor network without changing its geometry. The method is based on a novel, quantitative understanding of local correlations in a network. Together with a tensor …
External link:
http://arxiv.org/abs/1709.07460