Showing 1 - 10 of 1,027 for search: '"Smith, James E."'
Author:
Smith, James E.
An agent employing reinforcement learning takes inputs (state variables) from an environment and performs actions that affect the environment in order to achieve some objective. Rewards (positive or negative) guide the agent toward improved future ac…
External link:
http://arxiv.org/abs/2402.18472
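The agent/environment loop summarized in this abstract can be sketched minimally. The chain environment, reward scheme, and tabular Q-learning rule below are illustrative assumptions for showing how rewards guide future actions, not the method proposed in the paper.

```python
import random

# Minimal sketch of the reward-guided agent/environment loop described
# above: tabular Q-learning on a hypothetical 5-state chain. The
# environment, reward scheme, and learning rule are illustrative
# assumptions, not the method proposed in the paper.

N_STATES = 5                       # states 0..4; state 4 is the goal
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Environment: action 0 moves left, action 1 moves right."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # positive reward at goal
    return nxt, reward, nxt == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)

def act(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if rng.random() < EPS or Q[s][0] == Q[s][1]:
        return rng.randrange(2)
    return 1 if Q[s][1] > Q[s][0] else 0

for episode in range(200):
    s, done = 0, False
    while not done:
        a = act(s)
        s2, r, done = step(s, a)
        # The reward signal adjusts the value estimates, and hence
        # future action choices, toward the objective.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: typically moves right in every
# non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```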
Author:
Smith, James E.
The macrocolumn is a key component of a neuromorphic computing system that interacts with an external environment under control of an agent. Environments are learned and stored in the macrocolumn as labeled directed graphs where edges connect feature…
External link:
http://arxiv.org/abs/2207.05081
Author:
Smith, James E.
A Temporal Neural Network (TNN) architecture for implementing efficient online reinforcement learning is proposed and studied via simulation. The proposed T-learning system is composed of a frontend TNN that implements online unsupervised clustering…
External link:
http://arxiv.org/abs/2204.05437
Author:
Greene, Samuel M., Webber, Robert J., Smith, James E. T., Weare, Jonathan, Berkelbach, Timothy C.
We present a stable and systematically improvable quantum Monte Carlo (QMC) approach to calculating excited-state energies, which we implement using our fast randomized iteration method for the full configuration interaction problem (FCI-FRI). Unlike…
External link:
http://arxiv.org/abs/2201.12164
Author:
Smith, James E.
This document is focused on computing systems implemented in technologies that communicate and compute with temporal transients. Although described in general terms, implementations of spiking neural networks are of primary interest. As background, a…
External link:
http://arxiv.org/abs/2201.07742
In this paper, we study the nuclear gradients of heat bath configuration interaction self-consistent field (HCISCF) wave functions and use them to optimize molecular geometries for various molecules. We show that the HCISCF nuclear gradients are fair…
External link:
http://arxiv.org/abs/2201.06514
Published in:
2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2021, pp. 266-271
Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information, similar to the mammalian neocortex. In contrast to compute-intensive deep neural networks that employ separate training and…
External link:
http://arxiv.org/abs/2105.13262
Author:
Smith, James E.
A long-standing proposition is that by emulating the operation of the brain's neocortex, a spiking neural network (SNN) can achieve similar desirable features: flexible learning, speed, and efficiency. Temporal neural networks (TNNs) are SNNs that co…
External link:
http://arxiv.org/abs/2011.13844