Showing 1 - 10 of 128 results for the search: '"McCane Brendan"'
We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs. Treating each tangent space as a linear PAC concept, we use an entropy-based measure of the bundle of conce… (a rough code sketch of this construction follows below)
External link:
http://arxiv.org/abs/2103.07614
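The snippet above is cut off, but the construction it describes, tangent spaces sampled at different inputs and summarised by an entropy-like diversity score, can be illustrated roughly as follows. This is a minimal sketch only: the toy MLP, the Gaussian test inputs, and the eigenvalue-entropy summary of the Jacobian Gram matrix are assumptions for illustration, not the measure defined in the paper.

```python
# Illustrative sketch: estimate how diverse the tangent spaces
# (input-output Jacobians) of a network are across sampled inputs.
import torch

def tangent_space_diversity(net, inputs):
    # One flattened Jacobian (tangent map) per input.
    rows = []
    for x in inputs:
        J = torch.autograd.functional.jacobian(net, x.unsqueeze(0))
        rows.append(J.reshape(-1))
    M = torch.stack(rows)
    # The spectrum of the Gram matrix says how spread out the rows are.
    gram = M @ M.T
    eigvals = torch.linalg.eigvalsh(gram).clamp(min=1e-12)
    p = eigvals / eigvals.sum()          # normalise to a distribution
    return -(p * p.log()).sum()          # entropy of the spectrum

net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
xs = torch.randn(16, 8)
print(tangent_space_diversity(net, xs))
```

In this toy reading, a network whose Jacobians are nearly identical across inputs yields low entropy, while one whose tangent spaces vary strongly from input to input yields high entropy.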
We introduce a deep recursive octree network for the compression of 3D voxel data. Our network compresses a voxel grid of any size down to a very small latent space in an autoencoder-like network. We show results for compressing 32, 64 and 128 grids… (a toy stand-in sketch follows below)
External link:
http://arxiv.org/abs/2008.03875
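As a rough illustration of the setting above only, here is a plain 3D convolutional autoencoder that squeezes a 32x32x32 occupancy grid into a small latent vector. It is not the recursive octree architecture of the paper; the layer sizes and latent dimension are arbitrary assumptions.

```python
# Illustrative stand-in: a plain 3D convolutional autoencoder for a
# 32x32x32 occupancy grid, showing the autoencoder-style compression
# the entry describes (not the paper's recursive octree network).
import torch
import torch.nn as nn

class VoxelAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),                                        # occupancy probability
        )

    def forward(self, voxels):
        return self.decoder(self.encoder(voxels))

model = VoxelAutoencoder()
grid = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()   # random occupancy grids
recon = model(grid)
loss = nn.functional.binary_cross_entropy(recon, grid)
print(recon.shape, loss.item())
```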
We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn. We propose a counter-intuitive solution that we call Mutual Inform… (a sketch of the surprisal baseline follows below)
External link:
http://arxiv.org/abs/2001.05636
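The abstract above is truncated before it explains the proposed fix, but the surprisal signal it starts from is commonly taken to be the negative log-likelihood of the observed next state under a learned dynamics model. The sketch below shows only that baseline quantity; the diagonal-Gaussian dynamics model and the tensor shapes are illustrative assumptions, and the paper's remedy is not reproduced here.

```python
# Illustrative sketch: surprisal as an intrinsic reward, i.e. the negative
# log-likelihood of the next state under a learned dynamics model.
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Predicts a diagonal Gaussian over the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * state_dim),   # mean and log-variance
        )

    def surprisal(self, state, action, next_state):
        mean, log_var = self.body(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        # Negative log-likelihood of the observed next state (up to a constant).
        return 0.5 * (((next_state - mean) ** 2) / log_var.exp() + log_var).sum(dim=-1)

model = GaussianDynamics(state_dim=4, action_dim=2)
s, a, s_next = torch.randn(8, 4), torch.randn(8, 2), torch.randn(8, 4)
intrinsic_reward = model.surprisal(s, a, s_next).detach()   # large at hard-to-predict transitions
print(intrinsic_reward)
```

Transitions the model cannot learn to predict keep this reward permanently high, which is the failure mode the abstract describes at abrupt environmental boundaries.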
Author:
Lyons Brett, Herbison Peter, Shultz Barry, McCane Brendan, Fritz Julie M, Abbott J Haxby, Stefanko Georgia, Walsh Richard M
Published in:
BMC Musculoskeletal Disorders, Vol 7, Iss 1, p 45 (2006)
Abstract: Background: Lumbar segmental rigidity (LSR) and lumbar segmental instability (LSI) are believed to be associated with low back pain (LBP), and identification of these disorders is believed to be useful for directing intervention choices. Prev…
External link:
https://doaj.org/article/ec347a2eff7243c5a707192a2485071f
Published in:
BMC Musculoskeletal Disorders, Vol 6, Iss 1, p 56 (2005)
Abstract: Background: Musculoskeletal physiotherapists routinely assess lumbar segmental motion during the clinical examination of a patient with low back pain. The validity of manual assessment of segmental motion has not, however, been adequately inv…
External link:
https://doaj.org/article/f97754bfcd034c9faab60c9acfe70b03
Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform in earlier tasks. Preventing forgetting is achieved by introducing a generative network which can produce data from previously seen tasks so that i… (a sketch of one rehearsal step follows below)
External link:
http://arxiv.org/abs/1911.11988
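A minimal sketch of the general pseudo-rehearsal idea described above, mixing generated pseudo-items labelled by a frozen copy of the previous model into training on the new task, might look like the following. The toy classifier, the generator, and the unweighted loss sum are assumptions for illustration rather than the paper's exact scheme.

```python
# Illustrative sketch: one pseudo-rehearsal training step that interleaves
# generated "memories" of earlier tasks with the new task's data.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, INPUT, CLASSES = 8, 16, 5

classifier = nn.Sequential(nn.Linear(INPUT, 32), nn.ReLU(), nn.Linear(32, CLASSES))
old_classifier = nn.Sequential(nn.Linear(INPUT, 32), nn.ReLU(), nn.Linear(32, CLASSES))
old_classifier.load_state_dict(classifier.state_dict())   # frozen snapshot of the previous-task model
generator = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, INPUT))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def pseudo_rehearsal_step(new_x, new_y, n_rehearsal=32):
    # Generate pseudo-items and label them with the frozen previous model,
    # so the current model is pulled back towards its old behaviour.
    with torch.no_grad():
        pseudo_x = generator(torch.randn(n_rehearsal, LATENT))
        pseudo_y = old_classifier(pseudo_x)

    optimizer.zero_grad()
    new_task_loss = F.cross_entropy(classifier(new_x), new_y)
    rehearsal_loss = F.mse_loss(classifier(pseudo_x), pseudo_y)  # stay close to old outputs
    loss = new_task_loss + rehearsal_loss
    loss.backward()
    optimizer.step()
    return loss.item()

print(pseudo_rehearsal_step(torch.randn(16, INPUT), torch.randint(0, CLASSES, (16,))))
```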
Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL). Recently, surprise has been used as an intrinsic reward that encourages systematic and efficient exploration. We introduce …
External link:
http://arxiv.org/abs/1910.14351
We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space. The method works by isolating the active subnetwork, a series of linear transformatio… (a sketch of this observation follows below)
External link:
http://arxiv.org/abs/1909.11275
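The observation behind the entry above, that a ReLU network restricted to the activation pattern produced by one input is a single linear map, can be checked directly. The sketch below reads the effective projection off with one gradient and cross-checks it against the masked layer weights of the active subnetwork; the toy network and the choice of neuron are assumptions, and this is not the paper's full switched-linear-projection machinery.

```python
# Illustrative sketch: on the linear region selected by an input, a ReLU
# network's neuron is exactly a linear projection of that input.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(6, 12), nn.ReLU(),
    nn.Linear(12, 12), nn.ReLU(),
    nn.Linear(12, 3),
)

x = torch.randn(1, 6, requires_grad=True)
neuron = 0                                  # inspect the first output neuron
y = net(x)[0, neuron]

# Effective weights of the active subnetwork drop out of a single gradient.
(w_eff,) = torch.autograd.grad(y, x)

# Cross-check: compose the layer weights with inactive rows zeroed out.
with torch.no_grad():
    h1 = net[0](x)
    h2 = net[2](net[1](h1))
    m1, m2 = (h1 > 0).float(), (h2 > 0).float()
    W_active = net[4].weight @ (m2.T * net[2].weight) @ (m1.T * net[0].weight)

print(torch.allclose(w_eff, W_active[neuron:neuron + 1], atol=1e-5))
```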
Any generic deep machine learning algorithm is essentially a function fitting exercise, where the network tunes its weights and parameters to learn discriminatory features by minimizing some cost function. Though the network tries to learn the optima…
External link:
http://arxiv.org/abs/1905.01168
We present a conditional probabilistic framework for collaborative representation of image patches. It incorporates background compensation and outlier patch suppression into the main formulation itself, thus doing away with the need for pre-processi…
External link:
http://arxiv.org/abs/1903.09123