Showing 1 - 10 of 40 results for the search: '"Banerjee, Shilpak"'
In both smooth and analytic categories, we construct examples of diffeomorphisms of topological entropy zero with intricate ergodic properties. On any smooth compact connected manifold of dimension 2 admitting a nontrivial circle action, we construct …
External link:
http://arxiv.org/abs/2412.21041
The development of classical ergodic theory has had a significant impact on mathematics, physics, and the applied sciences in general. The quantum ergodic theory of Hamiltonian dynamics is motivated by the effort to understand thermodynamics and …
External link:
http://arxiv.org/abs/2310.02740
Deep learning researchers have a keen interest in proposing novel activation functions that can boost network performance. A good choice of activation function can have significant consequences for network performance. A handcrafted …
External link:
http://arxiv.org/abs/2111.04682
Well-known activation functions like ReLU or Leaky ReLU are non-differentiable at the origin. Over the years, many smooth approximations of ReLU have been proposed using various smoothing techniques. We propose new smooth approximations of a non-differentiable …
External link:
http://arxiv.org/abs/2109.13210
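As a concrete illustration of such smoothing, the classic softplus function gives an everywhere-differentiable approximation of ReLU. This is a standard textbook construction, not necessarily the approximation proposed in the paper above:

```python
import numpy as np

def relu(x):
    # ReLU is continuous but non-differentiable at x = 0
    return np.maximum(0.0, x)

def softplus(x, beta=1.0):
    # Smooth approximation: (1/beta) * log(1 + exp(beta * x)).
    # It is infinitely differentiable everywhere and converges
    # pointwise to ReLU as beta -> infinity.
    return np.log1p(np.exp(beta * x)) / beta

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))
print(softplus(x, beta=10.0))  # nearly identical to ReLU away from 0
```

Increasing `beta` trades differentiability margin around the origin for closeness to the original kink.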
The Anosov-Katok method is one of the most powerful tools for constructing smooth volume-preserving diffeomorphisms of entropy zero with prescribed ergodic or topological properties. To measure the complexity of systems with entropy zero, invariants like …
External link:
http://arxiv.org/abs/2109.08602
An activation function is a crucial component of a neural network that introduces non-linearity into the network. The state-of-the-art performance of a neural network also depends on a careful choice of activation function. We propose two novel …
External link:
http://arxiv.org/abs/2109.04386
We propose orthogonal-Padé activation functions, which are trainable activation functions, and show that they learn faster and improve accuracy on standard deep learning datasets and models. Based on our experiments, we …
External link:
http://arxiv.org/abs/2106.09693
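The idea of a trainable activation can be sketched with a generic rational (Padé-style) function whose coefficients are learned together with the network weights. The class below is a hypothetical illustration only, not the orthogonal-Padé construction from the paper, which uses orthogonal polynomial bases rather than raw monomials:

```python
import numpy as np

class RationalActivation:
    """Padé-style trainable activation f(x) = P(x) / (1 + |Q(x)|).

    The polynomial coefficients are parameters, updated by gradient
    descent alongside the layer weights. Illustrative sketch only.
    """

    def __init__(self):
        # Initialise so that f starts out as the identity map
        self.a = np.array([0.0, 1.0, 0.0])  # numerator: a0 + a1*x + a2*x^2
        self.b = np.array([0.0, 0.0])       # denominator: 1 + |b1*x + b2*x^2|

    def __call__(self, x):
        num = self.a[0] + self.a[1] * x + self.a[2] * x**2
        den = 1.0 + np.abs(self.b[0] * x + self.b[1] * x**2)
        return num / den
```

Keeping the denominator in the form `1 + |Q(x)|` guarantees it never vanishes, so the activation is defined for all inputs.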
Measure-theoretic slow entropy is a more refined invariant than classical measure-theoretic entropy for characterizing the complexity of dynamical systems with subexponential growth rates of distinguishable orbit types. In this paper we prove flexibility …
External link:
http://arxiv.org/abs/2010.14472
Activation functions play a pivotal role in function learning with neural networks. The non-linearity in the learned function is achieved by repeated application of the activation function. Over the years, numerous activation functions have been proposed …
External link:
http://arxiv.org/abs/2009.13501
Deep learning, at its core, consists of functions that are compositions of a linear transformation with a non-linear function known as an activation function. In the past few years, there has been increasing interest in the construction of novel activation functions …
External link:
http://arxiv.org/abs/2009.03863
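The "composition" view in the last snippet can be made explicit in a few lines: a forward pass alternates affine maps with a pointwise non-linearity. The choice of `tanh` below is purely for illustration:

```python
import numpy as np

def forward(x, layers, act=np.tanh):
    # layers: list of (W, b) pairs. Each hidden layer applies an
    # affine map W @ h + b followed by the pointwise activation;
    # the final layer is left linear, as is conventional.
    h = x
    for W, b in layers[:-1]:
        h = act(W @ h + b)
    W, b = layers[-1]
    return W @ h + b
```

With identity weights and zero biases the network reduces to the activation itself, which makes the role of the non-linearity easy to see: without `act`, the whole composition would collapse to a single linear map.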