Showing 1 - 10 of 46 results for search: '"Banburski, Andrzej"'
We show that a class of matrix theories can be understood as an extension of quantum field theory with non-local interactions. This reformulation is based on the Wigner-Weyl transformation, and the interactions take the form of a Moyal product on …
External link: http://arxiv.org/abs/2206.13458
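The Moyal product referred to above is the standard star product of deformation quantization; as a reminder (the textbook definition, not a formula quoted from the paper), for an antisymmetric non-commutativity parameter $\theta^{\mu\nu}$:

$$(f \star g)(x) \;=\; f(x)\,\exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_\mu\,\theta^{\mu\nu}\,\overrightarrow{\partial}_\nu\Big)\,g(x) \;=\; f(x)g(x) \;+\; \tfrac{i}{2}\,\theta^{\mu\nu}\,\partial_\mu f(x)\,\partial_\nu g(x) \;+\; \mathcal{O}(\theta^2),$$

an infinite-derivative expansion, which is what makes interactions built from $\star$ non-local.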
Authors: Alford, Simon, Gandhi, Anshula, Rangamani, Akshay, Banburski, Andrzej, Wang, Tony, Dandekar, Sylee, Chin, John, Poggio, Tomaso, Chin, Peter
One of the challenges facing artificial intelligence research today is designing systems capable of utilizing systematic reasoning to generalize to new tasks. The Abstraction and Reasoning Corpus (ARC) measures such a capability through a set of visual …
External link: http://arxiv.org/abs/2110.11536
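For context on the benchmark: an ARC task supplies a few input/output grid pairs from which a hidden transformation must be inferred and applied to a test grid. A toy sketch of that format (the grids and the rule here are invented for illustration; this is not the paper's method):

    import numpy as np

    # Each ARC-style example pairs an input grid with an output grid; grids
    # are small 2D arrays of colour indices 0-9. These pairs are made up for
    # illustration; the hidden rule here is "reflect the grid left-right".
    train_pairs = [
        (np.array([[1, 0], [0, 2]]), np.array([[0, 1], [2, 0]])),
        (np.array([[3, 3, 0]]),      np.array([[0, 3, 3]])),
    ]

    def candidate_rule(grid):
        # Hypothesised transformation: horizontal reflection.
        return grid[:, ::-1]

    # A candidate is accepted only if it reproduces every training output.
    assert all(np.array_equal(candidate_rule(x), y) for x, y in train_pairs)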
Recent theoretical results show that gradient descent on deep neural networks under exponential loss functions locally maximizes classification margin, which is equivalent to minimizing the norm of the weight matrices under margin constraints. …
External link: http://arxiv.org/abs/2107.10199
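The equivalence invoked in the abstract has a standard form for a classifier $f_W$ that is positively homogeneous in its weights (a textbook restatement, not a quotation from the paper): the problems

$$\max_{\|W\|=1}\ \min_i\, y_i f_W(x_i) \qquad\text{and}\qquad \min_W\ \|W\| \ \text{ s.t. }\ y_i f_W(x_i) \ge 1\ \ \forall i$$

share the same optimal weight direction, since rescaling $W$ rescales the margin.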
A convolutional neural network strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream seems to be robust to small perturbations in visual stimuli …
External link: http://arxiv.org/abs/2006.16427
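As a concrete instance of the adversarial perturbations at issue, a minimal fast-gradient-sign (FGSM) sketch in PyTorch; this is the generic attack used to probe robustness, not the biologically inspired mechanism studied in the paper:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps):
        # Generic one-step L-infinity attack: shift each input entry by
        # +/- eps in the direction that increases the classification loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    # Robustness is then reported as accuracy on fgsm_perturb(model, x, y, eps)
    # rather than on the clean inputs x.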
The main success stories of deep learning, starting with ImageNet, depend on deep convolutional networks, which on certain tasks perform significantly better than traditional shallow classifiers, such as support vector machines, and also better than …
External link: http://arxiv.org/abs/2006.13915
In solving a system of $n$ linear equations in $d$ variables, $Ax=b$, the condition number of the $n \times d$ matrix $A$ measures how much errors in the data $b$ affect the solution $x$. Estimates of this type are important in many inverse problems. …
External link: http://arxiv.org/abs/1912.06190
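The estimate alluded to is the classical perturbation bound for linear systems: if the data are perturbed to $b + \delta b$, then for invertible $A$ (with $A^{-1}$ replaced by the pseudoinverse $A^{+}$ in the rectangular least-squares case)

$$\frac{\|\delta x\|}{\|x\|} \;\le\; \kappa(A)\,\frac{\|\delta b\|}{\|b\|}, \qquad \kappa(A) \;=\; \|A\|\,\|A^{-1}\| \;=\; \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} \ \ \text{(2-norm)},$$

so, for example, $\kappa(A) = 10^{6}$ can turn a relative data error of $10^{-8}$ into a relative solution error of $10^{-2}$.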
While deep learning is successful in a number of applications, it is not yet well understood theoretically. A satisfactory theoretical characterization of deep learning, however, is beginning to emerge. It covers the following questions: 1) representation …
External link: http://arxiv.org/abs/1908.09375
Authors: Banburski, Andrzej, Liao, Qianli, Miranda, Brando, Rosasco, Lorenzo, De La Torre, Fernanda, Hidary, Jack, Poggio, Tomaso
The key to generalization is controlling the complexity of the network. However, there is no obvious control of complexity -- such as an explicit regularization term -- in the training of deep networks for classification. We will show that a classical …
External link: http://arxiv.org/abs/1903.04991
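For contrast with the abstract's point, explicit complexity control would mean training on a penalized objective such as

$$L(W) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_W(x_i),\, y_i\big) \;+\; \lambda\,\|W\|^{2},$$

whereas standard deep classification minimizes only the first term; the truncated sentence above points to a classical mechanism supplying such control implicitly.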
Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors? Better understanding of this question of generalization may improve practical applications of deep networks. …
External link: http://arxiv.org/abs/1807.09659
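In standard notation, the question above concerns the generalization gap: with empirical risk $\hat{L}(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i)$ and expected risk $L(f) = \mathbb{E}_{(x,y)}[\ell(f(x), y)]$, two networks with equal $\hat{L}$ may differ widely in

$$\mathrm{gap}(f) \;=\; L(f) - \hat{L}(f).$$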
Authors: Poggio, Tomaso, Liao, Qianli, Miranda, Brando, Banburski, Andrzej, Boix, Xavier, Hidary, Jack
A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of "overfitting", defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. …
External link: http://arxiv.org/abs/1806.11379