Showing 1 - 10 of 279 for search: '"KRZYZAK, ADAM"'
Image classification based on over-parametrized convolutional neural networks with a global average-pooling layer is considered. The weights of the network are learned by gradient descent. A bound on the rate of convergence of the difference between…
External link:
http://arxiv.org/abs/2405.07619
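The entry above studies convolutional networks whose final feature maps are collapsed by a global average-pooling layer, with every weight fit by gradient descent. A minimal sketch of that kind of architecture follows; the layer sizes, learning rate, and synthetic data are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

# Convolutional network ending in a global average-pooling layer,
# in the spirit of the over-parametrized classifiers studied above.
# All sizes are illustrative; the paper's construction differs in detail.
model = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # global average pooling: one value per channel
    nn.Flatten(),
    nn.Linear(64, 10),         # linear readout to 10 classes
)

# Synthetic stand-in data; in the paper the sample is i.i.d. images.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

# Plain gradient descent on the empirical risk (cross-entropy surrogate).
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```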
Image classification from independent and identically distributed random variables is considered. Image classifiers are defined which are based on a linear combination of deep convolutional networks with max-pooling layer. Here all the weights are learned…
External link:
http://arxiv.org/abs/2404.07128
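The classifier above is a linear combination of several deep convolutional networks with max-pooling, all weights (including the combination coefficients) being learned. A hedged sketch of that structure, with the number of subnetworks, depths, and widths chosen purely for illustration:

```python
import torch
import torch.nn as nn

class MaxPoolCNN(nn.Module):
    """One deep convolutional subnetwork with max-pooling layers."""
    def __init__(self, channels=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x):
        return self.head(self.features(x))

class LinearCombination(nn.Module):
    """Linear combination of K subnetworks; the coefficients a_k are learned too."""
    def __init__(self, k=5):
        super().__init__()
        self.nets = nn.ModuleList(MaxPoolCNN() for _ in range(k))
        self.a = nn.Parameter(torch.ones(k) / k)

    def forward(self, x):
        outs = torch.stack([net(x) for net in self.nets])  # (K, batch, classes)
        return torch.einsum('k,kbc->bc', self.a, outs)

logits = LinearCombination()(torch.randn(8, 1, 28, 28))   # (8, 10)
```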
Author:
Kohler, Michael, Krzyzak, Adam
One of the most recent and fascinating breakthroughs in artificial intelligence is ChatGPT, a chatbot which can simulate human conversation. ChatGPT is an instance of GPT4, which is a language model based on generative predictive transformers. So if…
External link:
http://arxiv.org/abs/2312.17007
Author:
Kohler, Michael, Krzyzak, Adam
Estimation of a regression function from independent and identically distributed random variables is considered. The $L_2$ error with integration with respect to the design measure is used as an error criterion. Over-parametrized deep neural network…
External link:
http://arxiv.org/abs/2210.01443
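The error criterion named in this entry can be written out. For the regression function $m(x) = \mathbf{E}\{Y \mid X = x\}$ and an estimate $m_n$ built from the data, the $L_2$ error with integration with respect to the design measure $\mathbf{P}_X$ is, in standard nonparametric-regression notation (reconstructed here, not quoted from the paper):

```latex
\int \lvert m_n(x) - m(x) \rvert^2 \, \mathbf{P}_X(dx)
```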
Author:
Krzyżak, Adam, Więckowski, Jędrzej, Rafajłowicz, Wojciech, Moczko, Przemysław, Rafajłowicz, Ewaryst
Published in:
Knowledge-Based Systems, 5 September 2024, Vol. 299
Author:
Chen, Guang Yi (guang_c@cse.concordia.ca), Krzyzak, Adam (krzyzak@cse.concordia.ca), Qian, Shen-En (shen-en.qian@asc-csa.gc.ca)
Published in:
Image Analysis & Stereology. 2024, Vol. 43 Issue 2, p195-201. 7p.
Author:
Kohler, Michael, Krzyzak, Adam
A regression problem with dependent data is considered. Regularity assumptions on the dependency of the data are introduced, and it is shown that under suitable structural assumptions on the regression function a deep recurrent neural network estimate…
External link:
http://arxiv.org/abs/2011.00328
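A minimal sketch of a recurrent-network regression estimate of the kind this entry analyzes, fit by gradient descent to a dependent (autoregressive) sequence; both the architecture and the data-generating process are illustrative assumptions, not the paper's regularity conditions:

```python
import torch
import torch.nn as nn

# Dependent data: a simple AR(1)-style sequence (illustrative only).
T = 200
x = torch.zeros(T, 1)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + 0.1 * torch.randn(1)
y = torch.sin(3 * x) + 0.05 * torch.randn(T, 1)   # regression target

class RNNEstimate(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, seq):                  # seq: (batch, time, 1)
        h, _ = self.rnn(seq)
        return self.out(h)                   # prediction at every time step

model = RNNEstimate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    pred = model(x.unsqueeze(0))             # batch of one sequence
    loss = ((pred.squeeze(0) - y) ** 2).mean()   # empirical L2 risk
    loss.backward()
    opt.step()
```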
Recent results in nonparametric regression show that for deep learning, i.e., for neural network estimates with many hidden layers, we are able to achieve good rates of convergence even in case of high-dimensional predictor variables, provided suitable…
External link:
http://arxiv.org/abs/1912.05436
Author:
Kohler, Michael, Krzyzak, Adam
Recently it was shown in several papers that backpropagation is able to find the global minimum of the empirical risk on the training data using over-parametrized deep neural networks. In this paper a similar result is shown for deep neural networks…
External link:
http://arxiv.org/abs/1912.03925
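The result summarized above is that gradient descent can drive the empirical risk on the training data to its global minimum once the network is heavily over-parametrized. A toy illustration of that interpolation phenomenon, with all sizes chosen as assumptions for demonstration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny training set, heavily over-parametrized network:
# n = 20 samples vs. thousands of weights.
n, d = 20, 5
X = torch.randn(n, d)
Y = torch.randn(n, 1)

model = nn.Sequential(
    nn.Linear(d, 1000), nn.ReLU(),
    nn.Linear(1000, 1),
)

opt = torch.optim.SGD(model.parameters(), lr=0.05)
for step in range(2000):
    opt.zero_grad()
    risk = ((model(X) - Y) ** 2).mean()   # empirical L2 risk on training data
    risk.backward()
    opt.step()

# With enough width, gradient descent typically interpolates the data,
# i.e. the empirical risk approaches its global minimum (zero here).
print(f"final empirical risk: {risk.item():.2e}")
```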