Showing 1 - 10 of 85 for search: '"Humayun Ahmed"'
Author:
Vyas, Kushal, Humayun, Ahmed Imtiaz, Dashpute, Aniket, Baraniuk, Richard G., Veeraraghavan, Ashok, Balakrishnan, Guha
Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering. An INR is typically trained to capture one signal of interest, resulting in learned neural features that a…
External link:
http://arxiv.org/abs/2409.09566
Author:
Alemohammad, Sina, Humayun, Ahmed Imtiaz, Agarwal, Shruti, Collomosse, John, Baraniuk, Richard
The artificial intelligence (AI) world is running out of real data for training increasingly large generative models, resulting in accelerating pressure to train on synthetic data. Unfortunately, training new generative models with synthetic data from…
External link:
http://arxiv.org/abs/2408.16333
Author:
Humayun, Ahmed Imtiaz, Amara, Ibtihel, Schumann, Candice, Farnadi, Golnoosh, Rostamzadeh, Negar, Havaei, Mohammad
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the learned manifold representation is…
External link:
http://arxiv.org/abs/2408.08307
In this paper, we overview one promising avenue of progress at the mathematical foundation of deep learning: the connection between deep networks and function approximation by affine splines (continuous piecewise linear functions in multiple dimensions)…
External link:
http://arxiv.org/abs/2408.04809
We develop Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO utilizes continuous optimization within the latent space…
External link:
http://arxiv.org/abs/2406.09657
Grokking, or delayed generalization, is a phenomenon where generalization in a deep neural network (DNN) occurs long after achieving near zero training error. Previous studies have reported the occurrence of grokking in specific controlled settings,…
External link:
http://arxiv.org/abs/2402.15555
The study of Deep Network (DN) training dynamics has largely focused on the evolution of the loss function, evaluated on or around train and test set data points. In fact, many DN phenomena were first introduced in the literature with that respect, e.g.…
External link:
http://arxiv.org/abs/2310.12977
Published in:
Plastic and Reconstructive Surgery, Global Open, Vol 5, Iss 11, p e1549 (2017)
Summary: Syndactyly and polydactyly, respectively characterized by fused and supernumerary digits, are among the most common congenital limb malformations, with syndactyly presenting at an estimated incidence of 1 in 2,000–3,000 live births and…
External link:
https://doaj.org/article/ac77e89165b74c869a28a86b4385db32
Author:
Alemohammad, Sina, Casco-Rodriguez, Josue, Luzi, Lorenzo, Humayun, Ahmed Imtiaz, Babaei, Hossein, LeJeune, Daniel, Siahkoohi, Ali, Baraniuk, Richard G.
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous (self-consuming) loop whose properties…
External link:
http://arxiv.org/abs/2307.01850
Author:
Rakib, Fazle Rabbi, Dip, Souhardya Saha, Alam, Samiul, Tasnim, Nazia, Shihab, Md. Istiak Hossain, Ansary, Md. Nazmuddoha, Hossen, Syed Mobassir, Meghla, Marsia Haque, Mamun, Mamunur, Sadeque, Farig, Chowdhury, Sayma Sultana, Reasat, Tahsin, Sushmit, Asif, Humayun, Ahmed Imtiaz
We present OOD-Speech, the first out-of-distribution (OOD) benchmarking dataset for Bengali automatic speech recognition (ASR). Being one of the most spoken languages globally, Bengali exhibits large diversity in dialects and prosodic features, which…
External link:
http://arxiv.org/abs/2305.09688