Showing 1 - 10 of 13,712 for search: '"HUMAYUN, A."'
Author:
Ahmed, Humayun, Biancofiore, Luca
Lubricant viscoelasticity arises due to a finite polymer relaxation time ($\lambda$) and can provide beneficial effects. In applications such as bearings, gears, and biological joints, where the height-to-length ratio ($H_0 / \ell_x$) is small and …
External link:
http://arxiv.org/abs/2410.16880
Author:
Lu, Jun, Xin, Yan, Toplosky, Vince, Levitan, Jeremy, Han, Ke, Wadhams, Jane, Humayun, Munir, Abraimov, Dmytro, Bai, Hongyu
Published in:
Cryogenics 141 (2024) 103901
Residual resistance ratio (RRR) of the Cu stabilizer in REBCO coated conductors is an important design parameter for REBCO magnets. In this work, we measured the RRR of the electroplated Cu stabilizer in commercial REBCO tapes. Over 130 samples were measured for …
External link:
http://arxiv.org/abs/2410.09919
Author:
Ortiz, Joseph, Dedieu, Antoine, Lehrach, Wolfgang, Guntupalli, Swaroop, Wendelken, Carter, Humayun, Ahmad, Zhou, Guangyao, Swaminathan, Sivaramakrishnan, Lázaro-Gredilla, Miguel, Murphy, Kevin
Learning from previously collected data via behavioral cloning or offline reinforcement learning (RL) is a powerful recipe for scaling generalist agents by avoiding the need for expensive online learning. Despite strong generalization in some respects, …
External link:
http://arxiv.org/abs/2409.18330
Author:
Vyas, Kushal, Humayun, Ahmed Imtiaz, Dashpute, Aniket, Baraniuk, Richard G., Veeraraghavan, Ashok, Balakrishnan, Guha
Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering. An INR is typically trained to capture one signal of interest, resulting in learned neural features that a…
External link:
http://arxiv.org/abs/2409.09566
In signals of opportunity (SOPs)-based positioning utilizing low Earth orbit (LEO) satellites, ephemeris data derived from two-line element files can introduce increasing error over time. To handle the erroneous measurement, an additional base receiv…
External link:
http://arxiv.org/abs/2409.05026
Author:
Alemohammad, Sina, Humayun, Ahmed Imtiaz, Agarwal, Shruti, Collomosse, John, Baraniuk, Richard
The artificial intelligence (AI) world is running out of real data for training increasingly large generative models, resulting in accelerating pressure to train on synthetic data. Unfortunately, training new generative models with synthetic data fro…
External link:
http://arxiv.org/abs/2408.16333
Author:
Humayun, Ahmed Imtiaz, Amara, Ibtihel, Schumann, Candice, Farnadi, Golnoosh, Rostamzadeh, Negar, Havaei, Mohammad
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the learned manifold representation is …
External link:
http://arxiv.org/abs/2408.08307
In this paper, we overview one promising avenue of progress at the mathematical foundation of deep learning: the connection between deep networks and function approximation by affine splines (continuous piecewise linear functions in multiple dimensions) …
External link:
http://arxiv.org/abs/2408.04809
We develop Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO utilizes continuous optimization within the latent space …
External link:
http://arxiv.org/abs/2406.09657
Grokking, or delayed generalization, is a phenomenon where generalization in a deep neural network (DNN) occurs long after achieving near-zero training error. Previous studies have reported the occurrence of grokking in specific controlled settings, …
External link:
http://arxiv.org/abs/2402.15555