Showing 1 - 10 of 706 for search: '"Alemohammad, A."'
Author:
Alemohammad, Sina, Humayun, Ahmed Imtiaz, Agarwal, Shruti, Collomosse, John, Baraniuk, Richard
The artificial intelligence (AI) world is running out of real data for training increasingly large generative models, resulting in accelerating pressure to train on synthetic data. Unfortunately, training new generative models with synthetic data…
External link:
http://arxiv.org/abs/2408.16333
The development of systems to measure and optimize emerging energetic material performance is critical for CWA defeat. This study documents a combination of two spectroscopic systems designed to monitor decomposition of a CWA simulant and temperature…
External link:
http://arxiv.org/abs/2408.11066
Author:
Khallaghi, Sam, Abedi, Rahebe, Ali, Hanan Abou, Alemohammad, Hamed, Asipunu, Mary Dziedzorm, Alatise, Ismail, Ha, Nguyen, Luo, Boka, Mai, Cat, Song, Lei, Wussah, Amos, Xiong, Sitian, Yao, Yao-Ting, Zhang, Qi, Estes, Lyndon D.
The accuracy of mapping agricultural fields across large areas is steadily improving with high-resolution satellite imagery and deep learning (DL) models, even in regions where fields are small and geometrically irregular. However, developing effective…
External link:
http://arxiv.org/abs/2408.06467
Filling cloudy pixels in multispectral satellite imagery is essential for accurate data analysis and downstream applications, especially for tasks which require time series data. To address this issue, we compare the performance of a foundational…
External link:
http://arxiv.org/abs/2404.19609
Author:
Kilic, Velat, Macfarlane, Neil, Stround, Jasper, Metais, Samuel, Alemohammad, Milad, Cooper, A. Brinton, Foster, Amy C., Foster, Mark A.
We investigate usage of nonlinear wave chaotic amorphous silicon (a-Si) cavities as physically unclonable functions (PUF). Machine learning attacks on integrated electronic PUFs have been demonstrated to be very effective at modeling PUF behavior…
External link:
http://arxiv.org/abs/2402.02846
Author:
Jakubik, Johannes, Roy, Sujit, Phillips, C. E., Fraccaro, Paolo, Godwin, Denys, Zadrozny, Bianca, Szwarcman, Daniela, Gomes, Carlos, Nyirjesy, Gabby, Edwards, Blair, Kimura, Daiki, Simumba, Naomi, Chu, Linsong, Mukkavilli, S. Karthik, Lambhate, Devyani, Das, Kamal, Bangalore, Ranjini, Oliveira, Dario, Muszynski, Michal, Ankur, Kumar, Ramasubramanian, Muthukumaran, Gurung, Iksha, Khallaghi, Sam, Hanxi, Li, Cecil, Michael, Ahmadi, Maryam, Kordi, Fatemeh, Alemohammad, Hamed, Maskey, Manil, Ganti, Raghu, Weldemariam, Kommy, Ramachandran, Rahul
Significant progress in the development of highly adaptable and reusable Artificial Intelligence (AI) models is expected to have a major impact on Earth science and remote sensing. Foundation models are pre-trained on large unlabeled datasets…
External link:
http://arxiv.org/abs/2310.18660
Author:
Li, Liuchi, Kilic, Velat, Alemohammad, Milad, Ramesh, K. T., Foster, Mark A., Hufnagel, Todd C.
The stress intensity factor is important for understanding crack initiation and propagation. Because it cannot be measured directly, the characterization of the stress intensity factor relies on the measurement of deformation around a crack tip…
External link:
http://arxiv.org/abs/2310.00862
Author:
LeJeune, Daniel, Alemohammad, Sina
In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training. We consider linear transformations…
External link:
http://arxiv.org/abs/2308.15478
Author:
Alemohammad, Sina, Casco-Rodriguez, Josue, Luzi, Lorenzo, Humayun, Ahmed Imtiaz, Babaei, Hossein, LeJeune, Daniel, Siahkoohi, Ali, Baraniuk, Richard G.
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous (self-consuming) loop whose properties…
External link:
http://arxiv.org/abs/2307.01850
Author:
Lacoste, Alexandre, Lehmann, Nils, Rodriguez, Pau, Sherwin, Evan David, Kerner, Hannah, Lütjens, Björn, Irvin, Jeremy Andrew, Dao, David, Alemohammad, Hamed, Drouin, Alexandre, Gunturkun, Mehmet, Huang, Gabriel, Vazquez, David, Newman, Dava, Bengio, Yoshua, Ermon, Stefano, Zhu, Xiao Xiang
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined foundation models, have…
External link:
http://arxiv.org/abs/2306.03831