Showing 1 - 10
of 2,688
for the search: '"Bakos, Á."'
Author:
Pinto, Marcelo Tala, Jordán, Andrés, Acuña, Lorena, Jones, Matías, Brahm, Rafael, Reinarz, Yared, Eberhardt, Jan, Espinoza, Néstor, Henning, Thomas, Hobson, Melissa, Rojas, Felipe, Schlecker, Martin, Trifonov, Trifon, Bakos, Gaspar, Boyle, Gavin, Csubry, Zoltan, Hartmann, Joel, Knepper, Benjamin, Kreidberg, Laura, Suc, Vincent, Teske, Johanna, Butler, R. Paul, Crane, Jeffrey, Schectman, Steve, Thompson, Ian, Osip, Dave, Ricker, George, Collins, Karen A., Watkins, Cristilyn N., Bieryla, Allyson, Stockdale, Chris, Wang, Gavin, Zambelli, Roberto, Seager, Sara, Winn, Joshua, Rose, Mark E., Rice, Malena, Essack, Zahra
We report the discovery and characterization of three new transiting giant planets orbiting TOI-6628, TOI-3837 and TOI-5027, and one new warm sub-Saturn orbiting TOI-2328, whose transit events were detected in the light curves of the Transiting Exopl…
External link:
http://arxiv.org/abs/2412.02069
Transformer neural networks (TNNs) excel in natural language processing (NLP), machine translation, and computer vision (CV) without relying on recurrent or convolutional layers. However, they have high computational and memory demands, particularly o…
External link:
http://arxiv.org/abs/2411.18148
Author:
Slavkova, Kalina P., Traughber, Melanie, Chen, Oliver, Bakos, Robert, Goldstein, Shayna, Harms, Dan, Erickson, Bradley J., Siddiqui, Khan M.
Technological advances in artificial intelligence (AI) have enabled the development of large vision language models (LVLMs) that are trained on millions of paired image and text samples. Subsequent research efforts have demonstrated great potential o…
External link:
http://arxiv.org/abs/2411.17891
Author:
Bakos, Evelin, Boterenbrood, Henk, Dönszelmann, Mark, Egli, Florian, Franco, Luca, Gottardo, Carlo A., Habraken, René, König, Adriaan, Pellegrino, Antonio, Valderanis, Chrysostomos, Vermeulen, Jos, Wijnen, Thei, Wu, Mengqing
The ATLAS Muon Drift Tube (MDT) ReadOut Drivers (MROD), 204 VME modules that are an essential part of the readout chain of the 1,150 MDT chambers, have been in operation for more than 15 years and are expected to remain in operation until about 2026.
External link:
http://arxiv.org/abs/2411.07709
This study investigates a counterintuitive phenomenon in adversarial machine learning: the potential for noise-based defenses to inadvertently aid evasion attacks in certain scenarios. While randomness is often employed as a defensive strategy agains…
External link:
http://arxiv.org/abs/2410.23870
Given the growing focus on memristive crossbar-based in-memory computing (IMC) architectures as a potential alternative to current energy-hungry machine learning hardware, the availability of a fast and accurate circuit-level simulation framework cou…
External link:
http://arxiv.org/abs/2410.19993
In medical image segmentation tasks, the scarcity of labeled training data poses a significant challenge when training deep neural networks. When using U-Net-style architectures, it is common practice to address this problem by pretraining the encode…
External link:
http://arxiv.org/abs/2410.18677
Author:
Kabir, MD Arafat, Kamucheka, Tendayi, Fredricks, Nathaniel, Mandebi, Joel, Bakos, Jason, Huang, Miaoqing, Andrews, David
Many recent FPGA-based Processor-in-Memory (PIM) architectures have appeared with promises of impressive levels of parallelism but with performance that falls short of expectations due to reduced maximum clock frequencies, an inability to scale proce…
External link:
http://arxiv.org/abs/2410.07546
Author:
Kabir, MD Arafat, Kamucheka, Tendayi, Fredricks, Nathaniel, Mandebi, Joel, Bakos, Jason, Huang, Miaoqing, Andrews, David
Processor-in-Memory (PIM) overlays and new redesigned reconfigurable tile fabrics have been proposed to eliminate the von Neumann bottleneck and enable processing performance to scale with BRAM capacity. The performance of these FPGA-based PIM archit…
External link:
http://arxiv.org/abs/2410.04367
Author:
Kabir, Ehsan, Kabir, Md. Arafat, Downey, Austin R. J., Bakos, Jason D., Andrews, David, Huang, Miaoqing
Transformer neural networks (TNNs) are being applied across a widening range of application domains, including natural language processing (NLP), machine translation, and computer vision (CV). Their popularity is largely attributed to the exceptional…
External link:
http://arxiv.org/abs/2409.14023