Showing 1 - 10 of 1,882 for search: '"P, Portes"'
The flat spectrum radio quasar PKS 1510-089 is one of the most active blazars in $\gamma$-rays, exhibiting phases of very high activity. This study investigates its variability over a decade across a wide range of wavelengths, from radio to $\gamma$-rays…
External link:
http://arxiv.org/abs/2410.17349
The Ground-based Observational Support of the Fermi Gamma-ray Space Telescope is conducted by the University of Arizona using the 2.3m Bok and 1.54m Kuiper telescopes operated by the Steward Observatory (SO). This program monitors blazar sources with…
External link:
http://arxiv.org/abs/2406.05321
Author:
Biderman, Dan, Portes, Jacob, Ortiz, Jose Javier Gonzalez, Paul, Mansheej, Greengard, Philip, Jennings, Connor, King, Daniel, Havens, Sam, Chiley, Vitaliy, Frankle, Jonathan, Blakeney, Cody, Cunningham, John P.
Low-Rank Adaptation (LoRA) is a widely used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low-rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and full finetuning…
External link:
http://arxiv.org/abs/2405.09673
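As context for this entry, a minimal PyTorch sketch of the low-rank perturbation idea the abstract describes; the class name LoRALinear and the rank and scaling defaults below are illustrative assumptions, not the paper's code:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update B @ A."""
        def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = nn.Linear(d_in, d_out, bias=False)
            self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
            self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # low-rank factor A
            self.B = nn.Parameter(torch.zeros(d_out, r))  # B zero-init: update starts at 0
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # base output plus the scaled low-rank perturbation (B @ A) x
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

Only A and B receive gradients, so the trainable parameter count drops from d_in * d_out to r * (d_in + d_out), which is where the memory saving comes from.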
Author:
Dainese, P., Marra, L., Cassara, D., Portes, A., Oh, J., Yang, J., Palmieri, A., Rodrigues, J. R., Dorrah, A. H., Capasso, F.
Complex non-local behavior makes designing high-efficiency and multifunctional metasurfaces a significant challenge. While using libraries of meta-atoms provides a simple and fast implementation methodology, pillar-to-pillar interaction often imposes…
External link:
http://arxiv.org/abs/2405.03930
Large language model (LLM) scaling laws are empirical formulas that estimate changes in model quality as a result of increasing parameter count and training data. However, these formulas, including the popular DeepMind Chinchilla scaling laws, neglect…
External link:
http://arxiv.org/abs/2401.00448
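For reference, the Chinchilla laws mentioned here model validation loss with a parametric form fitted by Hoffmann et al. (2022); the snippet itself does not reproduce it, so this is added context rather than this paper's own formula:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where N is the parameter count, D the number of training tokens, and E, A, B, $\alpha$, $\beta$ are empirically fitted constants derived from training runs.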
Author:
Portes, Jacob, Trott, Alex, Havens, Sam, King, Daniel, Venigalla, Abhinav, Nadeem, Moin, Sardana, Nikhil, Khudia, Daya, Frankle, Jonathan
Published in:
NeurIPS 2023
Although BERT-style encoder models are heavily used in NLP research, many researchers do not pretrain their own BERTs from scratch due to the high cost of training. In the past half-decade since BERT first rose to prominence, many advances have been made…
External link:
http://arxiv.org/abs/2312.17482
Large Language Models are traditionally finetuned on large instruction datasets. However, recent studies suggest that small, high-quality datasets can suffice for general-purpose instruction following. This lack of consensus surrounding finetuning best practices…
External link:
http://arxiv.org/abs/2311.13133
Published in:
Stat Comput 34, 9 (2024)
High dimension, low sample size (HDLSS) problems are numerous among real-world applications of machine learning. From medical images to text processing, traditional machine learning algorithms are usually unsuccessful in learning the best possible…
External link:
http://arxiv.org/abs/2310.14710
Author:
Paulo Dainese, Louis Marra, Davide Cassara, Ary Portes, Jaewon Oh, Jun Yang, Alfonso Palmieri, Janderson Rocha Rodrigues, Ahmed H. Dorrah, Federico Capasso
Published in:
Light: Science & Applications, Vol 13, Iss 1, Pp 1-10 (2024)
Abstract: Complex non-local behavior makes designing high-efficiency and multifunctional metasurfaces a significant challenge. While using libraries of meta-atoms provides a simple and fast implementation methodology, pillar-to-pillar interaction often imposes…
External link:
https://doaj.org/article/c4ac1912f61f4b62ba72d6adf475115c
Published in:
JSES International, Vol 8, Iss 6, Pp 1342- (2024)
External link:
https://doaj.org/article/45a13dd6127b4aac828663753a396461