Showing 1 - 10 of 9,258 for search: '"A. Feizi"'
Author:
C. Albert, L. Bracaglia, A. Koide, J. DiRito, T. Lysyy, L. Harkins, C. Edwards, O. Richfield, J. Grundler, K. Zhou, E. Denbaum, G. Ketavarapu, T. Hattori, S. Perincheri, J. Langford, A. Feizi, D. Haakinson, S. A. Hosgood, M. L. Nicholson, J. S. Pober, W. M. Saltzman, S. Koide, G. T. Tietjen
Published in:
Nature Communications, Vol 13, Iss 1, Pp 1-13 (2022)
Targeted nanoparticle delivery to sites of interest is important for targeted therapeutics. Here, the authors improve the targeting efficiency of antibodies on nanoparticles using a monobody adapter to correctly orientate the antibody to preserve target…
External link:
https://doaj.org/article/ee341fcf16ed404ea4f2fab3565e0d20
Published in:
Journal of Applied Fluid Mechanics, Vol 15, Iss 1, Pp 51-62 (2022)
The outrigger symmetry of a trimaran is believed to significantly affect its hydrodynamic functioning. The present study was conducted to investigate the added resistance responses and experimental vertical motion of a wave-piercing trimaran in regular…
External link:
https://doaj.org/article/53cdb7942313417db53f892435a77619
Author:
Rezaei, Keivan, Chandu, Khyathi, Feizi, Soheil, Choi, Yejin, Brahman, Faeze, Ravichander, Abhilasha
Large language models trained on web-scale corpora can memorize undesirable datapoints such as incorrect facts, copyrighted content or sensitive data. Recently, many machine unlearning methods have been proposed that aim to 'erase' these datapoints…
External link:
http://arxiv.org/abs/2411.00204
Recent advances in parameter-efficient fine-tuning methods, such as Low Rank Adaptation (LoRA), have gained significant attention for their ability to efficiently adapt large foundational models to various downstream tasks. These methods are appreciated…
External link:
http://arxiv.org/abs/2410.17358
Author:
Moayeri, Mazda, Balachandran, Vidhisha, Chandrasekaran, Varun, Yousefi, Safoora, Fel, Thomas, Feizi, Soheil, Nushi, Besmira, Joshi, Neel, Vineet, Vibhav
With models getting stronger, evaluations have grown more complex, testing multiple skills in one benchmark and even in the same instance at once. However, skill-wise performance is obscured when inspecting aggregate accuracy, under-utilizing the rich…
External link:
http://arxiv.org/abs/2410.13826
Published in:
Sharif Journal of Civil Engineering (مهندسی عمران شریف), Vol 35.2, Iss 4.2, Pp 87-96 (2020)
Assessing the pollution risk potential of water resources and its zoning can produce…
External link:
https://doaj.org/article/b6b477ea234b465d9bce4bfa102b0e88
Image-text contrastive models such as CLIP learn transferable and robust representations for zero-shot transfer to a variety of downstream tasks. However, to obtain strong downstream performance, prompts need to be carefully curated, which can be a…
External link:
http://arxiv.org/abs/2406.13683
The increasing size of large language models (LLMs) challenges their usage on resource-constrained platforms. For example, memory on modern GPUs is insufficient to hold LLMs that are hundreds of gigabytes in size. Offloading is a popular method to…
External link:
http://arxiv.org/abs/2406.11674
Author:
Zarei, Arman, Rezaei, Keivan, Basu, Samyadeep, Saberi, Mehrdad, Moayeri, Mazda, Kattakinda, Priyatham, Feizi, Soheil
Recent text-to-image diffusion-based generative models have the stunning ability to generate highly detailed and photo-realistic images and achieve state-of-the-art low FID scores on challenging image generation benchmarks. However, one of the primary…
External link:
http://arxiv.org/abs/2406.07844
Author:
Basu, Samyadeep, Grayson, Martin, Morrison, Cecily, Nushi, Besmira, Feizi, Soheil, Massiceti, Daniela
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for Large Language Models (LLMs), revealing insights on how…
External link:
http://arxiv.org/abs/2406.04236