Showing 1 - 10 of 12,342 results for the search: '"Sreenivas A"'
Published in:
Advanced Science, Vol 11, Iss 1, Pp n/a-n/a (2024)
Abstract Two‐coordinate coinage metal complexes have emerged as promising emitters for highly efficient organic light‐emitting devices (OLEDs). However, achieving efficient long‐wavelength electroluminescence emission from these complexes remains…
External link:
https://doaj.org/article/3adef06bd2d54acab008a820903c548a
Author:
Sreenivas Avula, Xudan Peng, Xingfen Lang, Micky Tortorella, Béatrice Josselin, Stéphane Bach, Stephane Bourg, Pascal Bonnet, Frédéric Buron, Sandrine Ruchaud, Sylvain Routier, Cleopatra Neagoie
Published in:
Journal of Enzyme Inhibition and Medicinal Chemistry, Vol 37, Iss 1, Pp 1632-1650 (2022)
A library of substituted indolo[2,3-c]quinolone-6-ones was developed as simplified Lamellarin isosters. Synthesis was achieved from indole via a four-step sequence involving iodination, a Suzuki-Miyaura cross-coupling reaction, and a reduct…
External link:
https://doaj.org/article/eae579ca70af4ec7902f09b889e08fbc
Published in:
ETRI Journal, Vol 43, Iss 6, Pp 1113-1129 (2021)
Abstract Lightweight ciphers are increasingly employed in cryptography because of the high demand for secure data transmission in wireless sensor networks, embedded devices, and the Internet of Things. The PRESENT algorithm, as an ultra‐lightweight block cipher… (a sketch of the PRESENT round structure follows this record)
External link:
https://doaj.org/article/4f1f6a571f814e80a6d28917d7162ec0
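Since the record above concerns PRESENT, a 64-bit substitution-permutation block cipher with a 4-bit S-box, a bit permutation, and 31 rounds, a minimal Python sketch of one round may help orient readers. This is only a sketch: the 80/128-bit key schedule and the full round loop are omitted, and `round_key` is assumed to be supplied by the caller.

```python
# Minimal sketch of one PRESENT round (addRoundKey, sBoxLayer, pLayer).
# Key schedule and the 31-round loop are intentionally left out.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state: int) -> int:
    """Apply the 4-bit S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        nibble = (state >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def p_layer(state: int) -> int:
    """Bit permutation: bit i moves to position 16*i mod 63 (bit 63 is fixed)."""
    out = 0
    for i in range(64):
        bit = (state >> i) & 1
        dest = 63 if i == 63 else (16 * i) % 63
        out |= bit << dest
    return out

def present_round(state: int, round_key: int) -> int:
    """One PRESENT encryption round on a 64-bit integer state."""
    state ^= round_key           # addRoundKey
    state = sbox_layer(state)    # sBoxLayer
    return p_layer(state)        # pLayer
```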
Performative learning addresses the increasingly pervasive situations in which algorithmic decisions may induce changes in the data distribution as a consequence of their public deployment. We propose a novel view in which these performative effects… (a toy sketch of the performative setting follows this record)
External link:
http://arxiv.org/abs/2411.02023
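The record above describes deployments whose decisions shift the data distribution. The toy Python sketch below illustrates only that setting, not the paper's proposed view: a hypothetical distribution `sample_data(theta)` depends on the currently deployed parameter, and the model is repeatedly refit on data generated under its own deployment until the parameter stabilises.

```python
# Toy illustration of performative effects: the data-generating process depends
# on the deployed parameter theta, and repeated refitting converges to a
# performatively stable point. All names and constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_data(theta: float, n: int = 2000, strength: float = 0.5):
    """Hypothetical performative distribution: the effective slope in the data
    shifts with the currently deployed parameter theta."""
    x = rng.normal(size=n)
    y = (2.0 + strength * theta) * x + rng.normal(scale=0.1, size=n)
    return x, y

def fit_slope(x, y) -> float:
    """Ordinary least-squares slope for the one-dimensional toy model."""
    return float(x @ y / (x @ x))

theta = 0.0
for t in range(10):
    x, y = sample_data(theta)   # data generated under the deployed model
    theta = fit_slope(x, y)     # repeated risk minimisation step
    print(f"iteration {t}: theta = {theta:.3f}")
# theta approaches the performatively stable point 2 / (1 - strength) = 4
```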
Author:
Sreenivas, Sharath Turuvekere, Muralidharan, Saurav, Joshi, Raviraj, Chochowski, Marcin, Patwary, Mostofa, Shoeybi, Mohammad, Catanzaro, Bryan, Kautz, Jan, Molchanov, Pavlo
We present a comprehensive report on compressing the Llama 3.1 8B and Mistral NeMo 12B models to 4B and 8B parameters, respectively, using pruning and distillation. We explore two distinct pruning strategies: (1) depth pruning and (2) joint hidden/attention… (a generic depth-pruning sketch follows this record)
External link:
http://arxiv.org/abs/2408.11796
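The compression report above names depth pruning as one of its two strategies. The sketch below is a generic illustration of depth pruning, not the paper's recipe: whole layers are scored by a placeholder importance measure (how much each layer changes its input) and the least important ones are dropped. It assumes each layer maps a hidden-state tensor to another of the same shape.

```python
# Generic depth-pruning sketch: rank transformer layers by a placeholder
# importance score and keep only the top-k, preserving their original order.
import torch
import torch.nn as nn

@torch.no_grad()
def layer_importance(layers: nn.ModuleList, hidden: torch.Tensor) -> list[float]:
    """Score each layer by how much it changes its input (1 - cosine similarity);
    layers that barely change the representation are candidates for removal."""
    scores = []
    for layer in layers:
        out = layer(hidden)
        cos = nn.functional.cosine_similarity(
            out.flatten(1), hidden.flatten(1), dim=-1).mean()
        scores.append(1.0 - cos.item())
        hidden = out
    return scores

def depth_prune(layers: nn.ModuleList, hidden: torch.Tensor, keep: int) -> nn.ModuleList:
    """Keep the `keep` highest-scoring layers in their original order."""
    scores = layer_importance(layers, hidden)
    keep_idx = sorted(sorted(range(len(layers)), key=lambda i: -scores[i])[:keep])
    return nn.ModuleList(layers[i] for i in keep_idx)
```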
Author:
Muralidharan, Saurav, Sreenivas, Sharath Turuvekere, Joshi, Raviraj, Chochowski, Marcin, Patwary, Mostofa, Shoeybi, Mohammad, Catanzaro, Bryan, Kautz, Jan, Molchanov, Pavlo
Large language models (LLMs) targeting different deployment scales and sizes are currently produced by training each variant from scratch; this is extremely compute-intensive. In this paper, we investigate if pruning an existing LLM and then re-training… (a generic distillation-loss sketch follows this record)
External link:
http://arxiv.org/abs/2407.14679
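The pruning-and-retraining paper above pairs pruning with a recovery phase. The snippet below is a standard knowledge-distillation loss (temperature-softened KL divergence between teacher and student logits), shown only as a generic illustration of that kind of retraining signal; the temperature and any weighting against the usual language-modelling loss are illustrative defaults, not the paper's settings.

```python
# Generic knowledge-distillation loss: the pruned student is trained to match
# the softened output distribution of the original teacher model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student token distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, rescaled by T^2 as is conventional for distillation
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```

In practice this term is typically combined with the ordinary next-token cross-entropy loss on the retraining data.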
Author:
Li, Yaguang, Bedding, Timothy R., Huber, Daniel, Stello, Dennis, van Saders, Jennifer, Zhou, Yixiao, Crawford, Courtney L., Joyce, Meridith, Li, Tanda, Murphy, Simon J., Sreenivas, K. R.
Asteroseismic modelling is a powerful way to derive stellar properties. However, the derived quantities are limited by built-in assumptions used in stellar models. This work presents a detailed characterisation of stellar model uncertainties in aster…
External link:
http://arxiv.org/abs/2407.09967
Non-volatile Memory (NVM) could bridge the gap between memory and storage. However, NVMs are susceptible to data remanence attacks. Thus, multiple pieces of security metadata must persist along with the data to protect the confidentiality and integrity of NVM-… (a conceptual metadata-integrity sketch follows this record)
External link:
http://arxiv.org/abs/2407.09180
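The NVM record above notes that security metadata must persist alongside the data. As a purely conceptual, software-level sketch (real NVM designs implement this in hardware, for example with counter-mode encryption and integrity trees), the snippet below binds each persisted block to its address and a write counter via an HMAC, so stale or tampered data can be rejected on recovery. The key, block layout, and helper names are hypothetical.

```python
# Conceptual sketch: persist (data, counter, MAC) together so that integrity
# and freshness can be checked after a restart. Illustration only.
import hmac, hashlib

KEY = b"demo-key"   # hypothetical device key, for illustration only

def seal(addr: int, counter: int, data: bytes) -> bytes:
    """MAC binding the data to its address and a monotonically increasing counter."""
    msg = addr.to_bytes(8, "little") + counter.to_bytes(8, "little") + data
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(addr: int, counter: int, data: bytes, tag: bytes) -> bool:
    """Recompute the MAC and compare in constant time; a mismatch indicates
    tampering, rollback to a stale counter, or metadata lost in a crash."""
    return hmac.compare_digest(seal(addr, counter, data), tag)

# On a write, persist data together with its counter and tag; on recovery,
# verify before trusting the block.
tag = seal(0x1000, 7, b"persistent record")
assert verify(0x1000, 7, b"persistent record", tag)
```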
Author:
Bera, Rahul, Ranganathan, Adithya, Rakshit, Joydeep, Mahto, Sujit, Nori, Anant V., Gaur, Jayesh, Olgun, Ataberk, Kanellopoulos, Konstantinos, Sadrosadati, Mohammad, Subramoney, Sreenivas, Mutlu, Onur
Load instructions often limit instruction-level parallelism (ILP) in modern processors due to the data and resource dependences they cause. Prior techniques like Load Value Prediction (LVP) and Memory Renaming (MRN) mitigate load data dependence by predicting… (a minimal last-value predictor sketch follows this record)
External link:
http://arxiv.org/abs/2406.18786
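The abstract above cites Load Value Prediction (LVP). The sketch below shows the classic last-value form of that idea: a table indexed by the load's PC remembers the last value the load produced plus a small confidence counter, and a prediction is used only when confidence is high. The table size and confidence threshold are illustrative choices, not values from the paper.

```python
# Minimal last-value load predictor with saturating confidence counters.
class LastValuePredictor:
    def __init__(self, entries: int = 1024, threshold: int = 3):
        self.entries = entries
        self.threshold = threshold
        self.table = {}          # table index -> (last_value, confidence)

    def predict(self, pc: int):
        """Return a predicted load value, or None if confidence is too low."""
        value, conf = self.table.get(pc % self.entries, (None, 0))
        return value if conf >= self.threshold else None

    def update(self, pc: int, actual: int) -> None:
        """Train on the committed load value; repeated values gain confidence."""
        idx = pc % self.entries
        value, conf = self.table.get(idx, (None, 0))
        if value == actual:
            self.table[idx] = (actual, min(conf + 1, self.threshold))
        else:
            self.table[idx] = (actual, 0)   # value changed: reset confidence
```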
Author:
Kazemi, Mehran, Dikkala, Nishanth, Anand, Ankit, Devic, Petar, Dasgupta, Ishita, Liu, Fangyu, Fatemi, Bahare, Awasthi, Pranjal, Guo, Dee, Gollapudi, Sreenivas, Qureshi, Ahmed
With the continuous advancement of large language models (LLMs), it is essential to create new benchmarks to effectively evaluate their expanding capabilities and identify areas for improvement. This work focuses on multi-image reasoning, an emerging…
External link:
http://arxiv.org/abs/2406.09175