Showing 1 - 10 of 17,281 results
for search: '"Jenner A"'
Author:
Jenner Alexander Gamboa Aragundi, Leidy Jessenia Salinas Herrera, Virgilio Eduardo Salcedo-Muñoz, Linda Amarilis Nuñez Guale
Published in:
Telos: Revista de Estudios Interdisciplinarios en Ciencias Sociales, Vol 24, Iss 2, Pp 430-444 (2022)
This article aims to analyze the participation of the Triple Bottom Line (TBL) in the University Social Responsibility (RSU) actions of the Universidad Técnica de Machala. Its contribution is managed through three dimensions…
External link:
https://doaj.org/article/11f28299f4244e0f9a6de9c7bd99ce79
Computational models are invaluable in capturing the complexities of real-world biological processes. Yet, the selection of appropriate algorithms for inference tasks, especially when dealing with real-world observational data, remains a challenging…
External link:
http://arxiv.org/abs/2409.19675
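The abstract above concerns selecting inference algorithms for computational models fitted to observational data. As a minimal, self-contained sketch of one such algorithm family (not the paper's method), the following toy runs Approximate Bayesian Computation (ABC) rejection sampling against an invented stochastic growth simulator; the model, prior, and tolerance are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(rate, n_steps=20, x0=10.0):
    """Toy stochastic exponential-growth simulator (a stand-in for a real model)."""
    x, traj = x0, []
    for _ in range(n_steps):
        x *= np.exp(rng.normal(rate, 0.02))  # multiplicative growth noise
        traj.append(x)
    return np.array(traj)

true_rate = 0.1
observed = simulate(true_rate)  # synthetic "observed" data with a known answer

# ABC rejection: draw parameters from the prior and keep those whose
# simulated trajectories land close to the observed one.
accepted = []
for _ in range(5000):
    candidate = rng.uniform(0.0, 0.5)  # prior over the growth rate
    distance = np.linalg.norm(simulate(candidate) - observed)
    if distance < 0.3 * np.linalg.norm(observed):  # crude relative tolerance
        accepted.append(candidate)

if accepted:
    print(f"posterior mean ~ {np.mean(accepted):.3f} (true rate {true_rate})")
else:
    print("no draws accepted; widen the tolerance")
```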
Author:
Jenner Alonso Tobar Torres
Published in:
Civilizar, Vol 22, Iss 42 (2022)
In international contract law, the theory of imprevisión (hardship) has been recognized as a component of the so-called lex mercatoria, following its inclusion in various normative texts. However, despite its broad doctrinal…
External link:
https://doaj.org/article/9aff57e7a9194a7fb8085b5001cf8c1d
Author:
Villa, Chiara, Maini, Philip K, Browning, Alexander P, Jenner, Adrianne L, Hamis, Sara, Cassidy, Tyler
Intratumour phenotypic heterogeneity is now understood to play a critical role in disease progression and treatment failure. Accordingly, there has been increasing interest in the development of mathematical models capable of capturing its role…
External link:
http://arxiv.org/abs/2406.01505
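As a rough, hypothetical illustration of the modelling style this abstract points to (not the authors' equations), the sketch below evolves a phenotype-structured cell density under replicator-type selection on a discretized phenotype space; the fitness landscape and drug dose are invented for the example.

```python
import numpy as np

# Discretized phenotype space on [0, 1]; n(x) is the density of cells with
# phenotype x. The toy fitness landscape trades off proliferation (favoured
# near x = 0) against drug resistance (favoured near x = 1).
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
n = np.exp(-((x - 0.5) ** 2) / 0.02)  # initial phenotype distribution
n /= n.sum() * dx                     # normalise to a probability density

drug_dose = 0.8                                   # assumed treatment intensity
fitness = (1.0 - x) - drug_dose * (1.0 - x) ** 2  # invented trade-off

dt = 0.01
for _ in range(2000):
    mean_fitness = (fitness * n).sum() * dx
    n += dt * (fitness - mean_fitness) * n  # replicator-type selection
    n = np.clip(n, 0.0, None)
    n /= n.sum() * dx

print(f"mean phenotype after selection: {(x * n).sum() * dx:.3f}")
```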
Author:
Jenner, Erik, Kapur, Shreyas, Georgiev, Vasil, Allen, Cameron, Emmons, Scott, Russell, Stuart
Do neural networks learn to implement algorithms such as look-ahead or search "in the wild"? Or do they rely purely on collections of simple heuristics? We present evidence of learned look-ahead in the policy network of Leela Chess Zero, the currently…
External link:
http://arxiv.org/abs/2406.00877
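For context only: a standard technique for questions like this is the linear probe, trained to decode a hypothesised feature from a network's internal activations. The sketch below uses fabricated "activations" rather than Leela Chess Zero's, so it demonstrates only the probing recipe, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-in for residual-stream activations, with a hidden linear
# "future move" direction planted so the probe has something to find.
n_samples, d_model = 2000, 64
direction = rng.normal(size=d_model)
activations = rng.normal(size=(n_samples, d_model))
labels = (activations @ direction > 0).astype(float)

# Least-squares linear probe (a minimal substitute for logistic regression),
# fitted on a train split and scored on held-out samples.
split = 1500
w, *_ = np.linalg.lstsq(activations[:split], labels[:split] - 0.5, rcond=None)
test_acc = np.mean(((activations[split:] @ w) > 0) == labels[split:])

# Accuracy far above chance means the feature is linearly decodable -- the
# kind of evidence interpretability studies combine with ablations.
print(f"held-out probe accuracy: {test_acc:.2%}")
```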
Large language models generate code one token at a time. Their autoregressive generation process lacks the feedback of observing the program's output. Training LLMs to suggest edits directly can be challenging due to the scarcity of rich edit data…
External link:
http://arxiv.org/abs/2405.20519
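To make the missing-feedback point concrete, here is a hypothetical edit/execute loop: propose_edit is an invented stub standing in for a trained model, and the runtime error it receives is exactly the signal a purely autoregressive decoder never observes.

```python
import traceback

# A deliberately buggy program for the demo.
buggy_program = "def add(a, b):\n    return a - b\n\nassert add(2, 3) == 5\n"

def propose_edit(source: str, error: str) -> str:
    """Hypothetical model call; here, a hard-coded fix for the demo bug."""
    return source.replace("a - b", "a + b")

def run(source: str) -> str | None:
    """Execute the candidate program; return a traceback string on failure."""
    try:
        exec(compile(source, "<candidate>", "exec"), {})
        return None
    except Exception:
        return traceback.format_exc()

program = buggy_program
for attempt in range(3):  # bounded edit/execute loop
    error = run(program)
    if error is None:
        print(f"program passed after {attempt} edit(s)")
        break
    program = propose_edit(program, error)
```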
Author:
Anwar, Usman, Saparov, Abulhair, Rando, Javier, Paleka, Daniel, Turpin, Miles, Hase, Peter, Lubana, Ekdeep Singh, Jenner, Erik, Casper, Stephen, Sourbut, Oliver, Edelman, Benjamin L., Zhang, Zhaowei, Günther, Mario, Korinek, Anton, Hernandez-Orallo, Jose, Hammond, Lewis, Bigelow, Eric, Pan, Alexander, Langosco, Lauro, Korbak, Tomasz, Zhang, Heidi, Zhong, Ruiqi, hÉigeartaigh, Seán Ó, Recchia, Gabriel, Corsi, Giulio, Chan, Alan, Anderljung, Markus, Edwards, Lilian, Petrov, Aleksandar, de Witt, Christian Schroeder, Motwani, Sumeet Ramesh, Bengio, Yoshua, Chen, Danqi, Torr, Philip H. S., Albanie, Samuel, Maharaj, Tegan, Foerster, Jakob, Tramer, Florian, He, He, Kasirzadeh, Atoosa, Choi, Yejin, Krueger, David
This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three different categories: scientific understanding of LLMs, development and deployment methods…
External link:
http://arxiv.org/abs/2404.09932
Oncolytic virotherapy, utilizing genetically modified viruses to combat cancer and trigger anti-cancer immune responses, has garnered significant attention in recent years. In our previous work arXiv:2305.12386, we developed a stochastic agent-based…
External link:
http://arxiv.org/abs/2404.06459
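As a generic sketch of the stochastic agent-based style this abstract mentions (not the model of arXiv:2305.12386), the toy below spreads an oncolytic infection across a grid of tumour cells; the infection and lysis probabilities are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cell states: 0 = empty, 1 = susceptible tumour cell, 2 = infected cell.
EMPTY, TUMOUR, INFECTED = 0, 1, 2
size = 50
grid = np.full((size, size), TUMOUR)
grid[size // 2, size // 2] = INFECTED  # a single initially infected cell

p_infect, p_lysis = 0.3, 0.1  # assumed per-step probabilities

def step(grid):
    new = grid.copy()
    for i, j in np.argwhere(grid == INFECTED):
        # The virus spreads to the four nearest susceptible neighbours.
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size and grid[ni, nj] == TUMOUR:
                if rng.random() < p_infect:
                    new[ni, nj] = INFECTED
        # Infected cells eventually lyse, leaving the site empty.
        if rng.random() < p_lysis:
            new[i, j] = EMPTY
    return new

for _ in range(100):
    grid = step(grid)

print(f"tumour cells remaining: {(grid == TUMOUR).sum()} / {size * size}")
```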
Published in:
Giornale Italiano di Endodonzia, Vol 32, Iss 1, Pp 25-30 (2018)
Aim: To present a long-term follow-up clinical case in which a compromised anterior tooth was saved by a surgical extrusion procedure. Summary: Although different techniques have been suggested for clinical crown lengthening in the anterior zone, some…
External link:
https://doaj.org/article/7567c0a57d6344bc8258ae29c08395f6
Past analyses of reinforcement learning from human feedback (RLHF) assume that the human evaluators fully observe the environment. What happens when human feedback is based only on partial observations? We formally define two failure cases: deceptive…
External link:
http://arxiv.org/abs/2402.17747
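A toy numeric illustration of how partial observability can corrupt preference feedback (the numbers are invented, not the paper's formalism): an evaluator who scores only the visible part of each trajectory ends up preferring the behaviour with the lower true return.

```python
import math

# Each trajectory's true reward splits into a visible and a hidden part.
trajectories = {
    # name: (visible reward, hidden reward)
    "looks_good": (1.0, -2.0),  # showy on-screen, harmful off-screen
    "is_good":    (0.5,  1.0),  # modest on-screen, genuinely helpful
}

def true_return(name):
    visible, hidden = trajectories[name]
    return visible + hidden

def pref_prob(a, b, beta=5.0):
    """Bradley-Terry preference based only on the *visible* reward."""
    va, vb = trajectories[a][0], trajectories[b][0]
    return 1.0 / (1.0 + math.exp(-beta * (va - vb)))

p = pref_prob("looks_good", "is_good")
print(f"P(evaluator prefers 'looks_good') = {p:.2f}")
print(f"true returns: looks_good={true_return('looks_good'):+.1f}, "
      f"is_good={true_return('is_good'):+.1f}")
# RLHF trained on such preferences would reinforce the lower-value policy.
```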