Showing 1 - 10 of 40,111
for search: '"Nussbaum"'
Author:
Kapoor, K., Hoseini, S., Choi, J., Nussbaum, B. E., Zhang, Y., Shetty, K., Skaar, C., Ward, M., Wilson, L., Shinbrough, K., Edwards, E., Wiltfong, R., Lualdi, C. P., Cohen, Offir, Kwiat, P. G., Lorenz, V. O.
We present a quantum network that distributes entangled photons between the University of Illinois Urbana-Champaign and a public library in Urbana. The network allows members of the public to perform measurements on the photons. We describe its design…
External link:
http://arxiv.org/abs/2410.06398
This technical report describes the training of nomic-embed-vision, a highly performant, open-code, open-weights image embedding model that shares the same latent space as nomic-embed-text. Together, nomic-embed-vision and nomic-embed-text form the f…
External link:
http://arxiv.org/abs/2406.18587
Published in:
BSGF - Earth Sciences Bulletin, Vol 194, p 5 (2023)
Field observations and seismic interpretations testify that the front of the Jura fold-and-thrust belt is still subject to compressive deformation, but whether the basement is deforming (thick-skinned) or not (thin-skinned) is an active question. W…
External link:
https://doaj.org/article/bfba7568692c4f9c9c5e6e361e6c928b
Author:
Jalota, Rricha, Verwimp, Lyan, Nussbaum-Thom, Markus, Mousa, Amr, Argueta, Arturo, Oualil, Youssef
Neural Network Language Models (NNLMs) for Virtual Assistants (VAs) are generally language-, region-, and in some cases, device-dependent, which increases the effort to scale and maintain them. Combining NNLMs for one or more of the categories is one…
External link:
http://arxiv.org/abs/2403.18783
This technical report describes the training of nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data, 8192 context length English text embedding model that outperforms both OpenAI Ada-002 and OpenAI text-embedding-3…
External link:
http://arxiv.org/abs/2402.01613
Author:
Anand, Yuvanesh, Nussbaum, Zach, Treat, Adam, Miller, Aaron, Guo, Richard, Schmidt, Ben, Community, GPT4All, Duderstadt, Brandon, Mulyar, Andriy
Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. The accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure…
External link:
http://arxiv.org/abs/2311.04931