Showing 1 - 10 of 48 for search: '"ISHIBASHI, Yoichi"'
Large Language Models (LLMs) have shown remarkable performance improvements and are rapidly gaining adoption in industry. However, the methods for improving LLMs are still designed by humans, which restricts the invention of new model-improving algorithms …
External link:
http://arxiv.org/abs/2410.15639
Recent advancements in automatic code generation using large language model (LLM) agents have brought us closer to the future of automated software development. However, existing single-agent approaches face limitations in generating and improving large …
External link:
http://arxiv.org/abs/2404.02183
We explore a knowledge sanitization approach to mitigate the privacy concerns associated with large language models (LLMs). LLMs trained on a large corpus of Web data can memorize and potentially reveal sensitive or confidential information, raising …
External link:
http://arxiv.org/abs/2309.11852
Discrete prompts have been used for fine-tuning Pre-trained Language Models for diverse NLP tasks. In particular, automatic methods that generate discrete prompts from a small set of training instances have reported superior performance. However, …
External link:
http://arxiv.org/abs/2302.05619
In the field of natural language processing (NLP), continuous vector representations are crucial for capturing the semantic meanings of individual words. Yet, when it comes to the representations of sets of words, the conventional vector-based approaches …
External link:
http://arxiv.org/abs/2210.13034
Word embeddings, which often represent such analogic relations as king - man + woman = queen, can be used to change a word's attribute, including its gender. For transferring king into queen in this analogy-based manner, we subtract a difference vector …
External link:
http://arxiv.org/abs/2007.02598
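The snippet above describes attribute transfer through analogy arithmetic on word embeddings (king - man + woman = queen). Below is a minimal sketch of that arithmetic only, using toy 3-dimensional vectors in place of real pre-trained embeddings; the helper names transfer_attribute and nearest are illustrative assumptions, not the paper's implementation.

    # A minimal sketch, assuming toy vectors instead of real pre-trained embeddings.
    import numpy as np

    embeddings = {
        "king":  np.array([0.8, 0.65, 0.1]),
        "man":   np.array([0.7, 0.10, 0.1]),
        "woman": np.array([0.7, 0.10, 0.9]),
        "queen": np.array([0.8, 0.65, 0.9]),
    }

    def transfer_attribute(word, source, target):
        # Shift `word` by the attribute difference vector (target - source),
        # e.g. king + (woman - man).
        return embeddings[word] + (embeddings[target] - embeddings[source])

    def nearest(vec):
        # Return the vocabulary word with the highest cosine similarity to `vec`.
        cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(embeddings, key=lambda w: cos(embeddings[w], vec))

    print(nearest(transfer_attribute("king", "man", "woman")))  # -> queen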
Academic article
Published in:
Case Reports in Orthopedics. 12/3/2019, p1-5. 5p.
Word embedding is a fundamental technology in natural language processing. It is often exploited for tasks using sets of words, although standard methods for representing word sets and set operations remain limited. If we can leverage the advantage of …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::500f3f3ee2967435f981ac92e1f1c13f
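The abstract above notes that standard vector-based methods for representing word sets are limited. As a point of reference only, here is a minimal sketch of such a conventional baseline, assuming toy vectors: the set is collapsed into a single mean vector, which leaves no natural counterpart for set operations. This illustrates the limitation mentioned in the snippet, not the method proposed in the publication.

    # A minimal sketch of a conventional vector-based set representation
    # (mean of member embeddings); toy vectors, illustrative only.
    import numpy as np

    embeddings = {
        "cat": np.array([0.9, 0.1]),
        "dog": np.array([0.8, 0.2]),
        "car": np.array([0.1, 0.9]),
    }

    def set_vector(words):
        # Collapse a word set into a single centroid vector.
        return np.mean([embeddings[w] for w in words], axis=0)

    animals = set_vector(["cat", "dog"])   # one vector for the whole set
    vehicles = set_vector(["car"])
    # The sets are now single points; operations such as union, intersection,
    # or membership have no direct vector-level counterpart.
    print(animals, vehicles)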
Published in:
SAE Transactions, 2004 Jan 01. 113, 714-720.
External link:
https://www.jstor.org/stable/44723543