Showing 1 - 10 of 3,794 for search: '"Oota A"'
Transformer-based models have revolutionized the field of natural language processing. To understand why they perform so well and to assess their reliability, several studies have focused on questions such as: Which linguistic properties are encoded…
External link:
http://arxiv.org/abs/2410.02611
Identifying users' opinions and stances in long conversation threads on various topics can be extremely critical for enhanced personalization, market research, political campaigns, customer service, conflict resolution, targeted advertising, and cont…
External link:
http://arxiv.org/abs/2406.16833
Author:
Egami, Shusaku, Ugai, Takanori, Oota, Masateru, Matsushita, Kyoumoto, Kawamura, Takahiro, Kozaki, Kouji, Fukuda, Ken
Published in:
IEEE Access, Volume 11, pp.142030-142042, 2023
Knowledge Graphs (KGs) such as Resource Description Framework (RDF) data represent relationships between various entities through the structure of triples (subject, predicate, object). Knowledge graph embedding (KGE) is crucial in machine learning…
External link:
http://arxiv.org/abs/2312.15626
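The snippet above describes KGs as sets of (subject, predicate, object) triples that KGE methods map into vector spaces. A minimal sketch of that idea using the common TransE scoring function (the toy triples, names, and dimensions here are illustrative assumptions, not taken from the indexed paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KG as RDF-style (subject, predicate, object) triples
triples = [
    ("Tokyo", "capitalOf", "Japan"),
    ("Paris", "capitalOf", "France"),
]

entities = sorted({e for s, _, o in triples for e in (s, o)})
relations = sorted({r for _, r, _ in triples})

# Random embeddings stand in for trained ones in this sketch
dim = 8
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def transe_score(s, r, o):
    """TransE plausibility: -||s + r - o||; closer to 0 is more plausible."""
    return -np.linalg.norm(ent_emb[s] + rel_emb[r] - ent_emb[o])
```

In a real KGE pipeline the embeddings would be trained so that observed triples score higher than corrupted (negative) ones.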
Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of in…
External link:
http://arxiv.org/abs/2311.04664
Author:
Masaru Oota
Published in:
Journal of Medical Case Reports, Vol 18, Iss 1, Pp 1-6 (2024)
Abstract: Background: This case report explores the long-term dynamics of insulin secretion and glycemic control in two patients with type 2 diabetes mellitus over 20 years. The observations underscore the impact of lifestyle interventions, including w…
External link:
https://doaj.org/article/a3b6e0a5f79f44c69d3b876eb5ced268
Author:
Oota, Subba Reddy, Chen, Zijiao, Gupta, Manish, Bapi, Raju S., Jobard, Gael, Alexandre, Frederic, Hinaut, Xavier
Can we obtain insights about the brain using AI models? How is the information in deep learning models related to brain recordings? Can we improve AI models with the help of brain recordings? Such questions can be tackled by studying brain recordings…
External link:
http://arxiv.org/abs/2307.10246
Author:
Neerudu, Pavan Kalyan Reddy, Oota, Subba Reddy, Marreddy, Mounika, Kagita, Venkateswara Rao, Gupta, Manish
Transformer-based pretrained models like BERT, GPT-2, and T5 have been finetuned for a large number of natural language processing (NLP) tasks and have been shown to be very effective. However, while finetuning, what changes across layers in these mo…
External link:
http://arxiv.org/abs/2305.14453
Syntactic parsing is the task of assigning a syntactic structure to a sentence. There are two popular syntactic parsing methods: constituency and dependency parsing. Recent works have used syntactic embeddings based on constituency trees, incremental…
External link:
http://arxiv.org/abs/2302.08589
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models are developed mainly for English, often requiring a large training corpus and efficient pre-trained language models and t…
External link:
http://arxiv.org/abs/2212.12937
Language models have been shown to be very effective in predicting brain recordings of subjects experiencing complex language stimuli. For a deeper understanding of this alignment, it is important to understand the correspondence between the detailed…
External link:
http://arxiv.org/abs/2212.08094