Showing 1 - 10 of 18 results for search: '"Sotudeh, Sajad"'
Author:
Sotudeh, Sajad, Goharian, Nazli
This study examines the potential of integrating Learning-to-Rank (LTR) with Query-focused Summarization (QFS) to enhance summary relevance via content prioritization. Using a secondary decoder shared with the summarization decoder, we carry out …
External link:
http://arxiv.org/abs/2411.00324
Author:
Sotudeh, Sajad, Goharian, Nazli
Query-focused summarization (QFS) is a challenging task in natural language processing that generates summaries to address specific queries. The broader field of Generative Information Retrieval (Gen-IR) aims to revolutionize information extraction …
External link:
http://arxiv.org/abs/2307.07586
Recent Transformer-based summarization models have provided a promising approach to abstractive summarization. They go beyond sentence selection and extractive strategies to deal with more complicated tasks such as novel word generation and sentence …
External link:
http://arxiv.org/abs/2302.01342
Automatically generating short summaries of users' online mental health posts could save counselors' reading time and reduce their fatigue, allowing them to provide timely responses to those seeking help to improve their mental state. Recent …
External link:
http://arxiv.org/abs/2302.00954
Mental health remains a significant public health challenge worldwide. With the increasing popularity of online platforms, many people use them to share their mental health conditions, express their feelings, and seek help from the community and …
External link:
http://arxiv.org/abs/2206.00856
Author:
Sotudeh, Sajad, Goharian, Nazli
Many scientific papers, such as those in the arXiv and PubMed collections, have abstracts ranging from 50 to 1,000 words, with an average length of approximately 200 words, where longer abstracts typically convey more information about the source …
External link:
http://arxiv.org/abs/2206.00847
Recent models for developing summarization systems consist of millions of parameters, and model performance is highly dependent on the abundance of training data. While most existing summarization corpora contain data on the order of thousands to …
External link:
http://arxiv.org/abs/2110.01159
Prior work in document summarization has mainly focused on generating short summaries of a document. While this type of summary helps provide a high-level view of a given document, it is desirable in some cases to know more detailed information about its …
External link:
http://arxiv.org/abs/2012.14136
Author:
Sotudeh, Sajad, Xiang, Tong, Yao, Hao-Ren, MacAvaney, Sean, Yang, Eugene, Goharian, Nazli, Frieder, Ophir
Offensive language detection is an important and challenging task in natural language processing. We present our submissions to the OffensEval 2020 shared task, which includes three English sub-tasks: identifying the presence of offensive language …
External link:
http://arxiv.org/abs/2007.14477
The sequence-to-sequence (seq2seq) network is a well-established model for the text summarization task. It can learn to produce readable content; however, it falls short in effectively identifying key regions of the source. In this paper, we approach the …
External link:
http://arxiv.org/abs/2005.00163