Showing 1 - 10 of 12 for search: '"Joshua Maynez"'
Author:
Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, Mirella Lapata
Published in:
Transactions of the Association for Computational Linguistics, Vol 11 (2023)
External link:
https://doaj.org/article/8faa2d36a9fa414b8e03374b25025494
Author:
Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, Mirella Lapata
Published in:
Transactions of the Association for Computational Linguistics, Vol 11 (2023)
External link:
https://doaj.org/article/f8a276925972406791f8ca06dbf1a650
Published in:
Transactions of the Association for Computational Linguistics, Vol 9, Pp 1475-1492 (2021)
Abstract: We introduce a simple but flexible mechanism to learn an intermediate plan to ground the generation of abstractive summaries. Specifically, we prepend (or prompt) target summaries with entity chains: ordered sequences of entities mentioned in the summary …
External link:
https://doaj.org/article/6b04d20f306a41088885f7c5939a9f80
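
The entity-chain mechanism described in the abstract above lends itself to a short illustration: extract the entities mentioned in a target summary, in order, and prepend them to the target so the model learns to plan before generating. A minimal Python sketch; the capitalized-word heuristic stands in for a real NER system, and the [ENTITYCHAIN]/[SUMMARY] markers are illustrative, not necessarily the paper's exact tokens:

import re

def entity_chain_prompt(summary: str) -> str:
    # Crude stand-in for NER: maximal runs of capitalized words.
    ents = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", summary)
    seen, chain = set(), []
    for e in ents:
        if e not in seen:  # keep first-mention order, drop repeats
            seen.add(e)
            chain.append(e)
    return "[ENTITYCHAIN] " + " | ".join(chain) + " [SUMMARY] " + summary

print(entity_chain_prompt("Joshua Maynez and Mirella Lapata work on summarization at Google."))

At training time the model is fit on these augmented targets; at inference it first emits the plan, then the summary conditioned on it.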
Published in:
Advances in Civil Engineering, Vol 2016 (2016)
The recent rise of terrorist attacks has reinforced the need for mitigation of damage caused by blast loading on unreinforced masonry walls. The primary goal of these techniques is to prevent the loss of life while simultaneously preserving the integrity …
External link:
https://doaj.org/article/7812942e5ab040ea9b7cb369ae565184
Published in:
Findings of the Association for Computational Linguistics: NAACL 2022.
Published in:
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
Published in:
ACL/IJCNLP (1)
Aralikatte, R., Narayan, S., Maynez, J., Rothe, S. & McDonald, R. 2021, 'Focus attention: Promoting faithfulness and diversity in summarization', in ACL-IJCNLP 2021: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Association for Computational Linguistics, Virtual, 01/08/2021, pp. 6078-6095. https://doi.org/10.18653/v1/2021.acl-long.474
Professional summaries are written with document-level information, such as the theme of the document, in mind. This is in contrast with most seq2seq decoders, which simultaneously learn to focus on salient content while deciding what to generate at each decoding step …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9333312f70fafd66deb6825845736bf2
http://arxiv.org/abs/2105.11921
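
As a rough illustration of conditioning every decoding step on document-level information, one could add a document-derived bias to the decoder's output logits. This is a simplified stand-in, not the paper's actual Focus Attention formulation; all names and shapes here are assumptions:

import torch
import torch.nn as nn

class TopicBiasedHead(nn.Module):
    def __init__(self, hidden: int, vocab: int):
        super().__init__()
        self.lm_head = nn.Linear(hidden, vocab)   # standard per-step projection
        self.doc_head = nn.Linear(hidden, vocab)  # document-level vocabulary bias

    def forward(self, step_state, doc_repr):
        # step_state: (batch, hidden) decoder state at the current step
        # doc_repr:   (batch, hidden) pooled encoder representation of the document
        return self.lm_head(step_state) + self.doc_head(doc_repr)

head = TopicBiasedHead(hidden=8, vocab=100)
logits = head(torch.randn(2, 8), torch.randn(2, 8))  # shape (2, 100)

The bias is computed once from the document and reused at every step, which is the intuition the abstract points at: the theme of the document steers each local generation decision.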
Published in:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Published in:
EMNLP (1)
We propose encoder-centric stepwise models for extractive summarization using structured transformers (HiBERT and Extended Transformers). We enable stepwise summarization by injecting the previously generated summary into the structured transformer as an auxiliary sub-structure …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e6acbc6696ed13ab80901410c5129c98
http://arxiv.org/abs/2010.02744
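
The stepwise idea in the abstract above can be sketched as a loop: score the remaining sentences conditioned on the summary built so far, take the best one, and repeat. The scorer below is a hypothetical stand-in for the structured-transformer encoder, not the paper's API:

def stepwise_extract(sentences, score_fn, budget=3):
    summary, remaining = [], list(sentences)
    while remaining and len(summary) < budget:
        # Condition scoring on the partial summary generated so far.
        scores = score_fn(remaining, summary)
        best = max(range(len(remaining)), key=scores.__getitem__)
        summary.append(remaining.pop(best))
    return summary

# Toy scorer for demonstration: prefer longer candidate sentences.
toy_scorer = lambda candidates, partial: [len(s) for s in candidates]
print(stepwise_extract(["Short.", "A much longer sentence here.", "Mid one."], toy_scorer))

Feeding the partial summary back in is what makes the model "stepwise": each extraction decision can depend on what has already been selected.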
Published in:
ACL
It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2a6ddcb40ec46d21a57577302b909358
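
For context, the "standard likelihood training" this abstract refers to is token-level maximum likelihood under teacher forcing, with decoding handled approximately (beam search, sampling) because exact search over full sequences is intractable:

\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t}, x)

where x is the input document and y_{<t} is the gold target prefix at training time.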