Showing 1 - 10 of 21
for search: '"Kwon, Heeyoung"'
In a continuous-time Kyle setting, we prove global existence of an equilibrium when the insider faces a terminal trading constraint. We prove that our equilibrium model produces output consistent with several empirical stylized facts such as autocorr…
External link:
http://arxiv.org/abs/2206.08117
Author:
Chen, Gong, Robertson, MacCallum, Kwon, Heeyoung, Won, Changyeon, Schmid, Andreas K., Liu, Kai
Published in:
Journal of Vacuum Science & Technology A, 39, 053410 (2021)
The domain structure in in-plane magnetized Fe/Ni/W(110) films is investigated using spin-polarized low-energy electron microscopy. A novel transition of the domain wall shape from a zigzag-like pattern to straight is observed as a function of the fi…
External link:
http://arxiv.org/abs/2108.08427
Language understanding must identify the logical connections between events in a discourse, but core events are often unstated due to their commonsense nature. This paper fills in these missing events by generating precondition events. Precondition g…
External link:
http://arxiv.org/abs/2106.07117
Author:
Kwon, Heeyoung, Koupaee, Mahnaz, Singh, Pratyush, Sawhney, Gargi, Shukla, Anmol, Kallur, Keerthi Kumar, Chambers, Nathanael, Balasubramanian, Niranjan
Preconditions provide a form of logical connection between events that explains why some events occur together and information that is complementary to the more widely studied relations such as causation, temporal ordering, entailment, and discourse…
External link:
http://arxiv.org/abs/2010.02429
Author:
Gaonkar, Radhika, Kwon, Heeyoung, Bastan, Mohaddeseh, Balasubramanian, Niranjan, Chambers, Nathanael
Predicting how events induce emotions in the characters of a story is typically seen as a standard multi-label classification task, which usually treats labels as anonymous classes to predict. They ignore information that may be conveyed by the emoti…
External link:
http://arxiv.org/abs/2006.05489
Early work on narrative modeling used explicit plans and goals to generate stories, but the language generation itself was restricted and inflexible. Modern methods use language models for more robust generation, but often lack an explicit representa…
External link:
http://arxiv.org/abs/2004.03762
Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to…
External link:
http://arxiv.org/abs/1904.09380
Sentence encoders are typically trained on language modeling tasks with large unlabeled datasets. While these encoders achieve state-of-the-art results on many sentence-level tasks, they are difficult to train, with long training cycles. We introduce…
External link:
http://arxiv.org/abs/1808.03840
Academic article
This result is not available to unauthenticated users; log in to view it.
Academic article
This result is not available to unauthenticated users; log in to view it.