Showing 1 - 10 of 227
for search: '"Jojic, Nebojsa"'
Author:
Alavi, Seyed Hossein, Xu, Weijia, Jojic, Nebojsa, Kennett, Daniel, Ng, Raymond T., Rao, Sudha, Zhang, Haiyan, Dolan, Bill, Shwartz, Vered
We introduce GamePlot, an LLM-powered assistant that supports game designers in crafting immersive narratives for turn-based games, and allows them to test these games through collaborative gameplay and refine the plot throughout the process. Our…
External link:
http://arxiv.org/abs/2411.02714
Diffusion models have dominated the field of large, generative image models, with the prime examples of Stable Diffusion and DALL-E 3 being widely adopted. These models have been trained to perform text-conditioned generation on vast numbers of image…
External link:
http://arxiv.org/abs/2410.18804
Humans have the ability to learn new tasks by inferring high-level concepts from existing solutions, then manipulating these concepts in lieu of the raw data. Can we automate this process by deriving latent semantic structures in a document collection…
External link:
http://arxiv.org/abs/2410.05481
Author:
Peng, Xiangyu, Quaye, Jessica, Rao, Sudha, Xu, Weijia, Botchway, Portia, Brockett, Chris, Jojic, Nebojsa, DesGarennes, Gabriel, Lobb, Ken, Xu, Michael, Leandro, Jorge, Jin, Claire, Dolan, Bill
Published in:
IEEE Conference on Games 2024
We explore how interaction with large language models (LLMs) can give rise to emergent behaviors, empowering players to participate in the evolution of game narratives. Our testbed is a text-adventure game in which players attempt to solve a mystery…
External link:
http://arxiv.org/abs/2404.17027
Author:
Wang, Xinyuan, Li, Chenxi, Wang, Zhen, Bai, Fan, Luo, Haotian, Zhang, Jiayou, Jojic, Nebojsa, Xing, Eric P., Hu, Zhiting
Highly effective, task-specific prompts are often heavily engineered by experts to integrate detailed instructions and domain insights based on a deep understanding of both the instincts of large language models (LLMs) and the intricacies of the target t…
External link:
http://arxiv.org/abs/2310.16427
Author:
Momennejad, Ida, Hasanbeig, Hosein, Vieira, Felipe, Sharma, Hiteshi, Ness, Robert Osazuwa, Jojic, Nebojsa, Palangi, Hamid, Larson, Jonathan
Recently an influx of studies claim emergent cognitive abilities in large language models (LLMs). Yet, most rely on anecdotes, overlook contamination of training sets, or lack systematic evaluation involving multiple tasks, control conditions, multip…
External link:
http://arxiv.org/abs/2309.15129
Agency, the capacity to proactively shape events, is central to how humans interact and collaborate. While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the agency that these mo…
External link:
http://arxiv.org/abs/2305.12815
We introduce Reprompting, an iterative sampling algorithm that automatically learns the Chain-of-Thought (CoT) recipes for a given task without human intervention. Through Gibbs sampling, Reprompting infers the CoT recipes that work consistently well…
External link:
http://arxiv.org/abs/2305.09993
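The Gibbs-sampling loop described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm_generate_cot` and `llm_solve` are hypothetical stand-ins for the actual LLM calls, and the acceptance rule is simplified to "keep a resampled recipe only if it still yields the correct training answer."

```python
import random

def reprompting(train_set, llm_generate_cot, llm_solve, n_iters=10, k=3, seed=0):
    """Gibbs-style sampling sketch of Reprompting (hypothetical stubs, not
    the paper's exact procedure): iteratively resample the chain-of-thought
    (CoT) recipe for each training problem, conditioned on recipes sampled
    for the other problems, and accept a candidate recipe only when it
    still solves its training problem."""
    rng = random.Random(seed)
    # Initialize one CoT recipe per training problem (zero-shot style).
    recipes = [llm_generate_cot([], q) for q, _ in train_set]
    for _ in range(n_iters):
        for i, (q, answer) in enumerate(train_set):
            # Condition on up to k recipes sampled from the other problems.
            other_idx = [j for j in range(len(train_set)) if j != i]
            others = [recipes[j]
                      for j in rng.sample(other_idx, min(k, len(other_idx)))]
            candidate = llm_generate_cot(others, q)
            # Accept the resampled recipe only if the answer stays correct.
            if llm_solve(others + [candidate], q) == answer:
                recipes[i] = candidate
    return recipes
```

With deterministic stub functions in place of the LLM, `reprompting([(1, 2), (2, 4)], ...)` returns one recipe per training problem; with a real model, the stubs would be replaced by prompted generation and answer extraction.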
We demonstrate that, through appropriate prompting, the GPT-3 family of models can be triggered to perform the iterative behaviours necessary to execute (rather than just write or recall) programs that involve loops, including several popular algorithms foun…
External link:
http://arxiv.org/abs/2303.14310
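The kind of prompting the abstract above describes can be illustrated with a template that forces the model to emit one explicit state line per loop iteration instead of jumping to the answer. The template below is a hypothetical sketch of this idea, not the paper's actual prompt:

```python
def execution_prompt(program_desc, initial_state, max_steps=16):
    """Build a prompt that pushes an LLM to *execute* a loop step by step
    (hypothetical template, illustrative only). Requiring one 'step <n>:
    state=<state>' line per iteration regiments the model into tracking
    intermediate state rather than recalling or guessing the final output."""
    return (
        "Execute the following program step by step.\n"
        f"Program: {program_desc}\n"
        f"Initial state: {initial_state}\n"
        "After each iteration, print a line 'step <n>: state=<state>'.\n"
        f"Stop after at most {max_steps} steps, then print 'HALT: <final state>'."
    )
```

The returned string would be sent to the model as-is; the per-step output format also makes the model's execution trace easy to check against a reference interpreter.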
Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). Howe…
External link:
http://arxiv.org/abs/2210.01293