Showing 1 - 10 of 46,694 for search: '"YANG, JING"'
Pre-trained Transformers, through in-context learning (ICL), have demonstrated exceptional capabilities to adapt to new tasks using example prompts without model updates. Transformer-based wireless receivers, where prompts consist of the pilot …
External link:
http://arxiv.org/abs/2411.07600
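The abstract is truncated above, but the setup it names is concrete: the receiver's prompt is a sequence of (pilot, received-signal) demonstration pairs followed by a query observation, and a frozen pre-trained Transformer maps that prompt to a symbol estimate with no weight updates. A minimal sketch of the prompt assembly, with a hypothetical `model` interface (the names here are illustrative, not the paper's):

```python
import numpy as np

def build_icl_prompt(pilots, received, query):
    """Stack (pilot, received) demonstration pairs and the query
    observation into one token sequence for a frozen Transformer.

    pilots:   (k, d) known transmitted pilot symbols
    received: (k, d) corresponding channel outputs
    query:    (d,)   new channel output whose input we want to infer
    """
    pairs = np.concatenate([pilots, received], axis=-1)        # (k, 2d) context tokens
    query_tok = np.concatenate([np.zeros_like(query), query])  # mask the unknown input
    return np.vstack([pairs, query_tok])                       # (k + 1, 2d) prompt

# Usage (hypothetical frozen model; nothing is fine-tuned at inference):
#   prompt = build_icl_prompt(pilots, received, y_query)
#   x_hat = model(prompt)[-1]   # read the estimate off the last position
```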
Distributionally robust offline reinforcement learning (RL) aims to find a policy that performs best under the worst environment within an uncertainty set, using an offline dataset collected from a nominal model. While recent advances in robust RL …
External link:
http://arxiv.org/abs/2411.07514
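The truncation hides the paper's contribution, but the distributionally robust objective it starts from has a standard shape: value iteration with an inner worst case over transition models near the nominal one. A toy sketch with the uncertainty set given as an explicit list of candidate models (generic robust dynamic programming, not the paper's algorithm):

```python
import numpy as np

def robust_value_iteration(models, rewards, gamma=0.95, iters=500):
    """Robust value iteration over an explicit uncertainty set.

    models:  list of transition tensors P[a, s, s'] (candidate environments)
    rewards: array R[a, s]
    Returns the robust value function V(s) = max_a min_P (Bellman backup).
    """
    V = np.zeros(rewards.shape[1])
    for _ in range(iters):
        # Q[a, s] under each candidate model, then the worst case over the set.
        Q_worst = np.min([rewards + gamma * P @ V for P in models], axis=0)
        V = Q_worst.max(axis=0)
    return V
```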
We propose an analytical thermodynamic model for describing defect phase transformations, which we term the statistical phase evaluation approach (SPEA). The SPEA model assumes a Boltzmann distribution of finite-size phase fractions and calculates the …
External link:
http://arxiv.org/abs/2411.02228
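The sentence breaks off after "calculates the", but the stated Boltzmann assumption has a standard mathematical reading: each candidate phase $i$ with free energy $G_i$ receives a fraction proportional to its Boltzmann weight. A hedged illustration of that assumption, in notation of my choosing rather than the paper's:

```latex
x_i = \frac{\exp\left(-G_i / k_\mathrm{B} T\right)}
           {\sum_j \exp\left(-G_j / k_\mathrm{B} T\right)},
\qquad \sum_i x_i = 1
```

Thermodynamic observables can then be computed as $x_i$-weighted averages over the competing phases.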
We study the three-dimensional Carrollian field theory on the Rindler horizon, which is dual to a bulk massless scalar field theory in the four-dimensional Rindler wedge. The Carrollian field theory can be mapped to a two-dimensional Euclidean field theory …
External link:
http://arxiv.org/abs/2410.20372
Large Language Models (LLMs) have shown significant potential in designing reward functions for Reinforcement Learning (RL) tasks. However, obtaining high-quality reward code often involves human intervention, numerous LLM queries, or repetitive RL …
External link:
http://arxiv.org/abs/2410.14660
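The pipeline this abstract gestures at is typically: ask an LLM for a candidate reward function as code, evaluate a policy trained under it, and iterate. A minimal sketch of that loop with hypothetical `query_llm` and `evaluate` helpers (both are stand-ins, not the paper's API):

```python
def generate_reward_function(task_description, query_llm, evaluate, rounds=3):
    """Iteratively ask an LLM for reward code and keep the best candidate.

    query_llm: str -> str, returns Python source defining reward(state, action)
    evaluate:  callable -> float, trains/evaluates a policy under the reward
    """
    best_fn, best_score = None, float("-inf")
    prompt = f"Write a Python function reward(state, action) for: {task_description}"
    for _ in range(rounds):
        source = query_llm(prompt)
        namespace = {}
        exec(source, namespace)      # trust boundary: sandbox this in practice
        candidate = namespace["reward"]
        score = evaluate(candidate)
        if score > best_score:
            best_fn, best_score = candidate, score
        prompt += f"\nPrevious attempt scored {score:.3f}; improve it."
    return best_fn
```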
An intriguing property of the Transformer is its ability to perform in-context learning (ICL), where the Transformer can solve different inference tasks without parameter updates, based on the contextual information provided by the corresponding input …
External link:
http://arxiv.org/abs/2410.13981
Large Language Models (LLMs) rely on the contextual information embedded in examples/demonstrations to perform in-context learning (ICL). To mitigate the risk of LLMs potentially leaking private information contained in the examples in the prompt, we …
External link:
http://arxiv.org/abs/2410.12085
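One standard recipe for privacy-preserving ICL (described here generically; the paper's own mechanism is cut off above) is to split the sensitive examples across disjoint prompts, collect one answer per prompt, and release only a noisy aggregate, so each example influences at most one vote. A sketch using the Laplace mechanism for a discrete answer:

```python
import numpy as np

def private_icl_vote(answers, n_labels, epsilon, rng=None):
    """Differentially private aggregation of per-prompt ICL answers.

    answers: one integer label predicted from each disjoint-example prompt.
    Each sensitive example sits in exactly one prompt, so changing one
    example moves at most one vote between two histogram bins
    (L1 sensitivity 2), hence Laplace noise of scale 2 / epsilon.
    """
    rng = rng if rng is not None else np.random.default_rng()
    counts = np.bincount(answers, minlength=n_labels).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=n_labels)
    return int(np.argmax(noisy))
```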
While transformers have demonstrated impressive capacities for in-context learning (ICL) in practice, theoretical understanding of the underlying mechanism enabling transformers to perform ICL is still in its infancy. This work aims to theoretically …
External link:
http://arxiv.org/abs/2410.11778
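A common formalization in this theoretical line of work, offered as background rather than as this paper's specific result, is in-context linear regression: the prompt is $(x_1, y_1, \dots, x_k, y_k, x_{k+1})$ with $y_i = w^\top x_i$, and one asks whether the trained Transformer's prediction coincides with a classical estimator, e.g. one step of gradient descent on the in-context least-squares loss:

```latex
\hat{y}_{k+1} = x_{k+1}^\top w_1,
\qquad
w_1 = w_0 - \eta \, \nabla_w \left. \frac{1}{2k} \sum_{i=1}^{k}
      \bigl(w^\top x_i - y_i\bigr)^2 \right|_{w = w_0}
```

For $w_0 = 0$ this prediction reduces to $\hat{y}_{k+1} = \tfrac{\eta}{k} \sum_{i=1}^{k} y_i \, x_i^\top x_{k+1}$.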
The in-context learning (ICL) capability of pre-trained models based on the Transformer architecture has received growing interest in recent years. While theoretical understanding has been obtained for ICL in reinforcement learning (RL), the previous …
External link:
http://arxiv.org/abs/2410.09701
Authors:
Yang, Jing; Jiang, Minyue; Yang, Sen; Tan, Xiao; Li, Yingying; Ding, Errui; Wang, Hanli; Wang, Jingdong
The construction of a vectorized High-Definition (HD) map typically requires capturing both the category and geometry information of map elements. Current state-of-the-art methods often adopt solely either a point-level or an instance-level representation, …
External link:
http://arxiv.org/abs/2410.07733
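The cut-off sentence contrasts point-level and instance-level map representations; the distinction is easy to make concrete as data structures (field names are illustrative, not taken from the paper):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in map coordinates

@dataclass
class PointLevelMap:
    """Point-level: a flat set of points, each carrying its own class."""
    points: List[Point]
    labels: List[int]          # per-point category; element geometry is implicit

@dataclass
class MapInstance:
    """Instance-level: one map element (lane line, crosswalk, ...) as a whole."""
    category: int              # per-instance category
    polyline: List[Point]      # ordered geometry of the element

@dataclass
class InstanceLevelMap:
    instances: List[MapInstance]
```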