Showing 1 - 10 of 412 for search: '"Liu Tian-Yu"'
Published in:
Nanophotonics, Vol 12, Iss 4, Pp 753-760 (2023)
Most previous TMDC-photon coupling devices were based on the A exciton due to its high oscillator strength and large exciton binding energy. Less effort has been devoted to modulating the emission of the B exciton and Rydberg states in T…
External link:
https://doaj.org/article/3864320bead24dc39e2c50f847de6b4e
We tackle the question of whether Large Language Models (LLMs), viewed as dynamical systems with state evolving in the embedding space of symbolic tokens, are observable. That is, whether there exist multiple 'mental' state trajectories that yield th…
External link:
http://arxiv.org/abs/2405.14061
Author:
Achille, Alessandro, Steeg, Greg Ver, Liu, Tian Yu, Trager, Matthew, Klingenberg, Carson, Soatto, Stefano
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning. In legal doctrine, however, determining the degree of similarity between works requires subjective analysis, and fact-finders (judges and ju…
External link:
http://arxiv.org/abs/2402.08919
Author:
Liu, Tian Yu, Trager, Matthew, Achille, Alessandro, Perera, Pramuditha, Zancato, Luca, Soatto, Stefano
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text. This strategy is prompt-free, does not require fine-tuning, and is applicable to a…
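A toy sketch of the idea, with a hypothetical three-token bigram model standing in for an autoregressive LM (the transition matrix below is invented for illustration): an input is represented by the distribution over its possible continuations.

```python
import numpy as np

# toy autoregressive model over a 3-token vocabulary: next-token
# probabilities depend only on the current token (a bigram assumption,
# standing in for a full language model)
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

def trajectory_distribution(last_token, steps):
    # marginal distribution over tokens after `steps` continuation steps;
    # the collection of such marginals is a prompt-free representation
    dist = np.zeros(3)
    dist[last_token] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return dist

rep_a = trajectory_distribution(0, 2)  # representation of an input ending in token 0
rep_b = trajectory_distribution(1, 2)  # representation of an input ending in token 1
```

Inputs that license different continuations get different representations; inputs whose continuation distributions coincide are, under this view, synonymous.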
External link:
http://arxiv.org/abs/2310.18348
Unsupervised depth completion and estimation methods are trained by minimizing reconstruction error. Block artifacts from resampling, intensity saturation, and occlusions are amongst the many undesirable by-products of common data augmentation scheme…
External link:
http://arxiv.org/abs/2310.09739
Vision Transformer (ViT) architectures represent images as collections of high-dimensional vectorized tokens, each corresponding to a rectangular non-overlapping patch. This representation trades spatial granularity for embedding dimensionality, and…
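The patch-to-token mapping described above can be sketched in a few lines; the 224×224 input and 16-pixel patch size are the standard ViT-Base settings, assumed here for illustration:

```python
import numpy as np

H = W = 224  # image height and width (standard ViT input size)
C = 3        # color channels
P = 16       # side length of one square patch

image = np.zeros((H, W, C))

# split the image into non-overlapping P x P patches, then flatten
# each patch into a single token vector of dimension P*P*C
patches = image.reshape(H // P, P, W // P, P, C).swapaxes(1, 2)
tokens = patches.reshape(-1, P * P * C)

print(tokens.shape)  # (196, 768): 14*14 patches, each a 768-dim token
```

This makes the trade-off concrete: spatial resolution drops from 224×224 positions to a 14×14 grid, while each position carries a 768-dimensional embedding.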
External link:
http://arxiv.org/abs/2310.03967
We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning linearized transformers obtained by computing a First-order Taylor Expansion around a pre-trained initialization. We show that the Jacobian-Vector Product resulting from line…
External link:
http://arxiv.org/abs/2307.08122
Author:
Liu, Tian Yu, Soatto, Stefano
Tangent Model Composition (TMC) is a method to combine component models independently fine-tuned around a pre-trained point. Component models are tangent vectors to the pre-trained model that can be added, scaled, or subtracted to support incremental…
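The additive structure of tangent vectors can be illustrated with a toy linear model (an assumption made for brevity; for a linear model the tangent approximation is exact, and the weights below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = rng.normal(size=4)        # stand-in for pre-trained weights
tau_a = rng.normal(size=4) * 0.1   # tangent vector from fine-tuning on task A
tau_b = rng.normal(size=4) * 0.1   # tangent vector from fine-tuning on task B
x = rng.normal(size=4)             # a fixed input

def f_lin(tau):
    # linearized model output: f(theta0) + J @ tau; for this linear
    # model the Jacobian with respect to the parameters is just x
    return theta0 @ x + x @ tau

# composing components is addition in the tangent space, so the combined
# model's output decomposes into the components' contributions
combined = f_lin(tau_a + tau_b)
separate = f_lin(tau_a) + f_lin(tau_b) - theta0 @ x
```

The same additivity lets a component be removed by subtracting its tangent vector, which is what makes incremental addition and deletion of tasks cheap in this regime.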
External link:
http://arxiv.org/abs/2307.08114
We tackle the question of whether an agent can, by suitable choice of prompts, control an AI bot to any state. To that end, we first introduce a formal definition of "meaning" that is amenable to analysis. Then, we characterize "meaningful data"…
External link:
http://arxiv.org/abs/2305.18449
Author:
Zancato, Luca, Achille, Alessandro, Liu, Tian Yu, Trager, Matthew, Perera, Pramuditha, Soatto, Stefano
We introduce Train/Test-Time Adaptation with Retrieval (${\rm T^3AR}$), a method to adapt models both at train and test time by means of a retrieval module and a searchable pool of external samples. Before inference, ${\rm T^3AR}$ adapts a given mode…
External link:
http://arxiv.org/abs/2303.14333