Showing 1 - 10 of 133
for search: '"A. Volkovs"'
Author:
Ma, Junwei, Thomas, Valentin, Hosseinzadeh, Rasa, Kamkari, Hamidreza, Labach, Alex, Cresswell, Jesse C., Golestan, Keyvan, Yu, Guangwei, Volkovs, Maksims, Caterini, Anthony L.
The challenges faced by neural networks on tabular data are well-documented and have hampered the progress of tabular foundation models. Techniques leveraging in-context learning (ICL) have shown promise here, allowing for dynamic adaptation to unseen …
External link:
http://arxiv.org/abs/2410.18164
Author:
Thomas, Valentin, Ma, Junwei, Hosseinzadeh, Rasa, Golestan, Keyvan, Yu, Guangwei, Volkovs, Maksims, Caterini, Anthony
Tabular data is a pervasive modality spanning a wide range of domains, and the inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less …
External link:
http://arxiv.org/abs/2406.05207
Author:
Vouitsis, Noël, Liu, Zhaoyan, Gorti, Satya Krishna, Villecroze, Valentin, Cresswell, Jesse C., Yu, Guangwei, Loaiza-Ganem, Gabriel, Volkovs, Maksims
The goal of multimodal alignment is to learn a single latent space that is shared between multimodal inputs. The most powerful models in this space have been trained using massive datasets of paired inputs and large-scale computational resources, making …
External link:
http://arxiv.org/abs/2312.10144
Transformer-based models have greatly pushed the boundaries of time series forecasting recently. Existing methods typically encode time series data into $\textit{patches}$ using one or a fixed set of patch lengths. This, however, could result in a …
External link:
http://arxiv.org/abs/2311.18780
Author:
Sui, Yi, Wu, Tongzi, Cresswell, Jesse C., Wu, Ga, Stein, George, Huang, Xiao Shi, Zhang, Xiaochen, Volkovs, Maksims
Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations. While augmentation-based SSRL algorithms push the boundaries of performance …
External link:
http://arxiv.org/abs/2310.07756
Author:
Labach, Alex, Pokhrel, Aslesha, Huang, Xiao Shi, Zuberi, Saba, Yi, Seung Eun, Volkovs, Maksims, Poutanen, Tomi, Krishnan, Rahul G.
Electronic health records (EHRs) recorded in hospital settings typically contain a wide range of numeric time series data that is characterized by high sparsity and irregular observations. Effective modelling for such data must exploit its time series …
External link:
http://arxiv.org/abs/2304.13017
DiMS: Distilling Multiple Steps of Iterative Non-Autoregressive Transformers for Machine Translation
The computational benefits of iterative non-autoregressive transformers decrease as the number of decoding steps increases. As a remedy, we introduce Distill Multiple Steps (DiMS), a simple yet effective distillation technique to decrease the number …
External link:
http://arxiv.org/abs/2206.02999
Author:
Gorti, Satya Krishna, Vouitsis, Noel, Ma, Junwei, Golestan, Keyvan, Volkovs, Maksims, Garg, Animesh, Yu, Guangwei
In text-video retrieval, the objective is to learn a cross-modal similarity function between a text and a video that ranks relevant text-video pairs higher than irrelevant pairs. However, videos inherently express a much wider gamut of information than …
External link:
http://arxiv.org/abs/2203.15086
Published in:
Nature Communications 14, 2899 (2023)
Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaborations on decentralized data with …
External link:
http://arxiv.org/abs/2111.11343
Time series data introduces two key challenges for explainability methods: firstly, observations of the same feature over subsequent time steps are not independent, and secondly, the same feature can have varying importance to model predictions over …
External link:
http://arxiv.org/abs/2107.14317