Author:
Ceperic, Vladimir; Markovic, Tomislav
Subject:
Source:
International Journal of Simulation: Systems, Science & Technology; Mar2024, Vol. 25 Issue 1, p8.1-8.6, 6p
Abstract:
The advent of Large Language Models (LLMs) has sparked significant interest in their application across various domains, including time-series forecasting. This paper introduces an encoding strategy designed to bridge the gap between the inherently quantitative nature of time-series data and the primarily textual processing capabilities of LLMs. By leveraging an innovative combination of adaptive segmentation and tokenization, inspired by the fast Brownian bridge-based aggregation (fABBA) algorithm, our method transforms time-series data into a format conducive to LLM analysis. Through evaluation on diverse datasets (DARTS series), we demonstrate that our approach, on average, improves time-series forecasting accuracy. [ABSTRACT FROM AUTHOR]
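The abstract only sketches the encoding idea. The toy Python snippet below is a minimal sketch of the general ABBA-style pipeline the abstract alludes to (adaptive piecewise-linear segmentation followed by assigning a symbol to each segment so the series becomes text an LLM can read); it is not the authors' fABBA implementation. The function names, tolerance handling, and use of scikit-learn's KMeans are illustrative assumptions; fABBA itself replaces k-means with a faster greedy aggregation step.

```python
import string
import numpy as np
from sklearn.cluster import KMeans

def compress(ts, tol=0.5):
    """Adaptive segmentation: greedily extend each segment while the
    squared error to a straight line through its endpoints stays below tol.
    Returns an array of (length, increment) pairs, one per segment."""
    pieces, start = [], 0
    for i in range(1, len(ts)):
        seg = ts[start:i + 1]
        line = np.linspace(seg[0], seg[-1], len(seg))
        if np.sum((seg - line) ** 2) > tol:
            pieces.append((i - 1 - start, ts[i - 1] - ts[start]))
            start = i - 1
    pieces.append((len(ts) - 1 - start, ts[-1] - ts[start]))
    return np.array(pieces)

def digitize(pieces, n_symbols=5):
    """Cluster the (length, increment) pairs and map each cluster to a
    letter, yielding a symbolic string suitable for an LLM prompt.
    (Real fABBA uses greedy aggregation here instead of k-means.)"""
    km = KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit(pieces)
    return "".join(string.ascii_lowercase[label] for label in km.labels_)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ts = np.cumsum(rng.normal(size=300))   # toy random-walk series
    symbols = digitize(compress(ts, tol=0.5), n_symbols=5)
    print(symbols)                         # e.g. "abacbb..." -- text for an LLM
```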
Database:
Complementary Index |
External link: