
We present a novel framework for multivariate time series representation learning based on the transformer encoder architecture. The framework includes an unsupervised pre-training scheme, which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and even without leveraging additional unlabeled data, i.e., by reusing the existing data samples. Evaluating our framework on several public multivariate time series datasets from various domains and with diverse characteristics, we demonstrate that it performs significantly better than the best currently available methods for regression and classification, even for datasets consisting of only a few hundred training samples. Given the pronounced interest in unsupervised learning across nearly all domains in the sciences and in industry, these findings represent an important landmark, presenting the first unsupervised method shown to push the limits of state-of-the-art performance for multivariate time series regression and classification.
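As a concrete illustration of the kind of model the abstract describes, the sketch below pairs a standard transformer encoder over multivariate time steps with a masked-value reconstruction objective as one plausible form of unsupervised pre-training. This is a minimal sketch, not the authors' implementation: the module names, dimensions, and the specific masking objective are assumptions made for illustration.

```python
# Minimal sketch (assumed design, not the paper's reference implementation):
# a transformer encoder over multivariate time series, pre-trained by
# reconstructing randomly masked input values.
import torch
import torch.nn as nn

class TimeSeriesTransformerEncoder(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 3, max_len: int = 512):
        super().__init__()
        # Project each time step (a vector of n_features variables) into d_model.
        self.input_proj = nn.Linear(n_features, d_model)
        # Learnable positional embeddings for up to max_len time steps.
        self.pos_embedding = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Map representations back to the input space for reconstruction.
        self.output_proj = nn.Linear(d_model, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        z = self.input_proj(x) + self.pos_embedding[:, : x.size(1)]
        z = self.encoder(z)                      # per-time-step representations
        return self.output_proj(z)               # reconstructed input values

def masked_reconstruction_loss(model, x, mask_ratio=0.15):
    # Hide a random subset of input values and score reconstruction only there.
    mask = torch.rand_like(x) < mask_ratio
    x_masked = x.masked_fill(mask, 0.0)
    recon = model(x_masked)
    return ((recon - x)[mask] ** 2).mean()

# Pre-train on unlabeled (or simply the existing) series; afterwards the encoder's
# representations can feed a task-specific head for regression or classification.
model = TimeSeriesTransformerEncoder(n_features=6)
x = torch.randn(8, 128, 6)                       # 8 series, 128 steps, 6 variables
loss = masked_reconstruction_loss(model, x)
loss.backward()
```

After this unsupervised stage, reusing the pre-trained encoder weights and fine-tuning with labels is one way such a scheme could yield the downstream gains the abstract reports, even when no extra unlabeled data are available.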