Efficient multi-lingual language model fine-tuning
Written: 10 Sep 2019 by Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, and Marcin Kardas
Multi-lingual language model Fine-Tuning (MultiFiT) is an extension of
ULMFiT (see below) designed to enable practitioners to efficiently train
and fine-tune language models in their language of choice. Stay tuned! A
full blog post and pretrained models are coming soon. In the meantime,
the following links should help you get started.