🎄 Just in time for the magical week (*) 🎅: LightOn and Answer.AI have just released a new model called ModernBERT.
ModernBERT is a drop-in replacement for BERT-like models, available in both a base (139M parameters) and a large (395M parameters) size.
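As a quick illustration of the drop-in idea, here is a minimal masked-language-modeling sketch using the Hugging Face transformers library. The hub IDs answerdotai/ModernBERT-base and answerdotai/ModernBERT-large are my assumption from the release; check the model cards for the exact identifiers, and note that ModernBERT needs a recent transformers version.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed hub ID from the release announcement; requires a recent
# transformers release that includes ModernBERT support.
model_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Use it exactly as you would a classic BERT checkpoint.
inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction at the masked position.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_idx].argmax().item()
print(tokenizer.decode([predicted_id]))  # e.g. "capital"
```

Swapping the model ID in place of, say, bert-base-uncased should be all an existing BERT pipeline needs.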
To get a sense of how important the BERT model and its derivatives are, here are some figures:
- Of the 1.2 million models uploaded to Hugging Face since its inception, Google's original BERT is the second most downloaded model, with more than 65 million downloads last month.
- Among the 30 most downloaded models, BERT and its derivatives accounted for 325 million downloads last month (the sketch below shows one way to query such figures).
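For the curious, here is a small sketch of how such figures can be pulled with the huggingface_hub client. Download counts move constantly, so treat the numbers above as a December 2024 snapshot.

```python
from huggingface_hub import list_models

# The 30 most downloaded models on the Hub, ranked by downloads
# over the trailing 30 days.
for m in list_models(sort="downloads", direction=-1, limit=30):
    print(f"{m.id}: {m.downloads:,} downloads last month")
```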
We hope the community likes ModernBERT and builds applications that are smarter 🧠 , better 🛰️ , faster 🚀 , and handle longer context 🦒 .
Here is the preprint:
Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference by Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli
Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Despite being the workhorse of numerous production pipelines, there have been limited Pareto improvements to BERT since its release. In this paper, we introduce ModernBERT, bringing modern model optimizations to encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native 8192 sequence length, ModernBERT models exhibit state-of-the-art results on a large pool of evaluations encompassing diverse classification tasks and both single and multi-vector retrieval on different domains (including code). In addition to strong downstream performance, ModernBERT is also the most speed and memory efficient encoder and is designed for inference on common GPUs.
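To make the single-vector retrieval claim concrete, here is an illustrative sketch that mean-pools ModernBERT token embeddings into sentence vectors. This untuned pooling is my own simplification, not the trained retrieval pipelines evaluated in the paper, and it again assumes the answerdotai/ModernBERT-base hub ID.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "answerdotai/ModernBERT-base"  # assumed hub ID, see model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

def embed(texts):
    """Mean-pool token embeddings into one L2-normalized vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, tokens, dim)
    mask = batch.attention_mask.unsqueeze(-1)          # zero out padding
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return torch.nn.functional.normalize(pooled, dim=-1)

docs = embed(["ModernBERT natively handles 8192-token inputs.",
              "Cats sleep most of the day."])
query = embed(["long context encoder"])
print(query @ docs.T)  # cosine similarities, higher = more relevant
```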
See also
- Announcement on the LightOn blog
- LightOn technical blog post: Finally, a Replacement for BERT
- Answer.AI blog post
Models:
The ModernBERT models were trained smoothly on the Orange Business cloud ⛅ in cooperation with Hewlett Packard Enterprise.
(*) The magical weeks are generally the last two weeks of December: Marie Curie discovered radium (Dec 21st), the Wright brothers made their first flight (Dec 17th), Brattain and H. R. Moore demonstrated the transistor (Dec 23rd), and Charles Babbage invented the calculating machine (Dec 26th).