With LIMA, Meta’s AI researchers introduce a language model that, in some of their test scenarios, approaches the output quality of GPT-4 and Bard, despite being fine-tuned on relatively few examples.
LIMA stands for “Less is More for Alignment,” and the name hints at the idea behind the model: it is intended to show that, given an extensively pre-trained AI model, a small number of examples is sufficient to achieve high-quality results.
“Few examples” in this case means that Meta manually selected 1,000 diverse prompts and their responses from sources such as other research papers, WikiHow, StackExchange, and Reddit.
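To make the approach concrete, here is a minimal sketch of how such a small, curated instruction-tuning set might be assembled for standard supervised fine-tuning. The class and function names, the data format, and the sample entries are illustrative assumptions, not Meta’s actual code or data:

```python
# Hypothetical sketch of preparing a small curated dataset for
# supervised fine-tuning (next-token prediction on prompt + response).
# All names and sample entries below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Example:
    source: str   # where the pair was collected, e.g. "wikihow"
    prompt: str
    response: str

def to_training_text(ex: Example, eot: str = "<|endoftext|>") -> str:
    """Concatenate a prompt and its response into one training
    sequence, terminated by an end-of-text marker."""
    return f"{ex.prompt}\n{ex.response}{eot}"

# A tiny stand-in for the roughly 1,000 hand-selected examples.
dataset = [
    Example("wikihow",
            "How do I sharpen a kitchen knife?",
            "Hold the blade at a shallow angle against a whetstone "
            "and draw it across the stone in smooth strokes."),
    Example("stackexchange",
            "Why does Python use indentation for blocks?",
            "Indentation makes block structure explicit and enforces "
            "a consistent, readable layout."),
]

# Formatted sequences ready to feed into a fine-tuning loop.
texts = [to_training_text(ex) for ex in dataset]
print(len(texts))
```

The point of the sketch is only the data side of the recipe: with a strong pre-trained base model, the fine-tuning stage itself can remain a standard supervised loss over sequences like these.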