LoRA: Low-Rank Adaptation of Large Language Models

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! 🤖

🌟 We’re excited to announce that you can now create custom personal assistants that run directly on your GPUs! ChatLLaMA uses a LoRA adapter, trained on Anthropic’s HH dataset, to model seamless conversations between an AI assistant and users. Plus, an RLHF version of the LoRA is coming soon! 🔥

📚 Know any high-quality dialogue-style datasets? Share them with us, and we’ll train ChatLLaMA on them!

🌐 ChatLLaMA is currently available for the 30B and 13B models, with the 7B version coming soon.

🤔 Have questions or need help setting up ChatLLaMA? Join our Discord group and ask! Let’s revolutionize AI-assisted conversations together! 🌟

Disclaimer:
— trained for research,
— no foundation model weights,
— the post was run through GPT-4 to make it more coherent.
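For reference, here is a minimal sketch of how a LoRA adapter like this is typically attached to a LLaMA base model with Hugging Face transformers and peft. The model paths, prompt format, and generation settings below are placeholders and assumptions, not official ChatLLaMA identifiers.

```python
# Minimal sketch (not the official ChatLLaMA loader): apply a LoRA adapter to a
# LLaMA base model using transformers + peft. The repo paths are hypothetical.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-13b-hf"          # hypothetical: your converted LLaMA weights
LORA_ADAPTER = "path/to/chatllama-13b-lora"  # hypothetical: the ChatLLaMA LoRA weights

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",  # spread layers across available GPUs
)
model = PeftModel.from_pretrained(model, LORA_ADAPTER)  # attach the LoRA weights
model.eval()

# Assumed HH-style dialogue prompt: alternating Human/Assistant turns.
prompt = "Human: How do I brew a good cup of coffee?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```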