We are publishing a detailed study of Gopher, a 280-billion-parameter transformer language model, alongside a study of the ethical and social risks associated with large language models, and a paper investigating a new architecture with better training efficiency.