After months of rumors, Meta has officially launched Code Llama, a large language model designed for coding tasks. Code Llama builds on the recently released Llama 2 and has been further trained on 500 billion tokens of code and code-related data.
Meta is releasing Code Llama in three sizes: 7 billion, 13 billion, and 34 billion parameters. The smaller 7B and 13B models are optimized for low-latency scenarios like real-time code completion, while the 34B model delivers the best overall results but requires more computing power.
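For a sense of how code completion with such a model works in practice, the sketch below builds a fill-in-the-middle prompt, where the model is given the code before and after a gap and asked to fill it in. The sentinel token strings and the helper function are assumptions for illustration, not details taken from this article; the exact format depends on the model and tokenizer used.

```python
# Sketch: constructing a fill-in-the-middle prompt for a code completion model.
# The "<PRE>", "<SUF>", and "<MID>" sentinel strings are assumed placeholders;
# a real deployment would use the special tokens defined by the model's tokenizer.

def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code before and after the gap so the model
    generates the missing middle after the final sentinel."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to complete the body of a function,
# given the code that surrounds the cursor position.
prompt = build_infill_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
print(prompt)
```

An editor integration would send this prompt to the model and insert the generated text at the cursor, which is why the low-latency 7B and 13B variants suit this use case.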
In addition, Meta has fine-tuned two variants of Code Llama. The Python version is specialized for Python code generation tasks, while the Instruct version is tuned to produce safer, more helpful responses to natural language prompts.
Meta says Code Llama has the potential to boost productivity for professional developers and lower the barrier to entry for novice programmers. However, the company also acknowledges the risks associated with large language models and believes an open source approach is the best way to promote safety.
Programmers already use LLMs to assist with a variety of tasks. The goal is to make developer workflows more efficient so they can focus on the most human-centered aspects of their work.
Code Llama is designed to support software engineers across all fields, including research, industry, open source projects, NGOs, and enterprises, though many use cases remain beyond what it currently covers.
Code Llama is now available for both research and commercial use under an open license. Meta hopes the release will bring fresh momentum to the field of AI code assistants, while also helping the community assess the model's capabilities and vulnerabilities.