Meta confirmed on April 6 that it will release open-source versions of its upcoming Mango multimedia generator and Avocado large language model, extending its open-weight strategy to the next generation of frontier models.
For the broader landscape, see our open-source AI models 2026 creator reference.
What Happened
According to SiliconANGLE reporting on an Axios scoop, Meta is developing open-source editions of both Mango (an image and video generation model) and Avocado (an LLM focused on coding and reasoning). The models are being built inside Meta Superintelligence Labs under AI chief Alexandr Wang.
The open-source versions will launch "eventually," after the proprietary releases, with some capabilities removed. Possible limitations include scaled-down parameter counts, the removal of mixture-of-experts components, skipped post-training rounds, and restrictions on generating cybersecurity-related code.
Why It Matters
Mango is the model creative professionals should watch. A multimedia generator from Meta that handles both image and video generation would compete directly with offerings from Stability AI, Runway, and Pika. An open-source release would let developers fine-tune the model for specific creative workflows, from product photography to social media video, without depending on API access or subscription pricing.
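If the open weights follow the pattern of Meta's Llama releases on Hugging Face, local use could look something like the sketch below. This is purely illustrative: Mango has not shipped, the checkpoint name is a placeholder, and the assumption that it would plug into the standard diffusers DiffusionPipeline interface is ours, not Meta's.

```python
# Hypothetical sketch: running an open-weight text-to-image model locally
# with Hugging Face diffusers. The model ID below is a placeholder; Mango
# has not been released and its actual interface is unknown.
import torch
from diffusers import DiffusionPipeline

# Load the (hypothetical) open-weight checkpoint from the Hugging Face Hub.
pipe = DiffusionPipeline.from_pretrained(
    "meta-llama/mango-placeholder",  # placeholder repo name, not a real model
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Generate an image for a product-photography prompt entirely on local
# hardware, with no API calls or subscription required.
image = pipe(
    prompt="studio product shot of a ceramic mug on a marble countertop",
    num_inference_steps=30,
).images[0]
image.save("product_shot.png")
```

Fine-tuning for a specific workflow would most likely follow the same community pattern used for today's open-weight diffusion and language models: layering lightweight LoRA adapters on top of the base checkpoint rather than retraining the full model.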
Meta's open-source AI strategy has produced models like Llama that reshaped the LLM landscape. Applying the same approach to multimedia generation could accelerate creative AI tooling across the ecosystem.
Key Details
- Mango: Image and video generation model
- Avocado: LLM for coding and reasoning
- Open-source timeline: After proprietary launch, timing unspecified
- Limitations: Some capabilities will be removed from public versions
- Team: Meta Superintelligence Labs, led by Alexandr Wang
What to Do Next
No release date has been announced for either model. Follow Meta's AI blog for updates. For context on the current open-source model landscape, see our analysis of how Gemma 4 rewrites the open-source playbook.