At CES 2026, Nvidia introduced Alpamayo, a family of open-source AI models designed to improve autonomous vehicles' decision-making in complex driving scenarios. At its core is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model that enables autonomous vehicles to emulate human-like reasoning.
Alpamayo 1 helps autonomous vehicles handle challenging edge cases by breaking problems into manageable steps, exploring multiple possibilities, and selecting the safest course of action. Nvidia CEO Jensen Huang said Alpamayo equips autonomous vehicles to navigate complex environments safely, interpret rare scenarios, and explain the rationale behind their driving decisions.
Developers can access the underlying code of Alpamayo 1 on Hugging Face and adapt it: distilling it into more efficient versions for in-vehicle deployment, using it to train simpler driving systems, or building auxiliary tools such as auto-labeling systems for video data. In addition, Nvidia's Cosmos platform can generate synthetic data for training Alpamayo-based autonomous vehicle applications, combining real and synthetic datasets for testing and validation.
Source: TechCrunch