Multiverse Computing Unveils Compressed AI Model to Enhance Accessibility

This article was generated by AI and cites original sources.

Spanish startup Multiverse Computing has introduced a new version of its HyperNova 60B AI model, built with CompactifAI, its quantum-computing-inspired compression technology. By releasing the compressed model for free on Hugging Face, Multiverse Computing aims to make cutting-edge AI more practical for companies to deploy.

The updated HyperNova 60B 2602 model, at 32GB, is significantly smaller than the original model it compresses, OpenAI’s GPT-OSS-120B, while maintaining comparable accuracy and performance. The update also adds support for tool calling and agentic coding, features aimed at reducing inference costs.

Multiverse Computing’s HyperNova 60B model has outperformed competitors such as Mistral AI’s Mistral Large 3, showcasing the company’s technical expertise. Both Multiverse Computing and Mistral AI are European companies expanding globally with enterprise clientele; Multiverse counts Iberdrola, Bosch, and the Bank of Canada among its customers.

As Multiverse Computing continues to innovate, the company is reportedly on track to raise a substantial funding round at a valuation potentially exceeding €1.5 billion. The startup’s focus on advancing AI through compression technology underscores its position as a key player in the industry.

Source: TechCrunch