Multiverse Computing, a Spanish startup, has introduced compressed AI models that can run entirely on-device, with no reliance on external compute infrastructure. The move speaks to growing concern about financial instability in AI supply chains, the same concern that led VC firm Lux Capital to advise companies to get compute-capacity commitments in writing.
Multiverse compresses models from prominent AI labs, including OpenAI, Meta, DeepSeek, and Mistral AI, and has launched both an app, CompactifAI, and an API portal to widen access to them. The CompactifAI app uses the company's quantum-inspired compression technology to offer an AI chat tool that works offline, running locally so that no data leaves the user's device.
This development is a significant step toward bringing AI capabilities to the edge, reducing privacy concerns by keeping data processing local. Device compatibility remains a key constraint, however: older devices whose hardware cannot run the compressed models fall back to cloud-based models via the API. Multiverse's automated system, Ash Nazg, manages the handoff between local and cloud processing so the transition is invisible to the user.
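The local-versus-cloud handoff described above can be sketched as a simple capability check with an API fallback. This is a hypothetical illustration only: the function names, the RAM threshold, and the routing logic are assumptions for the sketch, not Multiverse's actual implementation.

```python
# Hypothetical sketch of a local-first inference router with a cloud
# fallback, as described in the article. All names and the RAM threshold
# are illustrative assumptions, not Multiverse's real API.

MIN_RAM_GB = 8  # assumed minimum memory to host a compressed model locally


def pick_backend(device_ram_gb: float, has_local_model: bool) -> str:
    """Return 'local' when the device can host the compressed model,
    otherwise 'cloud' (the article's API fallback for older devices)."""
    if has_local_model and device_ram_gb >= MIN_RAM_GB:
        return "local"
    return "cloud"


def run_inference(prompt: str, device_ram_gb: float, has_local_model: bool) -> str:
    backend = pick_backend(device_ram_gb, has_local_model)
    if backend == "local":
        # On-device path: the prompt never leaves the user's control.
        return f"[local] {prompt}"
    # Underpowered devices are routed to the hosted model via the API.
    return f"[cloud] {prompt}"


print(run_inference("hello", device_ram_gb=16, has_local_model=True))
print(run_inference("hello", device_ram_gb=4, has_local_model=True))
```

A capable device takes the `[local]` path; one below the threshold (or without a downloaded model) is routed to `[cloud]`.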
Source: TechCrunch