AI models are now venturing into self-learning territory, generating their own coding problems to sharpen their own abilities. As WIRED reports, a system called the Absolute Zero Reasoner (AZR) illustrates this new approach to training AI.
The core concept behind AZR is a large language model that poses challenging coding problems to itself, attempts to solve them, and refines its approach based on its successes and failures. By repeating this loop, the model iteratively improves its reasoning and coding capabilities without relying on externally supplied training problems.
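The loop described above can be sketched in miniature. This is an illustrative toy, not AZR's actual implementation: the "proposer" emits tiny arithmetic programs, a code executor runs them to obtain a verifiable answer, and the "solver" is a stand-in whose single `skill` parameter is nudged by the reward signal, standing in for the reinforcement-learning update the real system would apply to the model itself. All function names and parameters here are hypothetical.

```python
import random

def propose_task(rng):
    """Proposer role: emit a coding task as (program, arguments).
    The ground truth comes from running the program, not from a human."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    program = f"def f(x, y):\n    return x * y + {rng.randint(0, 3)}"
    return program, (a, b)

def ground_truth(program, args):
    """Code executor: run the self-proposed program to get the verifiable answer."""
    scope = {}
    exec(program, scope)  # safe here: we only execute our own generated string
    return scope["f"](*args)

def attempt_solution(truth, skill, rng):
    """Solver role: a toy 'model' that answers correctly with probability `skill`."""
    return truth if rng.random() < skill else None

def training_loop(steps=200, seed=0):
    """Reward verified answers; the skill bump stands in for an RL update."""
    rng = random.Random(seed)
    skill = 0.2
    for _ in range(steps):
        program, args = propose_task(rng)
        truth = ground_truth(program, args)
        answer = attempt_solution(truth, skill, rng)
        reward = 1.0 if answer == truth else 0.0
        skill = min(1.0, skill + 0.02 * reward)  # toy learning rule
    return skill

print(f"solver skill after self-play training: {training_loop():.2f}")
```

The key design point mirrored from the source: the feedback signal is generated by executing code, so the model can grade its own attempts without human-curated answers.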
Research by Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that this self-questioning method yields significant gains in coding and reasoning skills: the model outperformed models trained solely on human-curated data.
According to the researchers, this approach mirrors how humans learn by going beyond mere imitation: the model asks its own questions and ultimately surpasses its initial training.
This self-learning paradigm, often referred to as "self-play," opens new avenues for AI development, suggesting that future systems could continuously improve their own capabilities.
Source: WIRED