Anthropic, a prominent AI company, has accused DeepSeek and two other Chinese AI firms of misusing its Claude AI model to improve their own products. The allegations, detailed in a recent announcement, describe what Anthropic calls ‘industrial-scale campaigns’ involving fraudulent account creation and millions of exchanges with Claude. Anthropic considers distillation, the practice of training a smaller AI model on the outputs of a more advanced one, legitimate in itself, but warns that it can be misused: unauthorized distilled models may lack crucial safeguards, posing risks if integrated into military, intelligence, or surveillance systems.
DeepSeek, known for its efficient models, reportedly engaged in more than 150,000 exchanges with Claude, focusing on improving reasoning capabilities and generating ‘censorship-safe alternatives’ to politically sensitive queries. OpenAI has raised similar concerns, accusing DeepSeek of exploiting capabilities developed by U.S. labs. Such unauthorized use of AI models carries significant ethical and security implications for the tech industry.
Source: The Verge