Category: AI

  • Researchers Uncover Vulnerability in AI Language Models: Syntax Hacking Exploits Weaknesses in Comprehension

    This article was generated by AI and cites original sources.

    Recent research by MIT, Northeastern University, and Meta has revealed a vulnerability in large language models (LLMs) like ChatGPT, where models may prioritize sentence structure over meaning when processing questions. This discovery sheds light on potential weaknesses that prompt injection attacks may exploit, highlighting the importance of understanding how AI models interpret instructions.

    The study, led by Chantal Shaib and Vinith M. Suriyakumar, demonstrated that LLMs can sometimes rely on grammatical patterns alone, leading to responses based on syntax rather than semantics. By crafting prompts with nonsensical words but preserved structures, the researchers observed models generating contextually relevant yet factually incorrect answers.
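    The study's method, as summarized above, can be illustrated with a toy probe generator: keep a familiar grammatical frame and swap the content words for invented ones. All prompts below are illustrative, not the researchers' actual materials.

```python
# Build "syntax probe" prompts: preserve the grammatical frame of a
# familiar question while replacing content words with nonsense tokens.
# A model relying on syntactic shortcuts answers both kinds of prompt
# in the same confident register; a robust model declines the nonsense.

TEMPLATE = "What is the capital of {noun}?"

REAL_NOUNS = ["France", "Japan"]
NONSENSE_NOUNS = ["Blorvia", "Quandix"]  # invented words, no real referent

def build_probes(template, nouns):
    """Fill the fixed grammatical frame with each candidate noun."""
    return [template.format(noun=n) for n in nouns]

control = build_probes(TEMPLATE, REAL_NOUNS)      # answerable questions
probes = build_probes(TEMPLATE, NONSENSE_NOUNS)   # structure-only questions

print(probes[0])  # -> "What is the capital of Blorvia?"
```

Comparing a model's behavior on `control` versus `probes` separates answers driven by meaning from answers driven by sentence structure alone.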

    This phenomenon underscores the complexity of language understanding in AI systems, showcasing how syntactic shortcuts can overshadow semantic comprehension, especially in scenarios where patterns align closely with training data domains. These insights will be presented at the upcoming NeurIPS conference, offering valuable implications for enhancing AI safety and robustness.

    Source: Ars Technica

  • OpenAI Accelerates ChatGPT Enhancements as Competitors Gain Ground

    This article was generated by AI and cites original sources.

    OpenAI, a prominent player in the AI field, is responding to increased competition from companies like Google and Anthropic. CEO Sam Altman has declared a ‘code red’ and directed efforts towards enhancing their flagship product, ChatGPT. This move signals a shift from OpenAI’s previous dominance in the AI landscape.

    Altman’s memo, as reported by the Wall Street Journal and The Information, outlines a strategic refocus on ChatGPT. Initiatives such as ads, shopping and health agents, and the Pulse personal assistant will be delayed to prioritize ChatGPT improvements. These enhancements aim to boost speed, reliability, personalization, and the chatbot’s knowledge base.

    To expedite development, a daily call for ChatGPT enhancements has been instituted, with Altman encouraging team transfers as needed. This sense of urgency underscores OpenAI’s significant investments in growth and its quest for future profitability.

    Google’s advancements in AI, particularly with the success of tools like the Nano Banana image model and the Gemini 3 AI model, have prompted OpenAI’s reactive measures. The intensifying competition highlights the dynamic nature of the AI industry and the imperative for continuous innovation.

    Source: The Verge

  • Amazon’s AI Chatbot Rufus Boosts Black Friday Sales with Impressive Adoption

    This article was generated by AI and cites original sources.

    Amazon’s AI chatbot, Rufus, demonstrated remarkable effectiveness in driving sales on Black Friday, as reported by market intelligence firm Sensor Tower. In the U.S., sessions that ended in a purchase rose 100% when Rufus was used, compared with a 20% increase for sessions without it. Sessions with Rufus that led to purchases also saw a 75% day-over-day rise, outperforming the 35% increase for sessions without Rufus.

    The widespread adoption of Rufus is evident in the data, with growth in website sessions involving the AI chatbot outpacing growth in overall website sessions during the shopping event. Rufus, initially introduced in beta in early 2024 and later made available to all U.S. customers, aids Amazon shoppers in product discovery, recommendations, and comparisons.

    Black Friday witnessed a surge in AI usage for holiday shopping, with Adobe Analytics reporting an 805% year-over-year increase in AI traffic to U.S. retail sites. This surge reflects consumers’ growing reliance on generative AI chatbots for deal hunting and product research, particularly in popular Black Friday categories like electronics, video games, appliances, toys, personal care items, and baby products.

    Source: TechCrunch

  • Arcee AI Unveils Trinity Models, Challenging Chinese Dominance in Open Source AI

    This article was generated by AI and cites original sources.

    Arcee AI, a U.S. startup, has unveiled the Trinity Mini and Trinity Nano Preview, the first models in its new ‘Trinity’ family of open-source Mixture-of-Experts (MoE) models. These models, released under the Apache 2.0 license, represent a significant shift in the open-source AI domain, which has been dominated by Chinese labs like Alibaba and Baidu.

    Trinity Mini, with 26 billion parameters, and Trinity Nano Preview, a 6 billion parameter model, showcase Arcee’s innovative Attention-First MoE architecture, emphasizing stability and training efficiency. Trinity Mini’s performance on benchmarks like SimpleQA and BFCL V3 has been notable, demonstrating competitiveness with larger models.

    Both Trinity models are available for free download on Hugging Face, empowering developers to modify and fine-tune them to their requirements. Arcee’s strategic focus on model sovereignty and end-to-end training reflects a commitment to reshaping the U.S. open-source AI landscape, challenging the dominance of Chinese models.

    With Trinity Large, a 420 billion parameter model set to launch in January 2026, Arcee aims to further establish itself as a key player in frontier-scale open-source AI models.

    Source: VentureBeat

  • DeepSeek Unveils Efficient AI Models with Sparse Attention Breakthrough

    This article was generated by AI and cites original sources.

    Chinese AI startup DeepSeek has announced two new AI models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, which introduce a novel architectural innovation called DeepSeek Sparse Attention (DSA). DSA significantly reduces computational costs when processing long documents and complex tasks by identifying relevant context portions, leading to a 70% reduction in inference costs compared to previous models.
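    The summary above omits implementation details, and DSA itself relies on learned selection inside the model, but the basic idea of attending only to the top-k most relevant context positions can be sketched in a few lines (illustrative NumPy, not DeepSeek's code):

```python
import numpy as np

def sparse_attention(query, keys, values, k=4):
    """Attend only to the top-k highest-scoring positions: an
    illustrative stand-in for the selection step in sparse attention."""
    scores = keys @ query                        # relevance of each position
    top = np.argsort(scores)[-k:]                # indices of the k best matches
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                     # softmax over selected positions only
    return weights @ values[top]                 # weighted sum of their values

rng = np.random.default_rng(0)
q = rng.normal(size=16)                          # one query vector
K = rng.normal(size=(1000, 16))                  # 1000-position "context"
V = rng.normal(size=(1000, 16))
out = sparse_attention(q, K, V, k=32)            # attends to 32 of 1000 positions
print(out.shape)  # -> (16,)
```

In real systems the savings come from skipping the attention computation for unselected positions at long context lengths; the sketch only shows the select-then-attend shape.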

    DeepSeek’s technical report highlights that the new models support context windows of 128,000 tokens, enabling efficient analysis of extensive documents, codebases, and research papers. Notably, DeepSeek-V3.2-Speciale has excelled in international competitions, showcasing its capabilities in mathematics, coding, and reasoning tasks.

    Additionally, DeepSeek’s models incorporate ‘thinking in tool-use,’ allowing seamless problem-solving while utilizing external tools without losing reasoning context. By training on synthetic tasks and leveraging real-world tools, DeepSeek has expanded the boundaries of AI capabilities.

    Departing from industry norms, DeepSeek has adopted an open-source approach, offering its cutting-edge models under the MIT license. This strategic move challenges the proprietary licensing model, potentially disrupting the AI business landscape with free access to high-performance AI systems.

    Despite facing regulatory challenges in Europe and America regarding data privacy and export controls, DeepSeek’s innovation and open-source strategy signal a new era in AI development and deployment.

    Source: VentureBeat

  • OpenAI Faces Legal Scrutiny Over Deletion of Allegedly Pirated Book Datasets

    This article was generated by AI and cites original sources.

    OpenAI, a prominent player in the AI landscape, is facing legal pressure following the deletion of book datasets that have sparked controversy. The datasets, known as ‘Books 1’ and ‘Books 2,’ were removed before the release of ChatGPT in 2022. These datasets, allegedly sourced from Library Genesis (LibGen), have put OpenAI in the crosshairs of a class-action lawsuit from authors who claim their works were used without permission.

    While OpenAI initially cited ‘non-use’ as a rationale for deleting the datasets, subsequent legal developments have raised questions about the true motives behind this action. Authors have pushed for transparency, leading to a court order for OpenAI to disclose internal communications related to the dataset deletion, including discussions with in-house lawyers and references to LibGen that were previously withheld under attorney-client privilege.

    This legal saga underscores the complexities of data ethics and intellectual property rights in the realm of artificial intelligence. As AI models become more sophisticated and data-intensive, ensuring ethical sourcing and usage of datasets is paramount to prevent legal entanglements and safeguard intellectual property.

    Source: Ars Technica

  • Apple Appoints New AI Chief with Extensive Tech Background as Giannandrea Steps Down

    This article was generated by AI and cites original sources.

    Apple recently announced a change in its AI leadership, with John Giannandrea stepping down from his role as the company’s AI chief. Giannandrea, who had held this position since 2018, will remain as an advisor until spring.

    Replacing Giannandrea is Amar Subramanya, a former Microsoft executive with a 16-year tenure at Google, where he led engineering for the Gemini Assistant. This strategic move brings in expertise from major competitors, positioning Apple to address its AI challenges.

    The transition comes amid Apple’s struggles in the AI domain since the launch of Apple Intelligence in October 2024. Initial reviews of the platform were mixed, with reports of underwhelming performance and inaccuracies in content generation.

    In one incident, Apple Intelligence generated erroneous summaries of news events, falsely attributing actions to individuals. These missteps, along with setbacks in Siri’s overhaul, highlighted Apple’s difficulties in the AI space.

    A Bloomberg investigation shed light on Apple’s AI challenges, revealing that Siri’s features were malfunctioning just before their scheduled release in April. The delayed launch led to legal action from consumers who had been promised advanced AI capabilities.

    During this turbulent period, Giannandrea’s role was reportedly marginalized, indicating internal shifts in AI leadership within Apple.

    Source: TechCrunch

  • Apple Shakes Up AI Leadership as Siri Lags Behind

    This article was generated by AI and cites original sources.

    Apple has announced the departure of its AI chief, John Giannandrea, following challenges with Siri, the company’s AI-powered voice assistant. Giannandrea’s exit comes after delays in Siri’s development earlier this year, prompting a leadership change.

    Amar Subramanya, a former Google executive who recently served at Microsoft’s AI division, will take over as Apple’s vice president of AI. Subramanya’s role will involve overseeing the enhancement of Apple’s AI models, machine learning research, and AI safety and evaluation.

    Giannandrea, known for his work at Google before joining Apple in 2018 to improve Siri’s capabilities, faced scrutiny over the delayed Siri improvements. Apple’s CEO, Tim Cook, reportedly reassigned responsibilities within the AI team due to concerns over the progress under Giannandrea’s leadership.

    Subramanya’s expertise in integrating AI research into consumer products aligns with Apple’s future innovation plans. He will report to Apple’s software SVP Craig Federighi, focusing on advancing Siri’s functionalities. Apple aims to unveil an upgraded Siri next spring, with speculations that Google’s Gemini AI model will power new features.

    Source: The Verge

  • OpenAI’s Strategic Partnership with Thrive Holdings Signals AI Integration in Business Services

    This article was generated by AI and cites original sources.

    OpenAI, a prominent player in the AI industry, recently announced an ownership stake in Thrive Holdings, a private equity investment firm affiliated with Thrive Capital, one of OpenAI’s major investors. This move, although not involving a direct monetary exchange according to sources cited by the Financial Times, signifies a strategic partnership in which OpenAI will offer employees, models, products, and services to Thrive Holdings’ companies.

    The collaboration extends beyond mere services, with potential future returns for OpenAI from Thrive Holdings. This arrangement reflects the interconnected nature of the tech industry, where companies often engage in mutual investments and partnerships.

    Specifically, the partnership aims to leverage AI in IT services and accounting, focusing on enhancing speed, accuracy, and cost efficiency while improving service quality within these sectors. Joshua Kushner, who leads both Thrive Holdings and Thrive Capital, highlighted the transformative potential of AI, envisioning a future where AI becomes an integral tool reshaping traditional business practices.

    The integration of AI into these sectors aligns with the broader industry trend of utilizing AI as a native tool to revolutionize traditional practices. This strategic move underscores the increasing importance of AI in driving operational enhancements and industry transformations.

    Source: The Verge

  • Nvidia Unveils Cutting-Edge AI Models for Autonomous Driving Research

    This article was generated by AI and cites original sources.

    Nvidia, a prominent semiconductor company, has introduced new AI models and infrastructure aimed at advancing autonomous vehicle and robotics research. The company revealed the Alpamayo-R1, an open reasoning vision language model tailored for autonomous driving research, marking a significant milestone in this domain. This innovation, showcased at the NeurIPS AI conference, enables vehicles to analyze both textual information and images simultaneously, enhancing their ability to perceive their surroundings and make informed decisions based on sensory input.

    The Alpamayo-R1 model builds upon Nvidia’s existing Cosmos Reason model, known for its thoughtful decision-making process that precedes actions. Nvidia’s commitment to developing such technology aligns with its goal of supporting companies in achieving level 4 autonomous driving, characterized by complete independence within specific environments and conditions. By imbuing autonomous vehicles with a level of ‘common sense,’ Nvidia aims to enhance their decision-making capabilities, mirroring human-like nuanced driving judgments.

    Complementing the new vision model, Nvidia has made available a comprehensive set of resources on GitHub, collectively known as the Cosmos Cookbook. This repository includes guides, inference tools, and workflows to assist developers in effectively leveraging and training Cosmos models for diverse applications, covering essential aspects such as data preparation, synthetic data generation, and model assessment.

    Source: TechCrunch

  • OpenAI Invests in Thrive Holdings to Accelerate AI Adoption in Business Sectors

    This article was generated by AI and cites original sources.

    OpenAI, a prominent player in the AI industry, has made a strategic investment by acquiring a stake in Thrive Holdings, a company affiliated with one of its major investors, Thrive Capital. Thrive Holdings functions as a private equity firm focused on consolidating businesses in sectors such as accounting and IT services that stand to benefit from AI technology.

    The terms of the deal were not disclosed by either company, but it involves OpenAI integrating its engineering, research, and product teams within Thrive’s portfolio companies to expedite the adoption of AI and enhance operational efficiency. According to CNBC, if these companies succeed, OpenAI’s ownership stake will increase, and it will be remunerated for its contributions.

    This collaboration marks another instance of OpenAI’s strategic investments, as the $500 billion AI company has recently expanded its portfolio to include infrastructure partners like Advanced Micro Devices and CoreWeave. Analysts will be keen to observe whether the businesses under Thrive Holdings can establish sustainable profitability through the utilization of OpenAI’s technology, or if the valuations are merely inflated based on speculative market projections.

    Source: TechCrunch

  • Liquid AI’s LFM2 Blueprint: Empowering Enterprise-Grade On-Device AI Training

    This article was generated by AI and cites original sources.

    Liquid AI, a startup founded by MIT computer scientists, has introduced its Liquid Foundation Models series 2 (LFM2), offering enterprise-grade small-model training that challenges conventional AI limits. The LFM2 architecture emphasizes efficiency and real-time, privacy-preserving AI on various devices, eliminating the need for cloud-only large language models. This approach marks a significant shift towards on-device AI capabilities that balance latency and capability.

    By releasing a detailed technical report, Liquid AI provides a transparent blueprint for training small, efficient models, underscoring predictability, operational portability, and on-device feasibility. The report focuses on practicality, optimizing models for real-world constraints rather than academic benchmarks.

    The training pipeline of LFM2 adopts a structured approach, compensating for model scale through innovative techniques like Top-K knowledge distillation and post-training sequences for reliable behavior. This approach enhances operational reliability and practicality, ensuring models can effectively follow instructions and manage chat flows.

    Moreover, Liquid AI’s multimodal variants, such as LFM2-VL and LFM2-Audio, demonstrate a token-efficient design that enables document understanding, transcription, and multimodal capabilities directly on devices, without the need for extensive GPU resources.

    The LFM2 report outlines a future where enterprise AI architectures blend local and cloud orchestration, leveraging small on-device models for time-critical tasks and larger cloud models for complex reasoning. This hybrid approach offers cost control, latency determinism, governance benefits, and operational resilience.

    For tech leaders, the strategic takeaway is clear: on-device AI is no longer a compromise but a strategic design choice. LFM2 signifies a shift towards reproducible, open, and operationally feasible AI foundations that empower agentic systems to operate anywhere.

    Source: VentureBeat

  • AWS and Visa Collaborate to Enhance AI-Powered Commerce Infrastructure

    This article was generated by AI and cites original sources.

    AWS and Visa have joined forces to introduce blueprints aimed at addressing the current gaps in AI-powered commerce infrastructure. The collaboration aims to simplify the adoption of agent-based commerce for enterprises.

    The partnership centers on making it easier for enterprises to leverage tools that facilitate agent-based payments integration. By listing Visa’s Intelligent Commerce platform on the AWS Marketplace, AWS is providing developers with the frameworks needed to overcome development barriers and securely integrate payment capabilities.

    Through the Visa Intelligent Commerce platform, AWS customers gain access to essential tools like authentication, agent tokenization, and data personalization, enabling seamless connectivity to Visa’s payment infrastructure. This initiative is poised to accelerate innovation for developers and enhance consumer experiences globally.

    The collaboration also involves the publication of blueprints designed to reduce development complexity and accelerate the creation of various agents, from travel booking to retail shopping and B2B payment reconciliation.

    Agent-based commerce presents a new frontier for AI players, with companies introducing AI-powered shopping tools to enhance product discovery and streamline transactions. The introduction of standardized infrastructure and blueprints is set to pave the way for scalable agent-based commerce, revolutionizing the way transactions are managed by agents capable of real-time reasoning and coordination.

    Source: VentureBeat

  • OpenAGI Unveils Lux: An AI Model Designed for Autonomous Computer Control

    This article was generated by AI and cites original sources.

    OpenAGI, a stealth AI startup, has announced the release of its AI model named Lux, which it claims outperforms industry leaders like OpenAI and Anthropic. Led by CEO Zengyi Qin, OpenAGI introduced Lux, designed to autonomously operate computers by interpreting screenshots and executing actions on desktop applications. Lux has achieved an 83.6% success rate on the demanding Online-Mind2Web benchmark, surpassing competitors like OpenAI’s Operator and Anthropic’s Claude Computer Use.

    Unlike traditional language models, Lux is trained with a method called Agentic Active Pre-training, which focuses on action sequences rather than text corpora. By learning from computer screenshots and action sequences, Lux excels at controlling the computer environment, continuously improving through self-exploration.

    Moreover, Lux stands out for its ability to control various desktop applications beyond web browsers, including Slack and Excel. OpenAGI’s partnership with Intel to optimize Lux for edge devices further enhances its appeal for enterprise use, ensuring data security by running locally on devices.

    With safety mechanisms embedded, Lux prioritizes user security, refusing potentially harmful requests like copying sensitive data. The model’s safety features will be crucial as computer-use agents become more prevalent, facing challenges like adversarial attacks.

    OpenAGI’s Lux enters a competitive market, offering superior performance and cost efficiency against well-funded rivals. While Lux’s benchmark success is promising, its real-world reliability remains to be tested, highlighting the gap between controlled tests and practical applications.

    Source: VentureBeat

  • ChatGPT: Three Years of Transforming Business and Technology

    This article was generated by AI and cites original sources.

    OpenAI’s ChatGPT, launched on November 30, 2022, has become a significant force in the realms of business and technology. Initially presented as a conversational model, ChatGPT has risen to prominence, currently holding the top position on Apple’s free app rankings. Its introduction has sparked a wave of generative AI innovations, reshaping industries and prompting discussions on AI’s influence on geopolitics and daily life.

    According to a report by TechCrunch, the transformative nature of ChatGPT has given rise to a new era characterized by uncertainty, with generative AI’s continuous evolution leading to a future where career paths may be unpredictable. While some individuals envision a prosperous AI-centric future, the dynamic nature of generative AI implies that its full potential has yet to be realized.

    Source: TechCrunch

  • Enhancing Enterprise Reliability with Observable AI

    This article was generated by AI and cites original sources.

    In the realm of enterprise AI, the spotlight is on the crucial role of observability in transforming large language models (LLMs) into dependable systems. As highlighted in a recent VentureBeat article, the quest for reliable and accountable AI solutions has brought observability to the forefront, emphasizing its significance in ensuring the trustworthiness of AI-driven enterprise operations.

    Observable AI serves as the missing SRE (Site Reliability Engineering) layer that enterprises need to enhance the robustness and governance of their AI systems. By offering visibility into AI decision-making processes, observability becomes the bedrock of trust, enabling organizations to audit, evaluate, and improve AI outcomes effectively.

    One example features a Fortune 100 bank that encountered misrouted critical cases within its LLM-based loan application classification system. Despite initial impressive benchmark accuracy, the lack of observability led to undetected errors, highlighting the critical importance of transparency and accountability in AI deployments.

    The article underscores the necessity of starting AI projects by defining measurable business outcomes rather than focusing solely on model selection. By aligning AI initiatives with specific business goals and designing telemetry around desired outcomes, enterprises can steer their AI endeavors towards tangible success metrics and operational efficiency.

    The article advocates a structured observability stack for AI systems, akin to the logs, metrics, and traces that microservices rely on: a three-layer telemetry model comprising prompts and context, policies and controls, and outcomes and feedback. This structured approach fosters accountability and enables continuous improvement and performance optimization within AI workflows.
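    As a sketch of how the three layers might map onto a single structured log record (all field names invented for illustration, not taken from the article):

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class AITelemetryRecord:
    # Layer 1: prompts and context fed to the model
    prompt: str
    context_ids: list
    # Layer 2: policies and controls applied to the call
    policy_checks: dict
    model_version: str
    # Layer 3: outcomes and feedback for evaluation loops
    outcome: str
    human_feedback: str = ""
    timestamp: float = field(default_factory=time.time)

rec = AITelemetryRecord(
    prompt="Classify loan application #123",
    context_ids=["doc-9", "doc-17"],
    policy_checks={"pii_filter": "pass"},
    model_version="loan-classifier-v2",
    outcome="route:underwriting",
)
# Each record is one auditable log line covering all three layers.
line = json.dumps(asdict(rec))
```

Emitting one such record per model call gives auditors and evaluators the inputs, the controls in force, and the result in a single place.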

    By applying SRE principles such as Service Level Objectives (SLOs) and error budgets to AI operations, organizations can instill reliability and resilience in their AI workflows. Defining key signals for critical workflows, and automatically rerouting traffic when those signals breach their thresholds, further strengthens AI systems in production.
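    Applied to an LLM workflow, an SLO with an error budget reduces to a running check that flips traffic to a fallback once the budget is spent. A minimal sketch, with invented thresholds:

```python
class ErrorBudget:
    """Track an SLO (e.g. 99% correct routing) over a window and switch
    to a fallback route once the window's error budget is exhausted."""

    def __init__(self, slo=0.99, window=1000):
        self.budget = int(window * (1 - slo))  # allowed failures per window
        self.failures = 0

    def record(self, success: bool) -> str:
        if not success:
            self.failures += 1
        # Breach: stop auto-routing and send cases to human review instead.
        return "auto" if self.failures <= self.budget else "human_review"

budget = ErrorBudget(slo=0.99, window=1000)   # 10 failures allowed
route = "auto"
for ok in [True] * 5 + [False] * 11:          # 11 failures exceeds the budget
    route = budget.record(ok)
print(route)  # -> "human_review"
```

A production version would reset the window over time and page an operator on breach; the point here is only that "reliability" becomes a measurable signal with a defined reaction.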

    In essence, observable AI stands as the linchpin for transforming AI from a mere experiment to a foundational infrastructure within enterprises. With clear telemetry, human oversight loops, and defined success metrics, organizations can scale trust, drive innovation, and deliver reliable AI experiences to customers.

    Source: VentureBeat

  • Anthropic Unveils Multi-Session Claude SDK to Address AI Agent Memory Challenges

    This article was generated by AI and cites original sources.

    Anthropic, a leading AI company, has announced the release of a new multi-session Claude SDK to address the long-standing issue of AI agent memory. Enterprises have long sought to overcome the challenge of agents forgetting instructions or conversations over time, which can hinder their performance.

    The core problem Anthropic aimed to solve was the limited memory of long-running agents, which start each session without recollection of past interactions. To address this, the company devised a two-part strategy within their Agent SDK: an initializer agent to establish the environment and a coding agent to make incremental progress in each session, preserving continuity through artifacts.

    Other companies, such as LangChain, Memobase, and OpenAI, have also explored enhancing agent memory using various frameworks. Anthropic’s innovation seeks to refine its Claude Agent SDK, providing a more robust solution to the memory challenge.

    Enhancing Agent Memory

    Anthropic’s approach focused on overcoming the limitations of existing context management capabilities within the Claude Agent SDK. By incorporating an initializer agent and a coding agent, the company aimed to prevent memory lapses and incomplete tasks, drawing inspiration from effective software engineering practices. Testing tools were integrated into the coding agent to enhance bug identification and resolution.
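    The Claude Agent SDK's internals aren't shown in this summary, but the two-agent pattern described above (initialize once, then make incremental progress each session, with continuity persisted in on-disk artifacts) might look like this in outline; all names are hypothetical:

```python
import json
from pathlib import Path

STATE = Path("progress.json")   # artifact carrying memory across sessions

def initializer_agent(workdir: Path):
    """Run once: set up the environment and seed the progress artifact."""
    workdir.mkdir(exist_ok=True)
    if not STATE.exists():
        STATE.write_text(json.dumps(
            {"done": [], "todo": ["scaffold", "tests", "feature"]}))

def coding_agent():
    """Run per session: read the artifact, do one increment, write it back.
    The agent itself is stateless; continuity lives in the artifact."""
    state = json.loads(STATE.read_text())
    if state["todo"]:
        task = state["todo"].pop(0)
        # ... invoke the model to work on `task` here ...
        state["done"].append(task)
    STATE.write_text(json.dumps(state))
    return state

initializer_agent(Path("app"))
for _ in range(2):              # two separate "sessions"
    state = coding_agent()
print(state["done"])  # -> ['scaffold', 'tests']
```

Because each session begins by reading the artifact, a fresh agent with no memory of prior sessions still resumes exactly where the last one stopped.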

    Future Implications

    While Anthropic’s solution represents a significant advancement in long-running agent technology, the company acknowledged that further research is needed to optimize agent performance across diverse contexts. Experimentation in different tasks beyond web app development will be crucial to validate the solution’s versatility.

    Anthropic’s work in enhancing AI agent memory sets the stage for broader exploration in the AI domain, offering insights that could benefit scientific research, financial modeling, and other complex applications.

    Source: VentureBeat

  • Agent-R1: Revolutionizing Reinforcement Learning for Advanced LLM Agents

    This article was generated by AI and cites original sources.

    Researchers at the University of Science and Technology of China have introduced a new reinforcement learning (RL) framework, named Agent-R1, aimed at enhancing the training of large language models (LLMs) for complex agentic tasks that go beyond traditional domains like math and coding.

    Agent-R1 redefines the RL paradigm to address the challenges of dynamic agentic applications requiring multi-turn interactions and complex reasoning across evolving environments. By extending the Markov Decision Process framework, Agent-R1 expands the model’s state space to encompass historical interactions, introduces stochastic state transitions, and implements a more granular reward system to enhance training efficiency.
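    Schematically, that MDP extension can be illustrated with a toy environment: the state accumulates the full interaction history, tool outcomes make transitions stochastic, and rewards arrive per step rather than only at the end. This is an invented sketch, not Agent-R1 code:

```python
import random

class ToolEnv:
    """Stochastic environment: a tool call may succeed or fail, so the
    next state is not a deterministic function of the chosen action."""

    def step(self, state, action):
        observation = action + (":ok" if random.random() < 0.8 else ":error")
        next_state = state + [action, observation]   # state = full history
        # Granular per-step reward instead of a single end-of-episode score.
        reward = 0.1 if observation.endswith(":ok") else -0.1
        return next_state, reward

env = ToolEnv()
state, total = [], 0.0
for action in ["search(docs)", "read(doc-3)", "answer()"]:
    state, r = env.step(state, action)
    total += r

# After the multi-turn rollout, `state` holds every action and observation,
# giving the trainer both the history-dependent state and a denser signal.
print(len(state))  # -> 6
```

The contrast with a single-turn setup is that here the policy is credited or penalized at each interaction, conditioned on everything that happened before.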

    The new framework enables RL-based LLM agents to excel in multi-step reasoning and dynamic interactions within diverse environments, outperforming traditional single-turn RL frameworks. The core innovation lies in the flexible multi-turn rollout facilitated by the Tool and ToolEnv modules, revolutionizing how agents generate responses and interpret outcomes.

    In testing, Agent-R1 demonstrated significant performance improvements in multi-hop question answering tasks, surpassing baseline methods like Naive RAG and Base Tool Call. The results underscore the potential of RL-trained agents and frameworks like Agent-R1 to empower LLM agents for real-world problem-solving.

    Source: VentureBeat

  • The Battle for AI Regulation: Federal vs. State Oversight

    This article was generated by AI and cites original sources.

    The debate over AI regulation has shifted from the technology itself to a clash between federal and state jurisdictions over who should have the authority to set the rules. The absence of a comprehensive federal AI standard emphasizing consumer safety has led states like California and Texas to introduce bills, such as California’s AI safety bill SB-53 and Texas’s Responsible AI Governance Act, to safeguard residents from AI-related risks.

    However, tech industry players, including established companies and emerging startups from Silicon Valley, are concerned that these state-specific regulations could hinder innovation. Industry representatives warn that such laws might impede the United States’ competitive edge against countries like China.

    Efforts are underway at the federal level to establish a national AI standard or prevent state-level regulations altogether. House lawmakers are exploring avenues like the National Defense Authorization Act to block state AI laws, while a leaked White House executive order supports preempting state initiatives in AI regulation.

    Despite some support for preemption, there is significant pushback in Congress against stripping states of their authority to regulate AI. Lawmakers argue that without a federal standard, blocking state regulations could expose consumers to risks and enable tech companies to operate without adequate oversight.

    Source: TechCrunch

  • Google and OpenAI Limit AI Generation Requests Amid Surging Demand

    This article was generated by AI and cites original sources.

    In response to overwhelming demand, Google and OpenAI have implemented restrictions on the number of AI generation requests allowed for their products, Nano Banana Pro and Sora, respectively.

    Bill Peebles, the head of Sora at OpenAI, announced that free users are now limited to six video generations per day, citing the strain on their GPUs. Peebles did not specify whether these changes are temporary but noted that users can purchase additional generations as needed, indicating a shift towards monetization.

    Meanwhile, Google has reduced the free user limit for Nano Banana Pro from three to two images per day, as observed by 9to5Google. The company warned that these limits may change frequently and without prior notice, especially after popular product launches. Additionally, Google seems to have imposed restrictions on free users’ access to Gemini 3 Pro.

    Source: The Verge