Category: AI

  • Tech Billionaires Invest Millions to Influence AI Regulation in Congressional Race

    This article was generated by AI and cites original sources.

A group of tech billionaires has funded a $125 million super PAC, named Leading the Future, to intervene in the congressional race of former tech executive Alex Bores, who is running for New York’s 12th congressional district. The move comes as Bores and other candidates advocate for stricter AI regulation, prompting a counter-effort from the tech sector to preserve a light-touch approach.

    Bores, a former employee of AI company Palantir, has faced attack ads accusing him of contributing to controversial ICE deportation efforts through his tech work. However, Bores clarified that he left Palantir in 2019 over its ICE involvement. Despite this, the super PAC is targeting him along with other candidates pushing for AI legislation.

    The super PAC’s funding sources include Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, VC firm Andreessen Horowitz, and AI search startup Perplexity. These tech industry leaders are rallying against candidates like Bores to prevent what they perceive as overregulation in the AI space.

    Bores believes that his tech background makes him a prime target for these efforts to influence the regulatory landscape around AI. The clash between tech billionaires and candidates like Bores highlights the ongoing debate over the extent of government intervention in AI development and deployment.

    Source: TechCrunch

  • Google Home Enhances Smart Home Experience with Live Camera Search

    This article was generated by AI and cites original sources.

    Google has announced a series of updates to its Google Home smart home platform, aimed at improving functionality and addressing user concerns. One notable feature in this update is the addition of ‘Live Search’ for connected cameras.

Previously, the Gemini assistant could only answer questions about past recorded events; with this enhancement, it can now interpret live camera feeds. Users can ask questions like ‘Hey Google, is there a car in the driveway?’ and receive real-time answers.

    Additionally, Gemini’s capabilities have been enhanced through updated models, leading to improved accuracy in responses to general inquiries and better performance when playing recently released music. The virtual assistant also now possesses improved contextual understanding, allowing for more precise control over smart devices. For instance, issuing a command such as ‘turn off the kitchen’ will now specifically target the lights in that area rather than all smart devices.

    These updates bring significant improvements to the Google Home experience, enhancing its functionality and responsiveness to user commands.

    Source: The Verge

  • Data Centers Flock to the Arctic Circle to Power AI Workloads

    This article was generated by AI and cites original sources.

    The tech industry is witnessing a surge in the construction of data centers in the Arctic Circle region, particularly in countries like Sweden and Norway. This trend is primarily driven by the growing demand for facilities capable of supporting the training and operation of AI models.

    One of the key factors behind this shift is the scarcity of suitable sites in Europe with the necessary capacity and energy supply to handle the demanding workloads of AI technologies. As data center operators seek locations that can offer ample energy resources at affordable rates, the Arctic Circle has emerged as a prime destination due to its access to cheap and abundant energy sources.

    With more than 50 data centers either under construction or in the pipeline across the Nordic region, the Arctic Circle is quickly becoming a hub for AI-centric computing operations. This strategic move towards optimizing energy efficiency and operational costs signals a departure from the traditional concentration of data centers in major metropolitan areas.

    Source: WIRED

  • Trump Administration Directs Federal Agencies to Discontinue Use of Anthropic’s AI Tools

    This article was generated by AI and cites original sources.

The Trump administration has directed all federal agencies to phase out their use of Anthropic’s AI tools following disputes over military applications of artificial intelligence. The decision came after the Defense Department pressured Anthropic to remove restrictions on military use of its AI.

    In a statement, the President said, “The federal agencies must transition away from Anthropic’s AI tools within the next six months.” This move follows the Defense Department’s designation of Anthropic as a “supply chain risk,” preventing military collaboration with the AI company.

    Defense Secretary Pete Hegseth criticized Anthropic’s stance on limitations, accusing the company of prioritizing Silicon Valley ideology over national security. Anthropic had objected to a proposed deal alteration that would permit broader AI deployment, expressing concerns over potential misuse for lethal autonomous weapons or mass surveillance on U.S. citizens.

While the Pentagon says it currently avoids such applications, Trump administration officials oppose any restrictions on the military’s use of civilian technology.

    Source: WIRED

  • Consumer Backlash: ChatGPT Uninstalls Surge After DoD Partnership Announcement

    This article was generated by AI and cites original sources.

    Recent data reveals a significant shift in consumer behavior following the announcement of OpenAI’s ChatGPT partnership with the Department of Defense (DoD). According to market intelligence provider Sensor Tower, ChatGPT’s app experienced a 295% increase in uninstalls in the U.S. on February 28, far exceeding its typical uninstall rate of 9%. Conversely, competitor Anthropic’s Claude saw a boost in downloads, with a 37% increase on February 27 and a further 51% rise on February 28 after announcing its decision not to partner with the defense department.

    Anthropic’s stance against AI surveillance and autonomous weaponry seemed to resonate with consumers, reflected in the app’s surge to the top spot on the U.S. App Store rankings. In contrast, ChatGPT faced a decline in downloads post-announcement, with a 13% drop on February 28 and an additional 5% decrease on March 1. This shift was mirrored in user ratings, as 1-star reviews for ChatGPT spiked by 775% on February 28, while 5-star reviews declined by 50% during the same period.

The swift consumer response underscores the impact of tech partnerships on user trust and adoption. As users weigh the ethics of AI applications, companies like OpenAI and Anthropic face the challenge of balancing innovation with societal values.

    Source: TechCrunch

  • Deutsche Telekom Unveils AI-Powered Call Assistant for Seamless Communication

    This article was generated by AI and cites original sources.

Deutsche Telekom, the German mobile provider and majority stakeholder in T-Mobile US, has introduced Magenta AI, an AI-powered call assistant for phone calls. Built in partnership with ElevenLabs, the service offers real-time language translation and other functions without requiring additional apps or specific smartphones.

    Users can trigger the assistant by saying ‘Hey Magenta’ during a call, enabling tasks such as language translation, calendar queries, and location searches. This development marks a significant step towards integrating AI seamlessly into everyday phone interactions, eliminating the limitations of previous language translation services.

    The partnership between Deutsche Telekom and ElevenLabs showcases the ongoing evolution of AI technology in telecommunications, offering users enhanced convenience and functionality in their daily communication activities. By providing a hands-free, intuitive AI interface directly within phone calls, Deutsche Telekom’s AI assistant sets a new standard for accessible AI integration in phone services.

    Source: WIRED

  • Anthropic Enhances Claude AI’s Memory Feature to Simplify Switching from Competing Chatbots

    This article was generated by AI and cites original sources.

    Anthropic has introduced updates to its Claude AI, making it more convenient for users to transition from other chatbots. The company has now made Claude’s memory feature accessible to all users, including those on the free plan. Alongside this enhancement, Anthropic has introduced a new prompt and a specialized tool for importing data from rival chatbots, such as OpenAI’s ChatGPT and Google’s Gemini. These improvements empower users to seamlessly transfer the existing data collected by their previous AI to Claude, eliminating the need to re-teach contextual information and history.

Previously reserved for paid subscribers, the memory feature became available to all users in October. Through the ‘settings’ menu under ‘capabilities,’ users can enable memory and use the new importing tool, which has them paste a specific prompt into their former chatbot and feed the generated output back into Claude.

    Anthropic’s enhancement of the memory importing tool aligns with the increasing popularity of Claude, propelled by tools like Claude Code and Claude Cowork. The recent launch of Opus 4.6 and Sonnet 4.6 models further solidifies Anthropic’s position, enhancing coding capabilities and streamlining complex tasks such as spreadsheet management and form completion.

    Notably, Anthropic has garnered attention for its stance against the Pentagon’s pressure to relax AI model constraints, emphasizing boundaries concerning mass surveillance and fully autonomous lethal weapons.

    Source: The Verge

  • Anthropic’s Claude AI Chatbot Faces Service Disruptions Amid Surge in Popularity

    This article was generated by AI and cites original sources.

Anthropic, the company behind the AI chatbot Claude, suffered significant service disruptions on Monday morning, leaving many users unable to access Claude services. The outage primarily affected Claude.ai and Claude Code, causing login failures for most users. Anthropic has acknowledged the problem and is actively working to resolve it.

    The disruption comes amid a surge in Claude’s popularity, fueled by recent attention due to the company’s high-profile negotiations with the Pentagon. Over the weekend, Claude saw a spike in user interest, resulting in a notable rise in the App Store rankings, surpassing competitors like ChatGPT. However, this surge in usage coincided with the outage, posing challenges for both Anthropic and its user base.

Recent conflicts with the U.S. government, particularly a directive from President Donald Trump instructing federal agencies to cease using Anthropic products, have added complexity to the situation. Disputes over safeguards against potential misuse of Anthropic’s AI models led the Department of Defense to consider the company a supply-chain threat, though Anthropic says it has not formally received any notification of that designation.

    As Anthropic continues to address the service disruptions impacting Claude, the incident underscores the critical role of robust infrastructure and contingency plans for AI-powered platforms, especially in times of heightened demand and scrutiny.

    Source: TechCrunch

  • Tech Community Calls for Reconsideration of Anthropic’s Supply Chain Risk Designation

    This article was generated by AI and cites original sources.

Hundreds of tech workers have signed an open letter asking the Department of Defense to reconsider its classification of Anthropic as a ‘supply chain risk.’ The letter, which also asks Congress to review the situation, includes prominent signatories from the tech and venture capital sectors, among them representatives of OpenAI, Slack, IBM, Cursor, and Salesforce Ventures.

The conflict arose when Anthropic, an AI research lab, declined to grant the military unrestricted access to its AI systems, setting two non-negotiable boundaries: no mass surveillance of Americans and no autonomous weapons operating without human oversight. Despite the DOD’s assurance that it had no intention of engaging in such activities, disagreements over vendor constraints persisted.

    Following Anthropic CEO Dario Amodei’s refusal to reach an agreement with Defense Secretary Pete Hegseth, the administration issued a directive to halt federal agencies’ utilization of Anthropic’s technology after a six-month transition period. Subsequently, Hegseth proceeded to label Anthropic a supply chain risk, a designation typically assigned to foreign adversaries, potentially barring the AI firm from collaborations involving Pentagon-affiliated entities.

    However, this designation necessitates a comprehensive risk evaluation and congressional notification before military partners can sever ties with Anthropic. In response, Anthropic has criticized the designation as ‘legally unsound’ and vowed to contest any imposed supply chain restrictions.

    Source: TechCrunch

  • Alibaba’s Qwen3.5-9B: Smaller AI Models Outperform Larger Rivals

    This article was generated by AI and cites original sources.

    Alibaba’s latest release, the Qwen3.5 Small Model Series, has made a significant impact in the AI sector. This series, which includes models like Qwen3.5-9B, has outperformed OpenAI’s gpt-oss-120B while being significantly smaller in size. The key to this success lies in a hybrid architecture that combines Gated Delta Networks and sparse Mixture-of-Experts, enabling higher throughput and lower latency.

    These models are natively multimodal, showcasing a level of visual understanding previously unseen in models of their size. Benchmark data reveals exceptional performance across various tasks, from visual reasoning to mathematical prowess, positioning the Qwen3.5 series as a notable development in the AI landscape.

    Moreover, the release of these models under the Apache 2.0 license is a positive step for the open ecosystem, allowing for commercial use, modification, and distribution without royalty payments. This move enhances accessibility and fosters innovation in the AI community.

    Enterprise applications of the Qwen3.5 series span a wide range of functions, from visual workflow automation to real-time edge analysis. However, teams must be mindful of operational challenges that come with deploying small-scale models, such as the risk of a ‘Hallucination Cascade’ in multi-step workflows.

    The Qwen3.5 series represents a shift towards localized deployment of powerful AI models, enabling organizations to streamline tasks that previously relied on cloud-based solutions.

    Source: VentureBeat

  • Nvidia Invests $4 Billion to Advance AI with Photonics Technology

    This article was generated by AI and cites original sources.

    Nvidia has announced a significant $4 billion investment split evenly between Lumentum and Coherent, two companies focused on developing photonics technology. This investment aims to enhance the capabilities of data centers that support advanced AI applications.

    Photonics technologies, such as optical transceivers, circuit switches, and lasers, play a crucial role in facilitating high-speed data transfer over long distances. This technology can revolutionize energy efficiency, data speeds, and bandwidth in upcoming AI-focused data centers. Following Nvidia’s strategic acquisition of Mellanox in 2020, the investment in Lumentum and Coherent aims to further bolster NVLink and improve data throughput between GPUs.

    The multi-billion dollar agreements with Lumentum and Coherent include purchase commitments, future capacity access rights, and support for research and development initiatives to drive innovation in laser components and optical networking products. This investment comes as the demand for increased bandwidth in AI data centers is escalating due to the rise of advanced AI applications like Anthropic’s Claude Cowork and Microsoft’s Copilot Tasks.

Recognizing the significance of photonics in AI advancement, other players, such as DARPA and Nvidia’s competitor AMD, are also exploring the domain. DARPA has solicited research proposals on photonic computing for AI applications, while AMD acquired Enosemi, a silicon photonics startup, to accelerate optics innovation for its AI ecosystem.

    Source: The Verge

  • Supreme Court Declines to Hear Case on Copyright of AI-Generated Art

    This article was generated by AI and cites original sources.

    The US Supreme Court has decided not to review a case regarding the copyrightability of AI-generated art, as reported by The Verge. The court’s decision follows appeals by computer scientist Stephen Thaler from Missouri, challenging a previous ruling that denied copyright protection for AI-generated artworks.

In 2019, Thaler’s attempt to copyright an image created by an algorithm was rejected by the US Copyright Office, citing the lack of ‘human authorship.’ Subsequent Copyright Office reviews in 2022 and a 2023 ruling by US District Judge Beryl Howell reinforced this stance, holding that copyright requires human input.

Thaler’s 2025 petition to the Supreme Court warned of a ‘chilling effect’ on AI-driven innovation. By declining to hear the case, however, the court left the lower court’s decision in place, consistent with previous determinations on AI-related intellectual property.

    Recent guidance from the Copyright Office further clarified that AI-generated art based on textual prompts does not qualify for copyright protection, echoing the stance on human-centric authorship.

    The legal debate on AI-generated content extends beyond copyright to patent law, with the US Patent Office and UK Supreme Court both asserting that AI systems lack the legal capacity for patenting due to their non-human nature.

    This case underscores the evolving legal landscape around AI creations and the complex intersections between technology and intellectual property rights.

    Source: The Verge

  • Lenovo Unveils Innovative AI-Powered Productivity Concepts at MWC 2026

    This article was generated by AI and cites original sources.

    At the Mobile World Congress 2026, Lenovo introduced two novel AI-powered productivity companion concepts aimed at enhancing workplace efficiency and offering a touch of artificial companionship to office workers. The first concept, the AI Workmate, features a robotic arm attached to a swiveling base with a screen displaying expressive eyes, allowing users to interact with it through voice commands and gestures. This desk companion can scan physical documents, summarize notes, organize ideas, and even project documents for easy sharing.

    The second concept, the AI Work Companion, resembles a bedside alarm clock but is designed to sync tasks and schedules from various devices, creating a balanced daily plan for users. Additionally, it monitors screen time to prevent burnout and suggests regular breaks throughout the day. These concepts demonstrate Lenovo’s efforts to leverage AI technology to streamline work processes and promote well-being in the workplace.

    Source: The Verge

  • OpenAI Terminates Employee for Insider Trading on Prediction Markets

    This article was generated by AI and cites original sources.

OpenAI recently terminated an employee for trading on prediction markets, including Polymarket, using confidential company information, as confirmed by Wired. The individual allegedly used privileged OpenAI data in these trades, violating the company’s internal policy against exploiting insider information for personal benefit.

    Prediction markets such as Polymarket and Kalshi offer individuals the opportunity to place bets on the outcomes of real-world events. For instance, Polymarket hosts predictions related to OpenAI’s future product announcements and potential public offering in 2026. These markets cover a wide range of events, with substantial sums at stake. Notably, a recent incident saw an accountant secure a $470,300 prize on Kalshi by betting against supporters of DOGE.

While prediction markets distance themselves from gambling by positioning themselves as financial platforms, they do take regulatory action against users who breach trading rules. Kalshi, a regulated exchange, recently penalized and banned a MrBeast editor over similar suspected insider trading. OpenAI has yet to comment further on the matter.

    Source: TechCrunch

  • Perplexity Unveils Unified AI Platform for Enhanced User Experiences

    This article was generated by AI and cites original sources.

    Perplexity has announced the launch of a new computer system that integrates various AI capabilities into a single platform, aiming to streamline workflows and enhance user experiences. The Perplexity Computer, available exclusively on the company’s premium subscription tier, leverages 19 distinct AI models to autonomously execute complex tasks and generate valuable insights. Operating in the cloud, the system offers a range of functionalities, from data collection and analysis to content creation and visualization.

    While TechCrunch has not conducted a hands-on review of the tool, Perplexity showcased sample workflows on its website, illustrating the system’s potential in handling diverse tasks efficiently. Despite canceling a live demonstration due to last-minute product issues, the company remains committed to advancing its technology and meeting user demands in the evolving AI landscape.

    Perplexity’s strategic shift towards consolidating AI resources underscores a broader industry trend towards unified AI solutions, potentially reshaping how users interact with intelligent systems. The company’s approach aligns with the growing demand for comprehensive AI tools that simplify complex processes and empower users across various domains.

    Source: TechCrunch

  • Anthropic’s AI Chatbot Claude Surges in Popularity Amid Pentagon Dispute

    This article was generated by AI and cites original sources.

    Anthropic, a technology company known for its AI chatbot Claude, has seen a significant increase in popularity following its contentious negotiations with the Pentagon. According to TechCrunch, Claude has risen to the second spot among free apps in Apple’s US App Store. This uptick in rankings comes after Anthropic’s attempts to establish safeguards against the Department of Defense utilizing its AI models for mass domestic surveillance or autonomous weapons.

    Initially positioned just outside the top 100 in January, Claude has steadily climbed the ranks throughout February, peaking at number two recently. This spike in interest coincided with the federal government’s directive to discontinue the use of all Anthropic products due to security concerns, as well as the Secretary of Defense’s labeling of the company as a supply-chain threat.

    Following this development, OpenAI, another prominent player in the AI space, announced its own agreement with the Pentagon, emphasizing the inclusion of safeguards related to surveillance and autonomous weaponry. This shift in alliances within the tech industry highlights the complex landscape of AI ethics and government partnerships.

    Source: TechCrunch

  • US Military Designates Anthropic as ‘Supply Chain Risk’ Amid AI Dispute

    This article was generated by AI and cites original sources.

    The U.S. Department of Defense has designated Anthropic, a prominent AI company, as a ‘supply chain risk,’ sparking concerns in the tech industry and raising questions about the future use of its AI models within military contexts.

    The conflict arose from disagreements between the Pentagon and Anthropic regarding the permissible applications of the startup’s AI technology. Anthropic expressed concerns over potential misuse, particularly in mass surveillance or autonomous weaponry scenarios, advocating for limitations on its usage. In response, the Pentagon has taken steps to prohibit any entity doing business with the U.S. military from engaging in commercial activities with Anthropic, citing security implications.

    This decision empowers the Pentagon to safeguard military systems against vulnerabilities, including those related to ownership and influence. Anthropic has vowed to contest the designation in court, highlighting the broader implications for U.S. firms engaged in governmental negotiations.

    This development underscores the complex relationship between tech companies and national security interests, emphasizing the critical role of clear contractual agreements and regulatory frameworks in governing AI deployments within sensitive domains.

    Source: WIRED

  • OpenAI’s Pentagon Deal: Balancing AI Deployment with Ethical Considerations

    This article was generated by AI and cites original sources.

    OpenAI CEO Sam Altman recently announced an agreement with the Department of Defense, allowing the use of OpenAI’s AI models within the department’s secure network. This development comes after a significant conflict involving the Pentagon and OpenAI’s competitor, Anthropic, which raised concerns about the extensive use of AI in military contexts. Anthropic’s stance against mass domestic surveillance and fully autonomous weapons set the stage for a complex debate on the ethical and practical implications of AI deployment in defense operations.

    The disagreement between Anthropic and the Pentagon highlights the challenges in balancing technological advancements with societal values. With over 60 OpenAI employees and 300 Google employees expressing support for Anthropic’s position, the tech community is actively engaged in discussions about the responsible use of AI technologies. The impact of such debates on national security and corporate partnerships is underscored by President Trump’s criticism of Anthropic and the subsequent actions taken by Secretary of Defense Pete Hegseth.

    As the technological landscape continues to evolve, ensuring technical safeguards in AI deployment remains a critical aspect of maintaining ethical standards and upholding democratic values. The recent developments between OpenAI, Anthropic, and the Department of Defense serve as a reminder of the intricate relationship between technology, policy, and societal impact.

    Source: TechCrunch

  • OpenAI and Amazon Unveil Stateful Runtime Environment for Enterprise AI

    This article was generated by AI and cites original sources.

    OpenAI’s recent $110 billion funding injection from SoftBank, Nvidia, and Amazon marks a significant development in enterprise artificial intelligence. While the influx of capital is noteworthy, the real game-changer is OpenAI’s collaboration with Amazon, introducing a ‘Stateful Runtime Environment’ on Amazon Web Services (AWS), the leading cloud platform globally.

This move signals a shift towards autonomous ‘AI coworkers,’ which demand an architectural foundation different from the stateless request-response model behind GPT-4. For businesses on AWS, this means upcoming access to a stateful runtime environment, promising a significant evolution in agentic intelligence capabilities.

    The core innovation lies in the distinction between ‘stateless’ and ‘stateful’ environments. The Stateful Runtime Environment on Amazon Bedrock will enable AI models to maintain persistent context, memory, and identity, revolutionizing developer workflows and reducing the complexity of maintaining context.

    OpenAI’s platform, Frontier, designed to streamline AI agent development and deployment, empowers enterprises to bridge the ‘AI opportunity gap’ by offering shared business context, a robust agent execution environment, and built-in governance. While Frontier resides on Microsoft Azure, AWS will serve as the exclusive cloud distribution provider, allowing AWS customers to leverage agentic workloads seamlessly.

    Enterprises interested in adopting the new Stateful Runtime Environment can register their interest via OpenAI’s dedicated Enterprise Interest Portal, signaling a shift towards production-grade agentic workflows.

    The partnership dynamics between OpenAI, Amazon, and Microsoft present strategic choices for CTOs and decision-makers. While Azure remains the go-to for standard tasks, AWS’s Stateful Runtime Environment excels in complex, long-running agent scenarios, offering a cost-efficient solution for enterprises looking to scale OpenAI models.

    Despite the Amazon investment, Microsoft’s commercial and revenue share relationship with OpenAI remains intact, underscoring the intricate ties between the two tech giants. As OpenAI positions itself as a key infrastructure player straddling Azure and AWS, the enterprise AI landscape is evolving towards tailored solutions based on specific technical requirements.

    Source: VentureBeat

  • Pentagon Designates Anthropic as Supply Chain Risk, Impacting Tech Giants

    This article was generated by AI and cites original sources.

    In a significant move that could impact major tech companies, the U.S. Department of Defense has designated Anthropic as a supply chain risk following President Trump’s ban on the AI company’s products from federal government use. This decision stems from Anthropic’s refusal to provide unrestricted access to its models for defense purposes, leading to concerns about national security implications.

    The designation as a supply chain risk means that no entity doing business with the U.S. military can engage commercially with Anthropic, signaling a significant shift in the tech industry landscape. This development raises questions about the influence of tech companies on national defense and the balance between innovation and security.

    As the Pentagon enforces this designation, tech giants collaborating with Anthropic may face disruptions in their supply chains and operations. The incident serves as a cautionary tale for companies navigating the complexities of integrating AI technologies into critical infrastructure and government operations.

    Source: The Verge