Category: AI

  • Google’s Gemini App Introduces Personal Intelligence Beta for Tailored AI Responses

    This article was generated by AI and cites original sources.

    Google has unveiled a new beta feature in its Gemini app that enhances the AI assistant’s responses by tapping into a user’s Google ecosystem, including Gmail, Photos, Search, and YouTube history. This feature, known as Personal Intelligence, allows Gemini to analyze data across these platforms to offer proactive and context-aware results. Users have full control over this feature, as it is off by default, giving them the choice of when to connect their Google apps to Gemini.

    According to Josh Woodward, VP of the Gemini app at Google Labs and AI Studio, Personal Intelligence enables Gemini to reason across diverse sources and extract specific details from emails or photos to provide tailored answers. For instance, Gemini can pull answers from family photos stored in Google Photos, as demonstrated when Woodward retrieved his car’s tire size and license plate number.

    By leveraging Personal Intelligence, users can receive customized recommendations for various aspects of their lives, from books and shows to travel and shopping. This feature aims to offer a more personalized and helpful AI experience, catering to individual preferences and needs.

    Source: TechCrunch

  • AI-Powered Police Report Error Highlights Risks of Relying on Microsoft Copilot

    This article was generated by AI and cites original sources.

    The use of artificial intelligence in law enforcement operations has come under scrutiny after the chief constable of West Midlands Police in the UK acknowledged a significant error in a football intelligence report attributed to Microsoft’s Copilot AI assistant. The mistaken inclusion of a non-existent football match between West Ham and Maccabi Tel Aviv led to Israeli football fans being wrongly banned from a match.

    Craig Guildford, chief constable of West Midlands Police, disclosed the error in a letter to the Home Affairs Committee, attributing it to the use of Microsoft Copilot. The AI assistant, which generates text among other tasks, appears to have hallucinated the fictitious game, which then made its way into the official police report.

    This incident highlights the potential risks associated with relying on AI technologies for critical decision-making processes, especially in sensitive areas like law enforcement and public safety. The repercussions of this mistake were felt in real-world scenarios, with Maccabi Tel Aviv fans facing unwarranted bans from a Europa League match due to the inaccurate intelligence report.

    Despite disclaimers about the possibility of errors in the Copilot interface, this incident underscores the importance of rigorous validation and oversight when integrating AI tools into operational workflows. As the incident raises questions about the reliability and accountability of AI systems in law enforcement, it prompts a broader conversation about the role of technology in policing practices and the necessity for robust validation mechanisms to prevent similar errors in the future.

    Source: The Verge

  • Consumer Watchdog Raises Concerns About Google’s AI Shopping Protocol, Google Responds

    This article was generated by AI and cites original sources.

    A consumer economics watchdog has raised concerns about Google’s new Universal Commerce Protocol for AI-powered shopping agents. Lindsay Owens, executive director of Groundwork Collaborative, expressed worries about potential overcharging through personalized upselling based on chat data analysis. Google disputed these claims, stating that it prohibits merchants from displaying higher prices on Google than on their own sites. The company clarified that ‘upselling’ refers to offering premium product options, with users retaining the final decision on purchases. Google also highlighted a pilot program, ‘Direct Offers,’ which enables merchants to provide lower prices or additional services like free shipping. The company emphasized that its Business Agent does not have the capability to modify prices as alleged.

    Source: TechCrunch

  • Google’s Veo 3.1 Update Empowers Vertical Video Creation from Reference Images

    This article was generated by AI and cites original sources.

    Google has enhanced its Veo 3.1 AI video generation model, enabling users to create vertical videos for social platforms directly from reference images. This update allows creators to produce native 9:16 vertical videos for platforms like YouTube Shorts, Instagram, and TikTok, eliminating the need for cropping. The feature has been integrated into YouTube Shorts and the YouTube Create app, streamlining the video creation process.

    Introduced in October 2025, Veo 3.1 initially focused on improving audio output and editing controls. With this recent update, the AI model now delivers videos with enhanced character expressions and movements based on reference images, even with concise prompts. Google highlights that the update improves consistency across characters, objects, and backgrounds, allowing users to seamlessly blend various elements for a coherent final output.

    These new features are accessible through the Gemini app, while professional users can leverage them via Google’s video editor Flow, Gemini API, Vertex AI, and Google Vids. Furthermore, the update includes an enhanced upscaling feature supporting resolutions up to 1080p and 4K on Flow, Gemini API, and Vertex AI within Google Cloud.

    Source: TechCrunch

  • 1X Unveils AI World Model to Enhance Neo Robot Learning Capabilities

    This article was generated by AI and cites original sources.

    1X, the robotics company known for the Neo humanoid robot, has introduced a new AI model called the 1X World Model. The model is designed to enhance Neo’s learning capabilities by enabling the robots to understand real-world dynamics and to learn on their own.

    The 1X World Model, based on physics principles, leverages video data and prompts to empower Neo robots with new skills. By utilizing internet-scale video content, the Neo robots can acquire knowledge beyond their initial training, allowing them to tackle novel tasks.

    This advancement is part of 1X’s preparation to launch Neo humanoids for household use, following the commencement of pre-orders last October. While specific shipment timelines remain undisclosed, 1X indicated a surge in pre-orders beyond expectations.

    According to 1X CEO Bernt Børnich, the world model, coupled with Neo’s human-like design, enables the robot to extrapolate actions from prompts, facilitating continuous learning and adaptation. The model does not enable instantaneous task execution; rather, it deepens the robots’ understanding of the physical world through iterative feedback loops, giving users insight into Neo’s decision-making and its responses to various stimuli.

    Source: TechCrunch

  • Anthropic Unveils Cowork: An AI-Powered Desktop Assistant for Non-Technical Users

    This article was generated by AI and cites original sources.

    Anthropic, a prominent AI company, has announced the launch of Cowork, a desktop agent designed to empower non-technical users to streamline their tasks without coding expertise. Cowork represents a significant advancement in the AI landscape, bridging the gap between technical complexity and user-friendly accessibility.

    The development of Cowork was inspired by Anthropic’s earlier success with Claude Code, a tool primarily intended for developers but creatively repurposed by users for non-coding tasks. This unexpected adaptation prompted Anthropic to create a more consumer-friendly interface, resulting in the creation of Cowork.

    Cowork operates within a folder-based architecture, allowing users to grant specific access for tasks such as file organization, report generation, and document creation. Anthropic’s innovative approach involves an ‘agentic loop,’ enabling Cowork to autonomously execute tasks, seek clarification, and handle multiple instructions simultaneously.

    Cowork’s rapid development timeline showcases a recursive loop in which AI systems enhance their own capabilities, underscoring how quickly AI tools can evolve and expand their functionality.

    Furthermore, Cowork integrates seamlessly with Anthropic’s ecosystem of connectors and browser automation tools, extending its utility beyond local file systems. The inclusion of specialized ‘skills’ enhances Cowork’s document creation capabilities, providing users with a versatile AI assistant.

    The launch of Cowork signifies a shift in AI adoption dynamics, emphasizing workflow integration and user trust as pivotal factors. As organizations navigate the evolving landscape of AI assistants, the capabilities of tools like Cowork raise important questions about user readiness and system capabilities.

    Source: VentureBeat

  • Anthropic Unveils Claude for Healthcare: Advancing AI-Powered Medical Assistance

    This article was generated by AI and cites original sources.

    Following OpenAI’s recent introduction of ChatGPT Health, Anthropic has announced the launch of Claude for Healthcare. This new offering aims to provide advanced tools for healthcare providers, payers, and patients.

    Claude for Healthcare, similar to ChatGPT Health, enables users to sync their health data from various devices such as phones and smartwatches. Both Anthropic and OpenAI have confirmed that the collected data will not be used for model training purposes. However, Claude for Healthcare promises a higher level of sophistication compared to ChatGPT Health, which appears to focus on enhancing patient-side chat experiences.

    Anthropic’s Claude introduces ‘connectors’ that grant the AI access to crucial databases, including the CMS Coverage Database, ICD-10, National Provider Identifier Standard, and PubMed. These connectors streamline research processes and report generation for payers and providers.

    Anthropic’s Chief Product Officer, Mike Krieger, highlighted the significance of Claude for Healthcare in expediting prior authorization reviews. The feature aims to automate administrative tasks, allowing clinicians to focus more on patient care.

    With its innovative approach, Claude for Healthcare exemplifies the ongoing evolution of AI in the medical field, showcasing the potential for improved efficiency and patient care.

    Source: TechCrunch

  • Meta Unveils ‘Meta Compute’ to Bolster AI Infrastructure

    This article was generated by AI and cites original sources.

    Meta, formerly known as Facebook, is intensifying its focus on AI by unveiling Meta Compute, a new initiative aimed at strengthening the company’s AI infrastructure. The move comes after Meta’s previous announcement of significant investments to expand its AI capabilities. Last year, Meta disclosed its plans to enhance its AI business, emphasizing the importance of advanced AI infrastructure in developing superior AI models and product experiences. According to Meta’s CFO, Susan Li, leading AI infrastructure provides a strategic advantage.

    CEO Mark Zuckerberg revealed the launch of Meta Compute to reinforce the tech giant’s AI infrastructure, signaling a substantial increase in energy consumption in the coming years. Zuckerberg emphasized Meta’s ambitious plans to build tens of gigawatts of capacity this decade and potentially hundreds of gigawatts over time. This capacity expansion is crucial to supporting Meta’s AI operations, which are projected to significantly increase America’s electricity usage over the next decade.

    To drive this initiative, Zuckerberg appointed key executives, including Santosh Janardhan, Meta’s head of global infrastructure, and Daniel Gross, who will lead a dedicated group responsible for long-term capacity strategy and supplier partnerships. The strategic focus on AI infrastructure underscores Meta’s commitment to fortifying its AI capabilities for future growth and innovation.

    Source: TechCrunch

  • Anthropic Unveils ‘Claude Cowork’ Feature to Enhance AI Agent Capabilities

    This article was generated by AI and cites original sources.

    Anthropic, a leading AI technology company, has introduced a new feature called ‘Claude Cowork’ aimed at expanding the capabilities of its AI agent, Claude. This innovative addition is designed to provide a more user-friendly approach to non-coding tasks, leveraging the growing interest in Anthropic’s Claude Code platform.

    According to The Verge, Anthropic unveiled ‘Claude Cowork’ on Monday, positioning it as a ‘research preview’ to gather insights on user interactions and inform future development. The feature lets users grant Claude access to a designated folder, enabling the AI chatbot to edit, create, and organize files.

    Anthropic emphasized the seamless workflow facilitated by ‘Claude Cowork,’ eliminating the need for manual context provision or format conversions. Users can queue tasks for Claude to work on concurrently, fostering a collaborative and efficient dynamic akin to interacting with a coworker.

    This strategic move by Anthropic aligns with the industry’s collective pursuit of developing highly practical AI agents, underscoring a commitment to enhancing user experiences and productivity.

    Source: The Verge

  • Anthropic Unveils Cowork: A Streamlined Alternative to Claude Code

    This article was generated by AI and cites original sources.

    Anthropic, a leading AI company, has introduced Cowork, a user-friendly alternative to its Claude Code tool. Incorporated into the Claude Desktop app, Cowork allows users to specify a folder where the Claude AI can interact with files, all managed through a simple chat interface. This tool offers a streamlined experience similar to a sandboxed version of Claude Code, but with a lower technical barrier to entry.

    Currently in a research preview phase, Cowork is exclusively accessible to Max subscribers, with other users able to join a waitlist for future access. The development of Cowork was influenced by the increasing number of subscribers leveraging Claude Code for tasks beyond coding, viewing it as a versatile AI tool. Operating on the Claude Agent SDK foundation, Cowork shares the same core model as Claude Code, empowering users to control file access through designated folders without the need for complex technical configurations.

    This new offering from Anthropic opens up diverse applications for users, such as compiling expense reports from image folders, managing media assets, monitoring social media content, and analyzing conversations. Like Claude Code, Cowork automates sequences of actions without constant user input, which introduces risks such as prompt injection and unintended file deletion.

    Anthropic emphasizes the need for precise user instructions to mitigate these risks, especially as Cowork represents a more advanced tool beyond traditional conversational AI interfaces. Launched initially as a command-line tool in 2024, Claude Code has proven to be a standout product for the company, showcasing its commitment to advancing AI capabilities.

    Source: TechCrunch

  • Apple Collaborates with Google to Enhance Siri’s AI Capabilities

    This article was generated by AI and cites original sources.

    Apple has announced a strategic partnership with Google to enhance its AI features, such as Siri, by leveraging Google’s Gemini models and cloud technology. This non-exclusive, multi-year agreement marks a significant collaboration in the tech industry.

    In a joint statement, Apple and Google expressed enthusiasm for the partnership, highlighting Google’s advanced AI capabilities as the ideal foundation for improving Apple’s AI offerings. While the financial details remain undisclosed, reports suggest that Apple could be investing approximately $1 billion to access Google’s Gemini models and cloud infrastructure.

    Apple’s decision to collaborate with Google follows a period of exploring technologies from competitors like OpenAI and Anthropic. By incorporating Google’s Gemini models and cloud technology, Apple aims to enhance the capabilities of its AI features, addressing past criticisms of Siri’s performance compared to rival assistants.

    Apple’s approach to AI integration prioritizes user privacy, with a focus on on-device processing and secure infrastructure. This commitment to privacy will continue throughout the partnership with Google, ensuring that user data remains protected.

    While Apple’s AI advancements may not always receive the same level of attention as other industry players, the company’s methodical progress in AI development underscores its dedication to delivering reliable and privacy-conscious technologies.

    Source: TechCrunch

  • Apple Collaborates with Google’s Gemini AI to Enhance Siri

    This article was generated by AI and cites original sources.

    Apple has announced a partnership with Google to leverage the Gemini AI model for a more personalized Siri experience. The collaboration will enable Apple to integrate Gemini and Google’s cloud technology into future Siri models, as reported by CNBC.

    Over the past year, Apple has been working to enhance Siri’s capabilities, aiming for a version that can execute tasks on behalf of users and better understand individual contexts. A previous delay in the update was acknowledged by Apple, citing unforeseen challenges in development.

    Bloomberg disclosed that Apple had also considered using a customized iteration of Gemini to power AI features within Siri, such as a ‘World Knowledge Answers’ feature that lets users access AI-generated summaries of web information. The company also made organizational changes within its AI division, reportedly moving Vision Pro chief Mike Rockwell to lead Siri following the departure of former AI chief John Giannandrea.

    While exploring partnerships with other AI firms, Apple’s CEO Tim Cook hinted at future collaborations to broaden AI integration. As the tech giant continues to innovate, users can expect advanced AI capabilities in Siri driven by the collaboration with Google’s Gemini AI model.

    Source: The Verge

  • Google Unveils Universal Commerce Protocol to Enhance AI-Powered Shopping

    This article was generated by AI and cites original sources.

    Google has made significant strides in the realm of AI-powered shopping by unveiling plans to transform Gemini into a merchant platform and introducing an open-source standard, the Universal Commerce Protocol (UCP), in collaboration with key retailers such as Shopify, Walmart, and Target. This initiative aims to enhance communication between AI agents and retailers’ systems across the entire shopping journey, encompassing product discovery, payment processing, and post-sales support.

    At the recent National Retail Federation conference, Google disclosed its partnership with major players like Shopify, Target, Walmart, Wayfair, and Etsy to establish a new industry benchmark for AI-enabled shopping. The UCP is designed to provide a common framework for leveraging generative AI capabilities, offering compatibility with existing industry standards like the Model Context Protocol. This move is poised to revolutionize the e-commerce landscape by enabling seamless integration between AI tools and retailers’ systems.

    Google envisions that the forthcoming “checkout feature” on Search and Gemini, enabled by UCP, will empower users to make direct purchases through AI tools, eliminating the need to navigate between different platforms or websites. This aligns Google’s Gemini and AI Mode in Search with competitors like Microsoft’s Copilot and OpenAI’s ChatGPT, which already offer similar purchasing functionalities.

    The introduction of UCP as an open-source standard aims to drive widespread adoption among retailers and stakeholders in the e-commerce ecosystem, ultimately establishing it as the go-to solution for navigating the complexities of AI-driven shopping experiences.

    Source: The Verge

  • Motional Shifts Focus to AI for Driverless Robotaxi Service Launch in 2026

    This article was generated by AI and cites original sources.

    Motional, a key player in the autonomous vehicle industry, is making strategic changes to prioritize artificial intelligence in its quest to launch a driverless robotaxi service by the end of 2026. The company, formed through a collaboration between Hyundai Motor Group and Aptiv, is now aiming for a driverless service debut in Las Vegas.

    With a renewed emphasis on AI, Motional is reimagining its self-driving system to achieve a commercially viable and scalable solution. By leveraging the latest advancements in AI technology, Motional plans to transition from a human safety operator model to a fully autonomous robotaxi service in the near future.

    Recognizing the potential of AI and the need for cost-effective global scalability, Motional has taken a decisive step towards an AI-centric approach. The company’s president and CEO, Laura Major, highlighted this shift during a recent presentation in Las Vegas, emphasizing the importance of adapting to industry trends and technological progress.

    This strategic pivot underscores Motional’s commitment to innovation and its determination to navigate the evolving landscape of autonomous transportation. As Motional drives forward with its AI-driven robotaxi ambitions, the industry eagerly anticipates the impact of this technological evolution on the future of mobility.

    Source: TechCrunch

  • Google Addresses Inaccurate AI-Generated Medical Overviews

    This article was generated by AI and cites original sources.

    Google has taken action to remove misleading and potentially harmful AI-generated overviews from certain medical search results. The move comes after an investigation by The Guardian revealed instances where Google’s AI provided inaccurate information that could have serious consequences for individuals seeking medical advice.

    One concerning case highlighted by experts involved Google wrongly advising pancreatic cancer patients to avoid high-fat foods, a recommendation that could have adverse effects on their health outcomes. In another instance, Google inaccurately presented information on crucial liver function tests, potentially leading individuals with serious liver conditions to believe they are healthy when they are not.

    This development underscores the critical importance of ensuring the accuracy and reliability of AI-driven information, particularly in sensitive areas like healthcare. As AI continues to play a significant role in information dissemination, the need for rigorous fact-checking and oversight becomes increasingly apparent.

    Source: The Verge

  • Google Removes AI-Generated Health Overviews After Accuracy Concerns

    This article was generated by AI and cites original sources.

    Google has taken action to address concerns raised by an investigation from the Guardian regarding potentially misleading information in its AI-generated Overviews for health-related queries. The Guardian initially reported discrepancies in responses to queries about liver blood tests, prompting Google to remove AI Overviews for specific questions related to liver blood tests and liver function tests. However, variations of the queries could still trigger AI-generated summaries.

    Following the Guardian’s report, a Google spokesperson stated that the company aims to make broad improvements to healthcare-related search results, without commenting on individual removals within Search. Google’s efforts to enhance its healthcare search features were acknowledged last year with the introduction of improved overviews and health-focused AI models.

    Vanessa Hebditch, the director of communications and policy at the British Liver Trust, welcomed the removal of the AI Overviews, emphasizing broader concerns about the accuracy and reliability of health information available online.

    Source: TechCrunch

  • OpenAI’s Request for Real Work Samples Raises Intellectual Property Concerns

    This article was generated by AI and cites original sources.

    OpenAI, in collaboration with Handshake AI, is reportedly seeking third-party contractors to upload real work samples from their previous and current positions as part of an effort to improve training data quality for AI models. The move reflects a broader trend of AI companies leveraging contractors to generate high-quality training data, aiming to advance the automation of white-collar tasks.

    Contractors are asked to detail tasks undertaken in prior roles and share tangible examples of their actual work output, such as Word documents, PDFs, images, and more. OpenAI emphasizes the removal of proprietary or personally identifiable information before submission, suggesting the use of a ChatGPT “Superstar Scrubbing” tool for this purpose.

    However, intellectual property lawyer Evan Brown has raised concerns, highlighting the significant risk OpenAI faces with this approach that heavily relies on contractors’ judgment to identify and safeguard confidential information properly.

    An OpenAI representative declined to comment.

    Source: TechCrunch

  • Indonesia Blocks xAI’s Chatbot Grok Over Concerns About Non-Consensual Deepfakes

    This article was generated by AI and cites original sources.

    Indonesian officials have temporarily blocked access to xAI’s chatbot, Grok, in response to concerns over the proliferation of sexualized deepfake content. The government’s action follows reports of AI-generated imagery on the social network X, showcasing non-consensual and explicit material, including instances of assault and abuse involving real individuals, particularly women and minors.

    Indonesia’s communications and digital minister, Meutya Hafid, emphasized the gravity of non-consensual sexual deepfakes, citing violations of human rights, dignity, and digital citizens’ security. The ministry has engaged with X officials to address these issues, mirroring similar reactions globally.

    India has directed xAI to curb obscene content generation by Grok, while the European Commission has instructed the company to preserve all Grok-related documentation, possibly hinting at a forthcoming probe. In the UK, Ofcom is poised to investigate potential compliance breaches, supported by Prime Minister Keir Starmer’s endorsement of regulatory actions.

    Conversely, the US administration has remained notably silent on the matter, despite xAI CEO Elon Musk’s ties to the administration. Democratic senators have urged Apple and Google to delist X from their app stores in response to the deepfake concerns.

    Source: TechCrunch

  • OpenAI Taps Contractors to Benchmark AI Performance Through Real-World Tasks

    This article was generated by AI and cites original sources.

    OpenAI, a prominent player in the AI domain, is leveraging real-world tasks from contractors to refine the capabilities of its AI models. In a move aimed at evaluating AI performance, OpenAI is soliciting actual assignments and projects from third-party contractors, as revealed by documents from OpenAI and Handshake AI obtained by WIRED.

    The initiative is designed to set a human benchmark for diverse tasks, allowing for a comparative analysis with AI models. Earlier, OpenAI initiated an evaluation process to assess AI model performance against human professionals in various sectors, marking a significant step towards achieving Artificial General Intelligence (AGI).

    Contractors are instructed to outline and upload concrete examples of tasks they have undertaken, emphasizing tangible outputs like Word documents, PDFs, and images. Additionally, contractors can provide mock work examples to showcase their problem-solving approach in specific scenarios.

    Real-world tasks encompass two key elements: the task request and the task deliverable, highlighting the instructions received and the work produced in response. While both OpenAI and Handshake AI have refrained from commenting on this development, the endeavor underscores a strategic move to enhance AI proficiency through practical job-related tasks.

    Source: WIRED

  • Anthropic Enforces Stricter Controls to Prevent Unauthorized Claude Usage

    This article was generated by AI and cites original sources.

    Anthropic, a leading AI technology company, has recently implemented stringent measures to prevent third-party applications from spoofing its official coding client, Claude Code. These measures aim to curb unauthorized access to Anthropic’s AI models by applications seeking more favorable pricing and limits, which could disrupt workflows for users of the popular open-source coding agent OpenCode. Anthropic has also restricted rival labs, such as xAI, from using its models through tools like Cursor to train systems that compete with Claude Code.

    According to Thariq Shihipar, a member of technical staff at Anthropic, the move is a response to unauthorized harnesses: software wrappers that enable automated workflows but can introduce bugs and usage patterns that Anthropic cannot properly diagnose. Such harnesses bridge the gap between a consumer subscription and an automated workflow, as seen in OpenCode.

    The economic tension surrounding this crackdown stems from the cost dynamics. Third-party harnesses enable high-intensity automation that could be cost-prohibitive on metered plans, prompting discussions within the developer community about the true cost of such automation.

    By blocking unauthorized harnesses, Anthropic is redirecting high-volume automation towards sanctioned pathways like the Commercial API or Claude Code, where they can maintain control over rate limits and execution environments.

    The community response has been mixed, with some expressing concerns about customer hostility, while others acknowledge the need for safeguarding against abuse of subscription authentication.

    This consolidation of the ecosystem indicates a shift towards more controlled access to Claude’s reasoning capabilities, reflecting a broader trend in the industry to protect intellectual property and computing resources.

    Source: VentureBeat