Tag: WIRED

  • Grammarly Faces Class Action Lawsuit Over AI-Generated ‘Expert’ Content

    This article was generated by AI and cites original sources.

    Grammarly, the popular writing software company, is facing a class action lawsuit over an AI feature that generated editing suggestions attributed to established authors and academics without their consent.

    The ‘Expert Review’ feature falsely presented insights from notable figures like Stephen King and Neil deGrasse Tyson to Grammarly users. The lawsuit, filed by investigative journalist Julia Angwin, alleges that Grammarly profited by misusing the identities of numerous writers and editors.

    The lawsuit, seeking damages exceeding $5 million, was lodged in the Southern District of New York. Following public criticism, Grammarly disabled the contentious feature, acknowledging the need to better empower experts regarding their representation within the tool.

    Grammarly’s product management director, Ailian Gan, expressed regret over the misstep, emphasizing the intent to enhance user experience while respecting experts’ preferences. The incident underscores the complexities of utilizing AI in content creation and the importance of ethical considerations when leveraging prominent personalities’ identities.

    Source: WIRED

  • US Tech Giants Identified as Potential Targets in Escalating Digital Conflict

    This article was generated by AI and cites original sources.

    Amid the escalating conflict involving Iran, Israel, and the US, major US technology companies such as Google, Microsoft, and Palantir have been identified as potential targets as the digital realm becomes a new battleground.

    Iranian state-linked media recently published a list of US companies with Israeli connections, highlighting their technology used in military applications. The listed companies include Google, Microsoft, Palantir, IBM, Nvidia, and Oracle, many of which have operational presence in the Gulf region, including the United Arab Emirates.

    No public statements have been released by these companies regarding the potential targeting. The list was disclosed by the Tasnim News Agency, associated with Iran’s Islamic Revolutionary Guard Corps, along with a cautionary message that the conflict’s scope could extend beyond conventional military targets.

    Recent events saw Iranian drone strikes causing damage to Amazon Web Services data centers in the UAE and Bahrain, underscoring the vulnerability of physical tech infrastructure in the region. Iranian reports have also mentioned an Israeli strike on a bank building in Tehran, prompting Iranian officials to consider economic infrastructure as potential targets.

    The evolving situation raises concerns about the impact on tech companies caught in the crossfire of geopolitical tensions, highlighting the need for robust cybersecurity measures and contingency plans in an increasingly digitized world.

    Source: WIRED

  • Nvidia Invests $26 Billion in Open-Source AI Models to Bolster Its AI Capabilities

    This article was generated by AI and cites original sources.

    Nvidia, a prominent AI infrastructure provider, is set to invest $26 billion over the next five years in developing open-source artificial intelligence models. This strategic move, as reported by WIRED, positions Nvidia to compete with industry leaders like OpenAI, Anthropic, and DeepSeek.

    By expanding into open-source AI models, Nvidia aims to enhance its capabilities beyond chip manufacturing, potentially transforming into a cutting-edge AI research hub. This investment aligns with Nvidia’s hardware-focused approach, as these models are optimized for the company’s chips.

    Open-source models are characterized by the public release of the model’s weights and parameters, enabling widespread access for experimentation and utilization. Nvidia’s transparency in sharing the technical details of its model development fosters collaboration among startups and researchers, encouraging innovation and iteration in the AI field.

    Recently, Nvidia unveiled Nemotron 3 Super, its latest open-source AI model, with 128 billion parameters. Surpassing OpenAI’s GPT-OSS across various benchmarks, Nemotron 3 Super showcases Nvidia’s commitment to pushing the boundaries of AI capabilities.

    The introduction of advanced training methodologies underscores Nvidia’s dedication to enhancing model reasoning and contextual understanding, setting new standards for AI model development.

    Source: WIRED

  • Nick Clegg Shifts Focus to Educational AI Innovations

    This article was generated by AI and cites original sources.

    Former UK deputy prime minister Nick Clegg, known for his recent involvement in the AI industry, is shifting his focus away from discussions about superintelligence. After departing Meta, Clegg has taken on new roles at British data center firm Nscale and education startup Efekta, demonstrating a focus on AI applications in education.

    At Efekta, a subsidiary of EF Education First, Clegg’s expertise in politics and technology is expected to guide the expansion of an AI-based teaching assistant. This assistant personalizes learning experiences, provides student progress reports, and aims to replicate one-on-one instruction on a larger scale. Currently serving around 4 million students, primarily in Latin America and Southeast Asia, Efekta looks to leverage Clegg’s insights for further growth.

    In a recent interview, Clegg emphasized the transformative potential of AI in educational settings while expressing concerns about power concentration in Silicon Valley and regulatory challenges in Europe. His pragmatic stance positions him between AI doomsayers and enthusiasts, highlighting the importance of balanced discourse in the AI landscape.

    Source: WIRED

  • Trump Administration’s Potential Ban on Anthropic Tools Raises Concerns in AI Industry

    This article was generated by AI and cites original sources.

    The Trump administration is considering further action against Anthropic, an AI startup, by finalizing an executive order to ban the company’s tools from government use. The White House’s move comes despite Anthropic’s legal challenge against the previous sanctions. During a recent court hearing, the Justice Department declined to rule out additional penalties against Anthropic, signaling ongoing tension between the company and the government.

    With significant revenue at stake, Anthropic is seeking court intervention to suspend the risk designation and prevent future punitive measures. The company faces business uncertainty as customers withdraw from deals due to the government’s actions. The legal battle underscores the complexities of AI regulation and its impact on tech startups like Anthropic.

    As the situation unfolds, the tech industry is closely watching how this conflict between Anthropic and the Trump administration could shape the regulatory landscape for AI companies. The outcome of this case may set precedents for government intervention in the AI sector and influence future business strategies within the industry.

    Source: WIRED

  • Anthropic Faces Potential Revenue Loss Due to Supply Chain Risk Designation

    This article was generated by AI and cites original sources.

    Anthropic, an AI startup, is grappling with potential revenue loss after being labeled a supply-chain risk by the US Department of Defense. This designation has led to disrupted deal talks and raised concerns among current and prospective clients, putting billions of dollars in sales at risk.

    According to court filings, Anthropic’s Chief Financial Officer, Krishna Rao, warned of the significant financial impact, with hundreds of millions of dollars in expected revenue already in jeopardy. Rao said the company could lose billions more in sales if the government’s pressure triggers a broader trend of businesses avoiding Anthropic.

    Despite achieving significant sales exceeding $5 billion since 2023, Anthropic faces financial challenges due to heavy investments in computing infrastructure and ongoing profitability issues. The company has invested over $10 billion in training and deploying its models, showcasing the high costs associated with AI development.

    Anthropic’s Chief Commercial Officer, Paul Smith, cited examples of partners expressing distrust and fear of association due to the supply-chain risk designation. Financial services customers have paused negotiations, and some have refused to proceed with deals, reflecting a growing apprehension within the business ecosystem.

    These developments underscore the intricate interplay between AI startups, government designations, and business repercussions, highlighting the vulnerabilities that emerging tech companies face in navigating regulatory landscapes.

    Source: WIRED

  • DHS Reassigns CBP Privacy Officers Amid Concerns Over Surveillance Records

    This article was generated by AI and cites original sources.

    The U.S. Department of Homeland Security (DHS) has recently made significant changes within the Customs and Border Protection (CBP) agency, reassigning top officials amid concerns over record-handling practices related to surveillance technologies. This move comes after objections were raised regarding the mislabeling of government records to prevent their public release under the Freedom of Information Act (FOIA).

    According to WIRED, the DHS took action following disputes over the classification of records, particularly privacy assessments, as ‘drafts’ to avoid disclosure. These actions led to the removal of key individuals responsible for ensuring CBP technologies align with federal privacy regulations. The reshuffling of personnel within the CBP’s privacy and FOIA offices signals a broader conflict over transparency and privacy compliance.

    One notable incident that triggered these changes was the release of a redacted Privacy Threshold Analysis (PTA) related to the Mobile Fortify face recognition app. The PTA revealed details about the app’s data collection practices, including the capture of individuals’ faces and fingerprints without explicit consent.

    The repercussions of these reassignments and the handling of privacy assessments raise questions about the transparency and accountability of government surveillance initiatives. This development underscores the ongoing challenges in balancing security needs with individual privacy rights, especially in the realm of emerging surveillance technologies.

    Source: WIRED

  • AI-Generated Misinformation Spreads Amid Iran Conflict on Social Media

    This article was generated by AI and cites original sources.

    In the midst of the conflict between the US, Israel, and Iran, the proliferation of AI-generated misinformation has inundated social media platforms, notably X, with misleading images and videos. Tal Hagin, a disinformation expert, highlighted how X’s AI-powered chatbot, Grok, failed to verify the accuracy of Iranian missile strike footage, often resorting to sharing its own potentially misleading AI-generated images.

    Since the outbreak of the conflict, X has become a breeding ground for misinformation, with a growing number of accounts spreading fabricated and repurposed videos. The trend has been exacerbated by a surge in AI-generated content, much of it disseminated by verified accounts and Iranian officials and depicting exaggerated scenarios.

    The accessibility of AI image and video creation tools has facilitated the production of convincing but false content. For instance, videos depicting a fictional attack on a high-rise building in Bahrain and the capture of US troops by Iranian forces garnered millions of views before being removed.

    While some of the AI content shared on X is more obviously fictitious, such as a video purportedly showing missile production inside a cave, the widespread dissemination of these misleading materials underscores the challenge of combating misinformation in the digital age.

    Source: WIRED

  • GPS Attacks Disrupt Delivery and Mapping Apps: Understanding the Vulnerabilities of Satellite Navigation

    This article was generated by AI and cites original sources.

    Recent disruptions in delivery and navigation apps have left users puzzled as routes suddenly change and locations appear inaccurate. These anomalies are attributed to electronic warfare tactics, particularly in regions near Iran where GPS attacks have become prevalent. While such attacks are commonly used in military conflicts to hinder opponent guidance systems, the repercussions extend beyond the battlefield to civilian services.

    Electronic warfare techniques such as GPS jamming and GPS spoofing are the primary methods employed to disrupt satellite signals. GPS jamming involves overpowering GPS satellite signals with stronger noise signals, rendering navigation and timing systems ineffective. On the other hand, GPS spoofing deceives receivers by providing false location information, creating a different kind of disruption.
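    The distinction matters for detection: a jammed receiver simply loses its fix, while a spoofed receiver reports positions that are wrong but plausible-looking. One common heuristic for catching spoofing is a plausibility check on consecutive position fixes, sketched below (an illustrative toy, not any real receiver’s detection logic; the 900 km/h speed ceiling is an arbitrary assumption):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_spoofed_fixes(fixes, max_speed_kmh=900):
    """Flag GPS fixes that imply an impossible speed between samples.

    fixes: time-ordered list of (timestamp_seconds, lat, lon) tuples.
    Returns indices of fixes whose implied speed exceeds max_speed_kmh.
    """
    suspicious = []
    for i in range(1, len(fixes)):
        t0, lat0, lon0 = fixes[i - 1]
        t1, lat1, lon1 = fixes[i]
        dt_hours = (t1 - t0) / 3600
        if dt_hours <= 0:
            continue
        speed = haversine_km(lat0, lon0, lat1, lon1) / dt_hours
        if speed > max_speed_kmh:
            suspicious.append(i)
    return suspicious
```

    A real receiver would combine checks like this with signal-strength monitoring and cross-checks against inertial sensors, since a careful spoofer can walk the reported position away gradually.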

    GPS, despite being a vital technology for sectors like aviation, shipping, and digital services, is susceptible to disruption because the signals it broadcasts from satellites are extremely weak by the time they reach receivers on the ground. The ease with which GPS signals can be disrupted highlights the vulnerability of our reliance on satellite-based navigation systems.

    Understanding the intricacies of GPS attacks sheds light on the challenges faced by both military operations and civilian applications that heavily rely on precise location data. As technology continues to advance, securing satellite signals against such attacks becomes increasingly crucial for ensuring the seamless functioning of essential services.

    Source: WIRED

  • New Ultralight Aircraft to Take Flight in U.S. Skies Ahead of FAA Approval

    This article was generated by AI and cites original sources.

    The U.S. Department of Transportation has unveiled a pilot program set to launch new types of aircraft, often referred to as ‘flying cars,’ into U.S. airspace starting this summer. These innovative vehicles, such as electric vertical takeoff and landing aircraft (eVTOLs), combine the capabilities of helicopters and airplanes, enabling them to take off and land in tight spaces while functioning as traditional aircraft.

    Eight regions, including New York, New Jersey, Texas, Florida, and Albuquerque, New Mexico, will participate in a three-year initiative that will see these aircraft transporting passengers and cargo before obtaining full FAA certifications. The companies behind these technologies claim their aircraft are more environmentally friendly, quieter, and cost-effective compared to conventional air transportation methods. Some even offer fully autonomous flight options.

    These new aircraft, like eVTOLs and ultra-short takeoff models, require minimal space for operation, allowing them to take off and land outside of conventional airports, closer to urban areas. Envisioned scenarios include individuals traveling between nearby cities in mere minutes, bypassing ground traffic and potentially reshaping economic dynamics.

    Archer Aviation’s electric air taxi, Midnight, designed for short to medium-range trips with up to four passengers, will participate in pilot projects in several states.

    Source: WIRED

  • CBP’s Use of Online Ad Data for Phone Tracking Raises Privacy Concerns

    This article was generated by AI and cites original sources.

    Recent reports have revealed that the United States Customs and Border Protection (CBP) utilized online advertising data to track phone locations, raising significant privacy concerns. This practice highlights the evolving landscape of surveillance technology and its implications on individual privacy.

    While the use of online ad data for tracking purposes may have provided CBP with valuable insights, the potential privacy infringements and surveillance capabilities associated with this approach are concerning. The intersection of digital advertising and law enforcement activities underscores the need for robust data protection measures and transparency in surveillance practices.

    Concerns about the misuse of personal data and the implications for civil liberties have come to the forefront, underscoring the complex relationship between technology, data privacy, and national security.

    As discussions surrounding data privacy and surveillance practices continue, it is essential for policymakers, tech companies, and regulatory bodies to address the ethical and legal implications of utilizing online ad data for tracking purposes.

    Source: WIRED

  • Jack Dorsey’s Block Restructures with AI-Driven Layoffs

    This article was generated by AI and cites original sources.

    Jack Dorsey, CEO of Block, recently announced a 40% workforce reduction, citing the need to transform the company into an AI-driven organization. In an interview with WIRED, Dorsey explained that advancements in artificial intelligence are reshaping businesses, prompting him to streamline Block’s operations for increased agility.

    Dorsey’s decision reflects a broader trend in the tech industry, where companies are reevaluating traditional structures in favor of AI-powered models. This move underscores the growing importance of AI in optimizing processes and enhancing competitiveness.

    Despite facing scrutiny over the layoffs, Dorsey remains committed to leveraging technology for strategic advantage. His emphasis on AI integration highlights the evolving landscape of tech companies adapting to digital transformation imperatives.

    Source: WIRED

  • Hacking Consumer Security Cameras: A New Frontier in Modern Warfare

    This article was generated by AI and cites original sources.

    In the realm of modern warfare, the traditional tools of surveillance have expanded to include an unexpected asset: consumer security cameras. Recent research highlighted by the Tel Aviv-based security firm Check Point has revealed a surge in hacking attempts targeting everyday security cameras across the Middle East, particularly during critical missile and drone strikes in the region.

    These hacking efforts, believed to be orchestrated by Iranian state hackers, indicate a concerning trend where civilian surveillance devices are being leveraged by militaries to identify targets, strategize attacks, and evaluate the aftermath of military actions. Notably, Iran, Israel, Russia, and Ukraine have all been implicated in utilizing hacked security cameras for military surveillance purposes, signaling a shift towards a new form of reconnaissance in warfare.

    Iran’s adoption of this surveillance tactic echoes similar actions by other nations. Reports have surfaced of the Israeli military gaining access to Tehran’s traffic cameras to facilitate a targeted air strike, underscoring the evolving landscape of cyber-enabled military operations. Additionally, Ukraine has long raised alarms about Russia’s exploitation of civilian cameras for intelligence gathering, prompting reciprocal hacking efforts by Ukrainian hackers to monitor troop movements and potential threats.

    As armed forces worldwide capitalize on the vulnerabilities of networked consumer cameras, the act of hacking these devices has become a standard practice in military operations. This cost-effective strategy provides a remote vantage point for military planners, enabling them to surveil distant targets efficiently and discreetly.

    Source: WIRED

  • OpenAI’s Military Ties: Navigating Conflicting Policies with Microsoft

    This article was generated by AI and cites original sources.

    OpenAI, known for its ChatGPT models, faced scrutiny as sources revealed the Defense Department’s testing of Microsoft’s version of OpenAI technology, despite OpenAI’s ban on military use. The controversy arose after OpenAI’s deal with the US military, prompting internal criticism and calls for transparency from CEO Sam Altman. In 2023, OpenAI explicitly prohibited military access to its AI models, yet the Pentagon had already begun utilizing Azure OpenAI, a Microsoft-offered variant of OpenAI’s technology. This revelation raised questions about the clarity of OpenAI’s usage policies and the involvement of Microsoft, the startup’s major investor with licensing rights.

    While some OpenAI employees expressed wariness towards Pentagon ties, confusion prevailed regarding the applicability of OpenAI’s policies to Microsoft’s products. OpenAI and Microsoft clarified that Azure OpenAI products were not bound by OpenAI’s restrictions. Microsoft’s spokesperson emphasized that the Azure OpenAI Service, available to the US Government since 2023, operated under Microsoft’s terms of service. Notably, Microsoft refrained from specifying when the service was accessible to the Pentagon, highlighting that it did not have ‘top secret’ approval.

    Source: WIRED

  • Apple Unveils Upgraded MacBook Air and Pro Models with Enhanced M5 Chip Lineup

    This article was generated by AI and cites original sources.

    Apple has announced updates to its MacBook Air and MacBook Pro lineup, introducing the latest M5 chip to the MacBook Air and expanding the M5 chip series with the M5 Pro and M5 Max in the MacBook Pro models. The MacBook Air now features the M5 chip, offering faster SSD technology and starting at 512 GB of storage. Additionally, the new M5 Pro and M5 Max configurations in the MacBook Pro deliver improved multicore CPU and graphics performance.

    The M5 Pro can be configured with up to 18 CPU cores and 20 GPU cores, while the M5 Max extends up to 40 GPU cores. These upgrades promise a significant boost in multithreaded performance compared to previous models. The MacBook Air and MacBook Pro updates aim to cater to users seeking improved performance and storage options.

    Preorders for the MacBook Air will be available on Wednesday, with sales beginning on March 11. The announcement of the updated MacBook models showcases Apple’s commitment to enhancing its product lineup with the latest technological advancements.

    Source: WIRED

  • Apple Restricts ByteDance’s Chinese Apps for US Users

    This article was generated by AI and cites original sources.

    Apple has taken steps to prevent iOS users in the United States from downloading or updating ByteDance apps intended for the Chinese market. This move follows TikTok’s recent agreement to transfer its US operations to new ownership.

    ByteDance, the parent company of TikTok, offers a range of apps covering social media, entertainment, and artificial intelligence. Notably, Douyin, the Chinese counterpart of TikTok with over 1 billion monthly active users, is among ByteDance’s popular offerings.

    Previously, iPhone users worldwide could access ByteDance apps via the App Store with a Chinese account. However, since late January, individuals in the US with Chinese App Store accounts have encountered difficulties when attempting to download or update ByteDance apps on Apple devices located in the United States.

    Upon the download attempt, users are now presented with a message stating, ‘This app is unavailable in the country or region you’re in,’ indicating the imposed restrictions. Notably, the restriction seems to be exclusive to ByteDance apps and not applicable to those from other Chinese developers.

    Apple and ByteDance have chosen not to provide comments on this matter. The timing aligns with TikTok’s decision to transfer its US operations, a move influenced by the legislative action known as the TikTok ban law that restricts the distribution of apps majority-owned by ByteDance.

    Source: WIRED

  • Smack Technologies Develops Military AI Models Amid Ethical Concerns

    This article was generated by AI and cites original sources.

    Smack Technologies, an AI startup, has secured a $32 million funding round to develop AI models for military applications, as reported by WIRED. The company, led by CEO Andy Markoff, a former US Marine Forces Special Operations Command member, is focused on creating models that excel in planning and executing battlefield operations.

    Unlike Anthropic, another player in the field, Smack Technologies is less inclined to restrict military use cases. Markoff emphasizes the importance of ethical deployment, stating that those overseeing the technology should adhere to the rules of war.

    Smack’s models learn to devise optimal mission strategies through a trial-and-error approach reminiscent of Google’s AlphaGo program. Expert analysts provide feedback on the viability of chosen strategies, contributing to the model’s learning process. Despite not having the vast resources of established AI labs, Smack is investing significantly in training its initial AI models.
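    As a loose analogy for that trial-and-error loop (not Smack’s actual training pipeline, which the report does not describe in detail), an epsilon-greedy selection over candidate strategies shows how repeated scored feedback converges on the highest-rated option; the strategy names and scoring function below are invented for illustration:

```python
import random

def train_strategy_picker(strategies, expert_score, rounds=2000, epsilon=0.1, seed=0):
    """Trial-and-error selection over candidate strategies.

    expert_score(strategy) -> float in [0, 1], standing in for analyst
    feedback on how viable a chosen strategy is. Epsilon-greedy: mostly
    exploit the best-known strategy, occasionally explore the others.
    """
    rng = random.Random(seed)
    totals = {s: 0.0 for s in strategies}   # cumulative reward per strategy
    counts = {s: 0 for s in strategies}     # times each strategy was tried
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts.values()):
            choice = rng.choice(strategies)  # explore
        else:
            # Exploit: pick the strategy with the best average score so far.
            choice = max(strategies, key=lambda s: totals[s] / max(counts[s], 1))
        reward = expert_score(choice)
        totals[choice] += reward
        counts[choice] += 1
    return max(strategies, key=lambda s: totals[s] / max(counts[s], 1))
```

    In a real system the "expert" signal would be noisy human ratings rather than a fixed lookup, and the policy would generalize across missions instead of memorizing a score table.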

    The intersection of AI and military operations has sparked debates in Silicon Valley, particularly evident in the clash between the Department of Defense and Anthropic over a substantial contract. Anthropic’s reluctance to place restrictions on autonomous model use led to tensions, with defense officials citing the company as a supply chain risk.

    Source: WIRED

  • Grammarly’s AI-Powered Expert Reviews: Blurring the Lines Between Automation and Human Expertise

    This article was generated by AI and cites original sources.

    Superhuman, the company behind Grammarly, has announced a new AI feature that offers ‘expert’ reviews from renowned authors, both living and deceased. This innovation marks a significant shift in the evolution of Grammarly, originally known for grammar and spelling corrections.

    CEO Shishir Mehrotra announced the rebranding of the company to Superhuman to align with its expanded suite of AI products. The enhanced Grammarly platform now includes diverse AI functionalities such as an AI chatbot, a ‘paraphraser’ for style suggestions, and an ‘AI grader’ that predicts the score a document would receive as college coursework.

    One of the most intriguing additions is the ‘expert review’ option, allowing users to receive feedback from real academics and authors, albeit without their involvement in the process. This move raises questions about the boundaries of AI-generated content and the ethical implications of simulating expert critiques.

    While Grammarly’s AI advancements offer users a range of writing assistance tools, the introduction of ‘expert’ reviews blurs the line between automated feedback and human expertise. The seamless integration of AI technologies into everyday writing tasks signifies a larger trend towards AI-driven content creation and collaboration.

    Source: WIRED

  • Congress Investigates Vulnerability of Computers to Espionage Techniques

    This article was generated by AI and cites original sources.

    U.S. lawmakers are raising concerns about the susceptibility of computers to espionage techniques that exploit electromagnetic and acoustic leaks, known as side-channel attacks. The National Security Agency’s spying technique, codenamed TEMPEST, has resurfaced as a topic of interest for Senator Ron Wyden and Representative Shontel Brown.

    Side-channel attacks capitalize on the unintentional emanations from electronic devices, such as radio waves, sound, and vibrations, to intercept private data and activities. Despite being a longstanding issue in computer security, the potential risks posed by these attacks on public and classified information have prompted the call for a thorough investigation.
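    A software analog of this leakage is a timing side channel: an early-exit comparison runs longer the more leading bytes match, so an attacker measuring response times can recover a secret byte by byte. The minimal Python sketch below contrasts a leaky comparison with the constant-time alternative the standard library provides:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: runtime grows with the length of the matching
    # prefix, leaking information about the secret through timing.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its runtime does not reveal where the first difference occurs.
    return hmac.compare_digest(a, b)
```

    Timing is the software counterpart of the electromagnetic and acoustic emanations TEMPEST concerns; the defensive principle is the same in both cases: make observable behavior independent of secret data.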

    Wyden and Brown have requested the Government Accountability Office to assess the vulnerability of modern computers to TEMPEST-style surveillance and evaluate the need for enhanced protective measures by device manufacturers. Their initiative aims to address the broader implications of side-channel attacks, emphasizing the importance of safeguarding critical technologies from potential exploitation by adversaries.

    Accompanying their inquiry is a Congressional Research Service report shedding light on the historical context of TEMPEST and its contemporary relevance in the realm of cybersecurity. The report underscores the significance of understanding and mitigating the risks associated with side-channel attacks to uphold national security interests.

    Source: WIRED

  • Extending the Life of Older PCs with Google’s ChromeOS Flex USB Sticks

    This article was generated by AI and cites original sources.

    Google, in collaboration with Back Market, is introducing USB sticks designed to extend the life of older laptops and desktops. Priced at $3 each, the sticks contain ChromeOS Flex, Google’s cloud-based operating system, allowing users to revive aging Windows and Intel-powered Mac devices.

    The initiative aims to combat e-waste by providing a cost-effective alternative for individuals with older PCs struggling with outdated hardware and software support. With an initial release of 3,000 USB keys scheduled for March 30, Back Market plans to scale production based on demand, addressing the growing need for sustainable tech solutions.

    ChromeOS Flex enables users to leverage Google’s cloud infrastructure to run resource-intensive programs on legacy devices. The service is compatible with most Windows laptops and with older Intel-based Apple computers, though it does not support Apple’s newer M-series machines.

    As the tech industry grapples with rising hardware costs driven by memory shortages, initiatives like ChromeOS Flex offer a cost-efficient strategy to prolong the usability of existing devices. This development comes amid increasing challenges in affordable PC upgrades, exemplified by the recent price hikes in Apple’s latest MacBook models.

    Source: WIRED