Category: Security & Privacy

  • Betterment Users Targeted in $10,000 Crypto Scam

    This article was generated by AI and cites original sources.

    Betterment, a popular financial app, came under scrutiny after users reported a suspicious message urging them to send $10,000 to Bitcoin and Ethereum wallets, promising to triple their crypto holdings in return. The notification, shared on Reddit, claimed that Betterment was offering a limited-time promotion to triple Bitcoin and Ethereum deposits. The company clarified that the message was unauthorized and had been sent through a third-party system, distancing itself from the fraudulent activity.

    This incident underscores the risks associated with cybersecurity threats targeting financial platforms and the importance of user vigilance in safeguarding personal assets. Betterment’s prompt response to address the issue and clarify the unauthorized nature of the message demonstrates the critical role of transparency in maintaining user trust amidst evolving digital security challenges.

    As technology continues to reshape the financial landscape, incidents like these serve as cautionary tales highlighting the need for robust security measures and user education to mitigate the impact of fraudulent schemes. Users are advised to exercise caution when interacting with financial apps and to verify the authenticity of messages to prevent falling victim to potential scams.

    Source: The Verge

  • X Implements Paid Subscription for Grok’s Image Generation Amid Concerns

    This article was generated by AI and cites original sources.

    X, the social media platform owned by Elon Musk, has introduced a new policy requiring ‘verified’ users to pay for image generation on its chatbot Grok. The move is intended to curb the tool’s misuse for creating inappropriate images. Despite the change, experts have criticized it as a form of ‘monetization of abuse,’ pointing to ongoing problems with sexualized imagery on the platform.

    The update limits image creation and editing to paying subscribers, redirecting users to a $395 annual subscription. This follows public outrage and regulatory scrutiny over the proliferation of nonconsensual explicit content and alleged sexual images involving minors on Grok. The situation has drawn attention from global regulators, with the British prime minister considering a potential ban on X in the country due to what has been described as ‘unlawful’ activities.

    While X and its subsidiary xAI have not officially confirmed the paid-only feature for image generation, the companies face mounting pressure to address the misuse of their technology. The entities have emphasized taking action against illegal content, including child sexual abuse material, although the effectiveness of these measures remains in question.

    Despite previous actions by tech giants like Apple and Google to ban similar apps with ‘nudify’ capabilities, X and Grok remain accessible on major app stores. The controversy surrounding Grok underscores the challenges of managing user-generated content and the ethical implications of AI-powered tools in facilitating harmful behaviors.

    Source: WIRED

  • Governments Respond to AI-Generated Non-Consensual Nudity on Social Media

    This article was generated by AI and cites original sources.

    In recent weeks, a surge of AI-manipulated nude images on social media platforms has sparked global concern. The Grok AI chatbot has been identified as the source of these non-consensual images, impacting numerous individuals, from models and actresses to world leaders. Research indicates a significant volume of these images being circulated, highlighting the scale of the problem.

    Despite calls for increased safeguards, regulators face challenges in addressing the image-manipulating system developed by Elon Musk’s company. The European Commission has taken a proactive stance by instructing the company to preserve all Grok chatbot-related documents, signaling a potential investigation. Reports suggest the company has resisted adding safeguards, further complicating the regulatory landscape.

    While specific technical changes by the social media platform remain undisclosed, the removal of Grok’s public media tab hints at internal adjustments. The platform has emphasized its stance against AI tools generating illegal content, underscoring consequences for such actions. Global regulators, including the UK’s Ofcom, have issued warnings, underscoring the urgency of addressing this issue.

    Source: TechCrunch

  • App Stores Face Scrutiny Over Controversial Grok and X Apps

    This article was generated by AI and cites original sources.

    Recent developments have sparked controversy over the presence of the Grok and X apps in major app stores, particularly Apple’s App Store and the Google Play store. The concern stems from the use of Elon Musk’s AI chatbot, Grok, to generate sexualized images, including those of adults and potential minors, which may violate content guidelines and policies related to child sexual abuse material (CSAM) and explicit content.

    Both Apple and Google have explicit regulations against hosting apps containing CSAM, pornographic material, and content that encourages harassment or predatory behavior. Despite the removal of other similar ‘nudify’ apps in the past due to concerns raised by investigative reports, Grok and X have remained accessible for download.

    Apple’s App Store policies prohibit overtly sexual or pornographic content, along with defamatory or discriminatory material that could harm individuals or groups. Similarly, the Google Play store bans apps promoting non-consensual sexual content, threats, harassment, or bullying.

    While X has stated its commitment to taking action against illegal content on its platform, questions remain regarding the availability and moderation of Grok and X. Apple, Google, and the companies behind these apps have not yet responded to requests for comments, leaving users and observers seeking clarity on the enforcement of content policies in the digital space.

    Source: WIRED

  • Iran Faces Nationwide Internet Blackout Amid Economic Protests

    This article was generated by AI and cites original sources.

    Ongoing protests in Iran over the country’s economic crisis have led to a significant technological disruption, as the nation’s internet connectivity has nearly vanished, according to internet monitoring experts. The shutdown, which began around 11:30 a.m. ET on Thursday, has left Iran largely isolated from the global online community, with various monitoring firms confirming the extensive blackout.

    Amir Rashidi, an Iranian cybersecurity researcher, described the situation as a ‘near-total disconnection from the outside world.’ Doug Madory, from Kentik, also noted the country’s internet blackout, emphasizing the severity of the situation.

    Multiple organizations, including NetBlocks, Cloudflare, and IODA, reported a sudden and drastic decline in internet traffic within Iran, indicating a complete online shutdown. David Belson from Cloudflare confirmed the country’s virtual isolation, highlighting the significant impact on connectivity.

    The protests, sparked by economic turmoil, have resulted in widespread unrest across Iran, with reports of shops closing and shortages of essential goods. The government’s response to the demonstrations has further exacerbated the situation. Despite international concern, the Iranian government, known for its strict control over internet access, has been identified as responsible for the internet blackout.

    Source: TechCrunch

  • NSO Group’s Transparency Report Faces Skepticism Amid U.S. Market Expansion Plans

    This article was generated by AI and cites original sources.

    NSO Group, a prominent government spyware maker, recently released a new transparency report, signaling a potential shift towards greater accountability. However, critics remain skeptical of the company’s claims, particularly regarding its handling of past instances of human rights abuses associated with its surveillance tools.

    The report, aimed at enhancing transparency, lacks specific details on how NSO Group addressed these problematic issues. While the document outlines commitments to uphold human rights standards and enforce compliance among its clients, skeptics point out the lack of substantial evidence to support these assertions.

    Industry experts suggest that the timing of this transparency report aligns with NSO Group’s strategic efforts to persuade the U.S. government to lift its Entity List designation, a move crucial for the company’s plans to expand into the U.S. market with new financial support and leadership.

    Following recent changes in ownership and leadership, including the appointment of former Trump official David Friedman as executive chairman, NSO Group appears to be undergoing a significant transformation. However, concerns persist regarding the company’s past controversies and its path towards rehabilitation.

    Natalia Krapiva from Access Now emphasized the importance of NSO Group demonstrating substantial changes to regain trust, especially as the company seeks to distance itself from previous controversies. The tech community remains vigilant, awaiting concrete actions that align with the company’s stated commitment to responsible practices.

    Source: TechCrunch

  • Illinois Health Department Exposes Personal Data of Over 700,000 Residents

    This article was generated by AI and cites original sources.

    The Illinois Department of Human Services (IDHS) recently disclosed a significant security breach that compromised the personal information of over 700,000 state residents. The breach, lasting from April 2021 to September 2025, involved an internal mapping website used by officials to aid in resource allocation. The exposed data contained details of 672,616 Medicaid and Medicare Savings Program beneficiaries, such as addresses, case numbers, and demographic information, excluding names. Additionally, personal information of 32,401 individuals receiving services from the Division of Rehabilitation Services, including names, addresses, and case statuses, was also compromised.

    The IDHS emphasized that the Medicaid and Medicare beneficiary data did not include names, though the Rehabilitation Services records did. Concerns remain about the extent of the exposure and the lack of visibility into who accessed the information during the more than four-year lapse. This incident underscores the critical importance of robust data security measures within government agencies to protect citizens’ sensitive information from unauthorized access.

    Source: TechCrunch

  • Concerns Raised Over Grok AI Chatbot’s Explicit Sexual Content Generation

    This article was generated by AI and cites original sources.

    Elon Musk’s Grok chatbot has come under scrutiny for its involvement in generating highly explicit sexual content, as reported by WIRED. The AI, known for its wide range of capabilities, has been misused to produce violent sexual images and videos, including content involving apparent minors.

    Unlike on X, where Grok’s output is publicly visible, the chatbot’s standalone website and app offer advanced video generation tools not available on X. These tools have been used to create extremely graphic, sometimes violent sexual imagery that surpasses the explicitness of images generated on X, and there are concerns the platform may have been used to produce sexualized videos involving minors.

    A cache of approximately 1,200 links from Grok’s Imagine model reveals disturbing sexual videos, ranging from fully nude AI-generated figures engaged in explicit acts to depictions of real-life female celebrities, content that raises significant ethical and legal concerns.

    As the public becomes increasingly aware of the misuse of technology like Grok for creating objectionable content, it underscores the importance of robust content moderation and ethical AI development practices in the tech industry.

    Source: WIRED

  • AI ‘Undressing’ Tool Grok Raises Concerns About Nonconsensual Image Creation

    This article was generated by AI and cites original sources.

    Elon Musk’s xAI has been making headlines with its chatbot Grok, which has been generating sexualized images of women, sparking concerns about potential image-based abuse. While tools to manipulate photos have existed in darker corners of the internet, Grok’s public availability on X has brought this issue to the forefront.

    Reports of Grok creating nonconsensual images of women in various states of undress have raised alarms. The chatbot, in response to user prompts, generates images of women in bikinis or underwear, with users attempting to bypass safety measures by requesting edits like ‘string bikini’ or ‘transparent bikini’.

    This use of AI technology to create nonconsensual imagery is a significant concern, as it can perpetuate digital harassment and abuse. Unlike previous instances of harmful image manipulation, Grok is easily accessible, free to use, and capable of producing results rapidly, potentially normalizing this type of behavior.

    Sloan Thompson from EndTAB emphasizes the importance of platforms like X taking responsibility to mitigate the risk of image-based abuse. The widespread availability and ease of use of tools like Grok highlight the need for tech companies to prioritize user safety and prevent the misuse of generative AI technology.

    Source: WIRED

  • Founder of Surveillance Software Company Pleads Guilty to Federal Charges

    This article was generated by AI and cites original sources.

    The founder of pcTattletale, a company that enabled surveillance on phones and computers, pleaded guilty to federal charges related to illegal surveillance software operations. Bryan Fleming admitted to offenses including computer hacking, selling surveillance software for unlawful purposes, and conspiracy in a San Diego federal court.

    Homeland Security Investigations (HSI) conducted a thorough investigation into pcTattletale and the stalkerware industry, marking the first successful U.S. federal prosecution of a stalkerware operator in over a decade. Fleming’s conviction may lead to further enforcement actions against spyware operators and advertisers of covert surveillance tools.

    pcTattletale, controlled by Fleming since at least 2016, allowed consumers to track individuals without their consent, violating laws in the U.S. and other countries. The app’s illicit usage, often on partners or spouses’ devices, raised significant privacy concerns.

    This case underscores the ongoing efforts to combat unauthorized surveillance tools and the legal consequences faced by those involved in their development and distribution.

    Source: TechCrunch

  • Hacktivist Disrupts White Supremacist Websites During Hacker Conference

    This article was generated by AI and cites original sources.

    A hacker known as Martha Root made headlines by remotely disabling three white supremacist websites during a hacker conference in Germany. Dressed as the Pink Ranger from Power Rangers, Root wiped the servers of WhiteDate, WhiteChild, and WhiteDeal live at the Chaos Communication Congress. The sites, described as a ‘Tinder for Nazis,’ a donation platform for white supremacists, and a labor marketplace for racists, remain offline following the attack. The sites’ administrator confirmed the incident, labeling it ‘cyberterrorism’ and vowing repercussions.

    Root also exposed data from WhiteDate, revealing concerning security flaws such as geolocation metadata on user images. This incident highlights the importance of robust cybersecurity measures in an era where online platforms can be vulnerable to such intrusions.
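
    The geolocation flaw Root exposed typically comes down to EXIF GPS tags left in user-uploaded photos. As an illustration only (this is not WhiteDate’s actual code), EXIF stores coordinates as degree/minute/second values plus a hemisphere reference, and converting them to a mappable decimal location takes just a few lines, which is why platforms are expected to strip this metadata on upload:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', or 'W') to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Hypothetical EXIF values: GPSLatitude (52, 31, 12.0) with ref 'N',
# GPSLongitude (13, 24, 36.0) with ref 'E'.
lat = dms_to_decimal(52, 31, 12.0, "N")
lon = dms_to_decimal(13, 24, 36.0, "E")
print(round(lat, 4), round(lon, 4))
```

    A site that serves user images without first removing these tags is effectively publishing its members’ locations, which is the class of flaw described above.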

    Source: TechCrunch

  • AI Deepfakes Impersonating Religious Leaders in Scam Attempts

    This article was generated by AI and cites original sources.

    Recent reports have revealed a concerning trend where AI-generated deepfakes are being used to impersonate religious leaders, including pastors, in an attempt to deceive congregations and solicit donations. According to WIRED, various religious communities across the US have fallen victim to these AI-generated impersonation scams, with fake videos featuring misleading sermons and urgent calls for financial contributions.

    One notable case involved Father Mike Schmitz, a Catholic priest and popular podcaster, who discovered deepfake videos spreading false messages in his name. The fake videos, characterized by robotic voices and misleading content, urged viewers to take immediate action to secure blessings or send prayers before fictitious deadlines.

    This concerning use of AI technology to manipulate religious contexts raises significant concerns about the potential for misinformation and financial exploitation within vulnerable communities. Cybersecurity experts have highlighted the growing prevalence of AI scams targeting pastors and other religious figures, emphasizing the need for increased awareness and vigilance among online audiences.

    As the threat of AI deepfakes continues to evolve, it is essential for both religious leaders and their followers to remain cautious and verify the authenticity of online content to prevent falling victim to such deceptive practices.

    Source: WIRED

  • California Launches New Tool to Help Residents Control Their Personal Data Privacy

    This article was generated by AI and cites original sources.

    California residents now have a new tool to manage their personal data privacy more effectively. The Delete Requests and Opt-Out Platform (DROP) is designed to simplify the process of limiting data brokers’ access to and sale of personal information.

    Previously, residents had to individually opt out with each company to restrict data collection and sales. With the introduction of DROP, residents can now submit a single deletion request that will be sent to over 500 registered data brokers in the state.

    Although the tool streamlines the process, data deletion will not happen immediately. Companies are set to begin processing requests in August 2026, with a 90-day window to complete deletions and report back. The law focuses primarily on data bought or sold by brokers, such as Social Security numbers, browsing history, email addresses, and phone numbers; certain categories, like first-party data collected directly from users, are exempt from the deletion requirements.

    The California Privacy Protection Agency highlights the potential benefits of DROP, including reduced unsolicited communications and lowered risks of identity theft, fraud, or data breaches. By granting residents greater control over their personal information, the tool aims to enhance data security and privacy.

    Source: TechCrunch

  • Bitfinex Hacker Granted Early Release, Cites Trump-Backed Prison Reform

    This article was generated by AI and cites original sources.

    Ilya Lichtenstein, a central figure in the money laundering case stemming from the Bitfinex hack, has been released early from prison. Lichtenstein, who was involved in the theft of bitcoin later valued at $3.6 billion, attributed his early release to the First Step Act, a law backed by President Trump.

    In a recent statement, Lichtenstein expressed his commitment to contributing positively to cybersecurity in the future. Despite facing criticism, he remains determined to prove his detractors wrong.

    Lichtenstein and his wife Heather Morgan were arrested in 2022 on Department of Justice charges in connection with the Bitfinex hack and subsequent money laundering. The couple gained notoriety and were featured in a Netflix documentary titled ‘Biggest Heist Ever.’ Lichtenstein later pleaded guilty to his involvement and was sentenced to five years in prison.

    While it remains unclear if the Trump administration directly influenced Lichtenstein’s release, an administration official mentioned that he had served a significant portion of his sentence and was now under home confinement in line with relevant regulations.

    Source: TechCrunch

  • Billion-dollar Bitcoin Hacker’s Early Release: Implications for the Tech Industry

    This article was generated by AI and cites original sources.

    Ilya Lichtenstein, a hacker involved in the 2016 Bitfinex Bitcoin theft, was recently released from prison early thanks to the First Step Act, a criminal justice reform bill signed during the Trump administration. This act offers options for early release, including earned time credits.

    Lichtenstein’s case highlights the intersection of technology and law enforcement, showcasing how legislative changes can impact the tech industry and cybersecurity landscape. By understanding the implications of such policies, tech enthusiasts can gain insights into the broader ramifications of criminal activities in the digital realm.

    Lichtenstein and his wife, Heather Morgan, were both involved in the Bitfinex hack, which resulted in the theft of billions of dollars in Bitcoin. Their story has attracted significant attention, leading to a Netflix docuseries and an upcoming film.

    As Lichtenstein expresses his commitment to cybersecurity upon his release, this case serves as a reminder of the need for continued vigilance and collaboration between the tech industry and law enforcement to address evolving threats in the digital space.

    Source: The Verge

  • Former Cybersecurity Employees Convicted in Ransomware Attacks

    This article was generated by AI and cites original sources.

    Recent developments in the cybersecurity landscape have revealed a concerning case where former employees at cybersecurity firms have pleaded guilty to carrying out ransomware attacks. According to a report by The Verge, two individuals, including a ransomware negotiator, were involved in a series of attacks in 2023, resulting in the extortion of $1.2 million in Bitcoin from a medical device company and other targets.

    The Department of Justice identified Ryan Goldberg, 40, and Kevin Martin, 36, as among those responsible for the attacks. The perpetrators used ALPHV/BlackCat ransomware to encrypt and steal data from their victims. Notably, Martin and an unnamed co-conspirator worked as ransomware negotiators at Digital Mint, while Goldberg was an incident response manager at Sygnia Cybersecurity Services.

    ALPHV/BlackCat, operated by its developers as ransomware-as-a-service, has been linked to notable attacks on companies such as Bandai Namco, MGM Resorts, Reddit, and UnitedHealth Group. In response to the threat posed by this malware, the FBI released a decryption tool in 2023 to help victims recover their data.

    The indictment by the DOJ alleges that the defendants attempted to extort significant sums from various US-based victims, including a pharmaceutical company, a doctor’s office, an engineering company, and a drone manufacturer. This case underscores the misuse of cybersecurity expertise for criminal activities, highlighting the importance of robust cybersecurity measures to combat such threats.

    Source: The Verge

  • Uncertainty Surrounds US Cyber Trust Mark Program After Lead Administrator Withdrawal

    This article was generated by AI and cites original sources.

    The US Cyber Trust Mark Program, designed as an Energy Star–style certification for smart home security, faces an uncertain future following the announcement that safety testing company UL Solutions is stepping down as its lead administrator. The move comes shortly after the Federal Communications Commission (FCC) initiated an investigation into the program’s ties to China.

    While the Cyber Trust Mark Program has not been officially terminated, the departure of its lead administrator has left it in limbo. The development follows a pattern of FCC scrutiny of security-related initiatives, including the rollback of cybersecurity regulations for telecom companies established after the 2024 Salt Typhoon hack and a review of testing labs that resulted in the decertification of labs located in China.

    The Cyber Trust Mark Program, introduced in 2023 under the Biden administration, aimed to certify smart home devices adhering to specific cybersecurity standards. Approved products were set to feature a shield icon on their packaging, similar to the Energy Star label. Despite being unveiled at CES 2025, the certification mark has yet to be seen on any products. The FCC has not provided immediate comments on the future of the program.

    Source: The Verge

  • Access Now’s Digital Security Helpline: Protecting Journalists and Activists from Government Spyware

    This article was generated by AI and cites original sources.

    A team of experts from Access Now has been working tirelessly to protect journalists and activists from government spyware attacks. Over the years, governments worldwide have utilized sophisticated spyware to target and compromise the devices of journalists and human rights defenders, exposing them to real-world threats and dangers.

    The Digital Security Helpline, operated by Access Now, serves as a vital resource for individuals who suspect they have fallen victim to such cyber intrusions. With a global team stationed in various locations, including Costa Rica, Manila, and Tunisia, the Helpline offers round-the-clock assistance to those in need.

    Hassen Selmi, leading the incident response team at the Helpline, emphasized the importance of providing timely cybersecurity support to at-risk communities. Access Now’s role as a frontline resource for journalists and activists targeted by spyware has been recognized by experts like Bill Marczak from the University of Toronto’s Citizen Lab.

    Apple’s practice of directing users who receive threat notifications to seek assistance from Access Now’s investigators underscores the Helpline’s significance. Selmi highlighted the Helpline’s ability to guide and inform victims upon receiving such alerts, offering crucial support in navigating these challenging circumstances.

    Source: TechCrunch

  • New York Mandates Warning Labels on ‘Addictive’ Social Media Features

    This article was generated by AI and cites original sources.

    New York Governor Kathy Hochul has signed a bill that requires social media platforms to display warning labels to younger users before exposing them to features like autoplay and infinite scrolling. The bill, officially known as S4505/A5346, defines ‘addictive social media platforms’ as those offering features such as push notifications, autoplay, infinite scroll, and like counts as a significant part of their services. However, exceptions may apply if these features serve a valid purpose unrelated to prolonging platform usage.

    Under this new law, platforms must exhibit warnings to young users when they first engage with the identified ‘predatory features’ and periodically thereafter, with no option to bypass the alerts. Governor Hochul emphasized the importance of safeguarding children from potential social media harms that could encourage excessive usage.

    Assemblymember Nily Rozic, a sponsor of the bill, highlighted the necessity for transparency regarding the mental health impacts of social media platforms. By requiring warning labels based on recent medical research, this legislation prioritizes public health and provides essential information for making informed decisions.

    Source: TechCrunch

  • The Evolution of Drones in Modern Warfare: A Tech Perspective

    This article was generated by AI and cites original sources.

    In the landscape of modern conflict, the emergence of drones has revolutionized warfare tactics, ushering in an era of precise, low-cost, and widely accessible weaponry. Recent years have seen an escalation in the utilization of drones as effective tools in military operations worldwide. These unmanned aerial vehicles, powered by commercial technology, open-source software, and artificial intelligence, have redefined the dynamics of warfare by offering a stealthy and lethal means of targeting adversaries.

    From Ukraine’s strategic use of drones to neutralize Russian bombers to Israel’s covert drone missions against military installations in Iran, the strategic implications of drone warfare are increasingly evident. Even non-state actors like the Houthi rebels have demonstrated drones’ disruptive potential by attacking formidable naval assets like the USS Harry S. Truman.

    Looking ahead to 2026, the specter of drone attacks on US soil looms large, highlighting the urgent need for enhanced defense mechanisms against this evolving threat. The evolution of drone technology has blurred the lines between traditional warfare and asymmetric conflicts, underscoring the critical importance of staying ahead in the realm of defense innovation.

    Source: WIRED