
Highlights:

1. Data breach: Anthropic founders' AI startup admitted to a data breach after an FTC investigation; the breach involved non-sensitive information, including customer names and credit balances, through 2023.

2. FTC investigation: The FTC investigation into the $18 billion Anthropic, which covers a $4 billion deal with Amazon and transactions with other companies, has deepened concerns about the security of large language models.

3. Human error: The leak was confirmed to be the result of human error rather than a problem with the company's internal AI systems. Affected customers have been notified, but the company declined to comment on the FTC investigation.

Webmaster Home (chinaz.com), January 29 news: Anthropic, the $18 billion AI startup founded by Dario and Daniela Amodei, confirmed a data breach in 2024 following an FTC (U.S. Federal Trade Commission) investigation. The breach involved a subset of non-sensitive information, including customer names and credit balances, through 2023.


Anthropic received $4 billion from Amazon in September 2023, and that transaction is expected to become one of the focuses of the FTC investigation. The FTC plans to examine in detail similar investments between Anthropic and companies such as Alphabet, Microsoft, and OpenAI, seeking information about the agreements, their strategic rationale, and the practical impact of these collaborations, including decisions on product launches, governance, and oversight rights.

Anthropic stated that the data breach was caused by human error on the part of a contractor rather than a problem with the company's internal AI systems. The company has notified affected customers and provided them with relevant guidance. Notably, the company emphasized that the leak had nothing to do with the FTC's investigation and declined to comment on it.

The leak has further deepened enterprises' concerns about deploying third-party large language models, with Anthropic's Claude at the center of this incident. The startup, founded in 2021, has pledged to make Claude available to "researchers focused on safety and social impact" through a special research access program, as it did in 2022 and, in some cases, before Claude's commercialization, in line with Anthropic's public-benefit mission.
