Fake explicit photos of singer Taylor Swift generated by AI went viral on social media platforms, alarming the White House. Because the images could not be removed quickly enough, the social platform X temporarily blocked searches related to Swift.


A large number of "indecent photos" featuring Taylor Swift have appeared on foreign social platforms. Source: Internet screenshot

The well-known American singer Taylor Swift has once again become a victim of AI deepfake technology.

Recently, fake explicit photos of Swift generated by AI went viral on social media platforms. Views soared to tens of millions in a short period of time, alarming the White House. White House press secretary Karine Jean-Pierre warned that the spread of AI-generated photos was "concerning" and urged social media companies to prevent the spread of such misinformation. Because the harmful images could not be removed quickly enough, the social platform X (formerly Twitter) temporarily blocked searches related to Swift.

How can ordinary people protect their rights if AI-generated fake explicit photos of them are circulated? What regulatory responsibilities do social platforms bear? You Yunting, a senior partner at Shanghai Dabang Law Firm, explained that under Chinese law, if someone distributes such photos, the victim can sue for reputation infringement and portrait infringement. If the photos are relatively explicit, the victim can also accuse the poster of distributing obscene materials, and the public security organs will investigate and pursue criminal action against the distributor.

AI deepfake technology has repeatedly targeted Swift

On January 25, a fake explicit image of Swift shared by an X user was viewed 47 million times before the account was suspended. Despite X's efforts to delete the relevant indecent photos, the content was still shared across multiple platforms and spread widely.

Subsequently, Swift's fans quickly took notice and published a large number of "Protect Taylor Swift" posts in an attempt to drown out the explicit pictures.

According to a report in the British "Daily Mail" on January 25, a source close to Swift said she was "angry" about the fake images circulating online and was considering taking legal action against the deepfake porn sites spreading them. In response, X issued a statement on January 26 saying that posting non-consensual explicit content on the platform is strictly prohibited, that it has a zero-tolerance policy toward such content, and that it would take appropriate action against accounts posting these images.

This incident attracted the attention of the White House. On January 26, White House press secretary Jean-Pierre said: "Lack of law enforcement on the Internet has a greater impact on women, who are the main targets of online harassment and bullying." She added that legislation should be introduced to address the misuse of AI technology on social media, and that platforms should take steps to ban such content on their websites.

As one of the main platforms on which the fake photos of Swift spread, X lifted the block on the evening of January 29, after blocking searches related to her for several consecutive days. Joe Benarroch, director of operations at X, described the search block as a temporary measure. This was not the first time Swift had fallen victim to an AI deepfake: her voice had previously been synthesized into a false advertisement using deepfake technology. In the ad, Swift's cloned voice tells her fans that she is "so excited" to be giving away free cookware sets. Victims directed to the fake website are asked to pay a shipping fee of US$9.96, but the supposedly free kitchenware is never actually delivered.

"Deepfake" is a portmanteau of "deep learning" and "fake." It refers to technology that uses deep learning to generate artificial images, audio or video. Whether cloning voices or synthesizing images, deepfakes have demonstrated a powerful ability to create false content.

After the Swift AI explicit photo incident, on January 29, Microsoft announced that it would introduce more "guardrails" to its AI image generation product Designer to avoid generating explicit images of celebrities without consent.

On January 30, the well-known brokerage company WME announced a partnership with the technology company Vermillio, hoping to protect its clients' likenesses from the abuse of artificial intelligence technology. Vermillio has created a platform called Trace ID that uses artificial intelligence to track images, protecting WME clients' likenesses and intellectual property from theft. The partnership will also seek to leverage the technology to enable clients to monetize their likeness and image.

How should ordinary women protect their rights if they become victims of deepfakes?

Public figures are not the only victims of AI forgery technology; ordinary women have had similar experiences.

According to a CNN report on November 4, 2023, a 14-year-old New Jersey high school student said that photos of her and more than 30 other female classmates had been doctored and shared online, and she called for federal legislation to address explicit images generated by artificial intelligence.

Since 2023, there have been many cases in China of fraud using AI deepfakes and of AI being used to fabricate pornographic rumors, which are difficult to guard against. How should ordinary people protect their rights if they encounter such incidents?

You Yunting, senior partner of Shanghai Dabang Law Firm, suggested that under Chinese law, if someone distributes such photos, the victim can, first, sue in court for reputation infringement and portrait infringement. Second, if the photos are relatively explicit, the victim can also report the publisher for the crime of distributing obscene materials or the crime of insult, and the public security organs will investigate and pursue criminal action. The public security organs can open a case for investigation on their own authority and, if a crime is suspected, transfer the suspect to the procuratorate for prosecution. The victim may also file a private criminal prosecution for the crime of insult.

You Yunting said that social media platforms should take their responsibilities more seriously, promptly review infringing content on the platform, and block relevant keywords to stop the spread of infringing content. In addition, if the content is generated using artificial intelligence technology from a company subject to domestic legal supervision, the company providing the technology should also bear responsibility for review.

Lawyer Wei Yiwei of Beijing Deheheng (Shanghai) Law Firm believes that if a similar incident occurred in China, the victim could take the following steps to safeguard his or her rights. First, the victim should collect and preserve evidence of the infringement, using evidence-collection tools that generate "timestamps" to fix the facts of the infringement and prevent the infringer from destroying evidence. Second, once evidence collection is complete, a lawyer's letter can be sent to the companies and website operators that publish the AI-generated material, demanding that they remove the relevant AI-generated content, disconnect links, and take other such measures, and that they publicly apologize, compensate for losses, and eliminate the adverse impact on the victim. Finally, if they refuse to make corrections, the victim can further safeguard his or her rights through administrative complaints and reports, a reputation-infringement lawsuit, or criminal charges based on the existing evidence.
