
AI, Taylor Swift and the election

Attorney Julia Brotman of the Settle & Meyer law firm joins producer/host Coralie Chun Matayoshi to discuss why AI poses a threat to elections, the responsibility of social media platforms to detect and remove disinformation, governmental efforts to stop election interference by foreign countries, the rights of celebrities to stop deepfakes falsely depicting their support of a candidate, the use of copyrighted songs at political rallies, and recent Hawaii legislation to keep deepfake messaging out of Hawaii elections.

Q.  In the old days, people started smear campaigns through gossip; then social media made it possible for anyone to post anything about anybody.  Now AI can be used to generate and distribute sophisticated disinformation, photos, and videos for the world to believe.  Why does AI pose a greater threat to voters who are trying to decide what to believe?

Our society has been dealing with misinformation on the internet for years.  That has made some of us more cautious and discerning, and less likely to believe something just because we saw it online.  However, the AI tools available now are advancing at a remarkable pace, which means the content they produce looks more and more authentic and believable.  Additionally, the algorithms that drive our social media feeds are incorporating AI tools in ways that exacerbate echo chambers and skew our ability to distinguish fact from fiction.  It’s important for us all to recognize that different demographics are being fed very different news feeds as a result of these algorithms, and that is a big reason why different groups of people now seem to believe very different sets of basic facts.

Voters, social media platforms, and news outlets all have a greater responsibility now to monitor and vet content before allowing it to spread.  However, in recent years social media companies have reportedly cut the teams that work on monitoring and stopping the spread of misinformation on their platforms.  Now they rely even more heavily on user-to-user reporting, which can be unreliable and allows a lot of misinformation and misconduct to fall through the cracks.

On an individual level, it’s exhausting and demoralizing for voters to constantly have to assess whether something seen online is real and true.  Some of us are just throwing up our hands and paying less attention to what’s going on, because we don’t know what to believe.  That’s bad for the democratic process.  We all know friends or relatives who share content that is clearly doctored or fake without realizing it, and with AI, these fakes are only getting better and more believable.

Q.  Don’t social media platforms have a responsibility to detect and remove disinformation from their site?

This is a huge topic of debate right now.  Many would argue that yes, social media platforms have a moral responsibility to detect and remove disinformation from their sites.  However, the platforms want to limit that responsibility as much as possible.  With millions of posts to monitor, there is no way they can stop all disinformation from spreading, and because they fear liability for missing something, they have successfully advocated for maintaining legal protections that take the question out of their hands.  You may have heard of “Section 230” of the Communications Decency Act of 1996.  In general, this law protects internet platforms from liability for user-generated content: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  47 U.S.C. § 230(c)(1).  Platforms are still required to remove certain material that violates the law, including material that violates sex trafficking laws and material that constitutes copyright infringement.  Under current law, they are not obligated to detect and remove disinformation, but they MAY engage in such policing if they choose to.  Section 230(c)(2) protects platforms from civil liability for removing or restricting content that they deem to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

In this regard, it’s always important to remember that the First Amendment’s guarantee of free speech only applies to government action.  So, for example, Meta could implement a policy saying that it does not allow deepfakes to be posted on Facebook and that violators’ accounts will be deleted.  If you posted a video that was flagged as a deepfake and your account was deleted under such a policy, your First Amendment rights would not have been violated.  Meta is a private company; it can implement and enforce content policies, and that is not a First Amendment violation.  Section 230(c)(2) further supports platforms’ ability to establish and enforce content policies if they so choose.

For more information go to:  https://www.khon2.com/whats-the-law/should-social-media-platforms-be-liable-for-content-they-post/

Q.  Are there any government efforts to stop election interference from foreign countries?  If disinformation from a foreign source was detected, would the government have a right to require a social media platform to take it down?

The Department of Justice (DOJ) has been actively monitoring, investigating, and pursuing foreign actors, primarily in Russia, who run misinformation campaigns through U.S. social media.  The DOJ also set up an election threats task force in 2021.  One Russian group in particular has reportedly been behind a number of disinformation campaigns, using AI and creating fake social media accounts that impersonate Americans in order to spread disinformation.  Last month, National Public Radio reported that this group hired a U.S. content creation company to pay American social media influencers to distribute content online.  Their reach has been significant, with a reported 2,000 videos and 16 million views on YouTube.  These foreign groups are likely violating the Foreign Agents Registration Act, which prohibits foreign actors from engaging in political activities in the U.S. without registration and certain disclosures.  For example, informational materials transmitted in the U.S. in the interest of a foreign actor must be conspicuously labeled as such.  [https://www.npr.org/2024/09/04/nx-s1-5100329/us-russia-election-interference-bots-2024]

Q.  Celebrity deepfakes are being created, like the poster of Taylor Swift appearing to ask people to vote for Donald Trump.  Do people like Taylor Swift have any legal recourse to stop this?

Yes, but the protection differs from state to state.  The “right of publicity” is an intellectual property right that protects your name, image, and likeness.  There is currently no federal protection, but many states, including Hawaii, recognize the right of publicity, and because it is a state-level protection, the laws vary.  Hawaii has one of the strongest right of publicity laws: in short, someone cannot use your name, image, or likeness for commercial purposes without permission, and the right exists during your lifetime and for 70 years after your death.  In some states, the right exists only while you are alive; in others, it applies only to public figures.  In Hawaii, it applies to everyone.  Depending on the facts, there may also be a claim for false endorsement under Section 43(a) of the federal Lanham Act and/or corresponding state laws.

For more information go to: https://www.khon2.com/whats-the-law/ai-and-publicity-rights-who-owns-your-face/

Q.  Elon Musk posted an AI-generated image on his social media platform X depicting Vice President Kamala Harris wearing a communist uniform, captioned with the false assertion that “Kamala vows to be a communist dictator on day one.  Can you believe she wears that outfit!?”  Is there any legal action that can be taken to stop this?

Yes.  I would look at the right of publicity, defamation, election interference laws, and the like.  There are First Amendment rights at play here too; as a public figure and political candidate, Vice President Harris is going to be the subject of criticism and public debate with respect to her policies and positions.  If this were a political cartoon with the same caption, we probably wouldn’t be having this conversation, but an AI image is different because of how realistic it is.  Some will argue that it is “clearly AI” or doesn’t look enough like Vice President Harris to actually confuse anyone as to whether the photo was real.  But the fact that this became such a big news story indicates that there may have been some confusion here, and that’s incredibly dangerous.

Q.  What if posts went beyond politics and included content of a sexual nature?  For example, a vocal critic of Bangladesh’s ruling party was falsely depicted wearing a bikini in an AI-created video.

It would depend on the facts, but there could be a few legal options, as discussed above.  Civil and criminal liability for harassment and other privacy-related issues could be at play.  The U.S. has not yet enacted federal legislation to specifically address deepfakes, although some bills have been introduced at the federal level.  Several states have already taken action.  Some state laws, like those in Texas, New Mexico, Oregon, and Indiana, specifically address the use of deepfakes in the context of elections.  Others, like those in Florida, Louisiana, South Dakota, and Washington, prohibit the use of deepfakes in connection with sexual content, often specifically sexual content involving or depicting minors.  [https://www.thomsonreuters.com/en-us/posts/government/deepfakes-federal-state-regulation/]

Q.  Bills were introduced in the Hawaii State Legislature this past session to keep AI deepfake messaging out of Hawaii elections.  Did any of them pass?

House Bill 1766 would have required disclosure of election materials that are “deceptive and fraudulent deepfakes of a candidate or party.”  Similar legislation was enacted in California, Michigan, Minnesota, Texas, Washington, and Wisconsin to prevent the spread of misinformation via political deepfakes.  The bill did not explicitly prohibit the use of AI or deepfakes, but it mandated strict disclosure of “any form of media that has been altered and manipulated to misrepresent someone, typically in a way that shows the person saying something that was never said.”  This bill did NOT pass, but a similar one did.  Senate Bill 2687 was enacted as Act 191 and became effective on July 3, 2024.  Act 191 has three main components: 1) banning people from “recklessly distributing, or entering into an agreement with another person to distribute, materially deceptive media;” 2) establishing criminal penalties for distributing materially deceptive media; and 3) establishing civil liability for the distribution of materially deceptive media and providing remedies for those harmed by such distribution.

Q.  What about the use of a celebrity’s music at campaign rallies?  Vice President Kamala Harris was careful to ask Beyoncé for permission to use her song “Freedom” at the Democratic National Convention, whereas former President Trump’s campaign has used music by Celine Dion, Bruce Springsteen, Foo Fighters, and Prince without their permission.  Do celebrities have a right to sue for copyright infringement?

As long as the campaign has a public performance license, copyright infringement is difficult to establish based solely on the playing of a song, and in most cases the rally venue or the campaign itself obtains blanket licenses that cover virtually any song the campaign might want to play.  Circulating videos that include the songs is a different type of use, known as synchronization, which requires a separate license obtained directly from the copyright owner.  Additionally, false endorsement and/or right of publicity claims may be available in some states, depending on the facts.  There are rare cases where a blanket public performance license does not cover the song at issue.  The estate of Isaac Hayes recently won a preliminary injunction against the Trump campaign, prohibiting the campaign from playing Hayes’ song “Hold On, I’m Coming” at campaign events pending resolution of the estate’s copyright infringement lawsuit.  At issue will be whether the Trump campaign’s use was covered by a blanket public performance license; if so, it will be difficult for the Hayes estate to establish copyright infringement.  [https://www.cbsnews.com/news/trump-isaac-hayes-hold-on-im-coming-lawsuit/; https://www.cnn.com/2024/08/28/entertainment/beyonce-celine-dion-foo-fighters-trump-campaign/index.html]

To learn more about this subject, tune into this video podcast.

Disclaimer:  This material is intended for informational purposes only and does not constitute legal advice.  The law varies by jurisdiction and is constantly changing.  For legal advice, you should consult a lawyer who can apply the appropriate law to the facts of your case.

