Artificial intelligence (AI) has taken the world by storm.
And it looks like it’s here to stay. It has also ignited a growing debate around law, ethics, and creativity. Some fear it is pushing us toward the dystopian reality we’ve read about so fearfully in works such as Nineteen Eighty-Four and Brave New World. Or perhaps we’re getting ahead of ourselves.
Can AI be used safely without “overthrowing the humans”? Read on as lawyers walk us through the issue in some detail.
The panel: “War of the Worlds: Human Creativity vs. Machine”
With the advent of AI, and the rising popularity of tools such as ChatGPT, there’s much to be said about the security of artists’ intellectual property. But how do these types of AI software work to generate artistic works?
OpenAI’s Che Chang explained at a San Francisco conference:
“It seems very esoteric and magical, but it’s essentially like a large-scale statistical, mathematical analysis. They look at a bunch of existing content of whatever form you want to take, text or images or whatever, they analyze the content [and] they learn how to predict.”
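The “large-scale statistical analysis” Chang describes can be illustrated with a deliberately tiny sketch: a word-frequency model that predicts the next word from counts alone. This is not how OpenAI’s models actually work — production systems use neural networks trained on billions of examples — but the core idea of learning to predict from existing content is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a small corpus,
# then predict the most frequently observed successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice, "mat" only once
```

The same principle, scaled up enormously and applied to images as well as text, is what lets generative tools produce content that resembles the works they were trained on — which is exactly where the copyright questions below begin.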
However, whether these works produced by the platform are copyrightable is still up for debate.
Victoria Walker, a trademark attorney at Trademarkia, a global IP firm, had this to say:
“Works produced by AI are not protectable under current copyright laws. Copyright law only protects the fruits of intellectual labor that are founded on the creative powers of the mind. This means human creation. Businesses should use caution when creating business assets with the assistance of ChatGPT or similar AI services.
You never want to create a valuable asset only to find out it’s not protectable. AI is moving at a rapid pace and legislative updates take time. We may have to wait a long time to see any changes to current copyright laws. However, a change to allow AI to generate works to be copyrightable may never come as human creation is one of the cornerstone principles in copyright law.”
Even OpenAI has expressed concerns about its large language model’s ability to replicate artists’ works.
Chang further commented: “We definitely do not want to replace the role of artists — that’s not the goal of this. The goal of this is to help people create. One thing that we will be announcing in the near future is that we’re not going to allow generation in the style of living artists, even though it’s not legally necessary.”
Another of Trademarkia’s IP attorneys, Antoine Wade, had this to say about a recent case that showed how AI-produced works may not fall under the fair use doctrine:
“The Supreme Court’s analysis of Fair Use in connection with AI-generated works does not merely take into account similarities and differences between original and derivative works, but also commercialization and the channels in which the works are used.
In the Warhol case, the Court found that the copyrighted work at issue was used for a commercial purpose that upended a Fair Use defense. Since the derivative work was licensed to a magazine, the Court determined that the transaction served a commercial purpose. However, the Court’s Fair Use analysis may have been swayed in the opposite direction if the derivative work had been used for a non-commercial purpose, such as being displayed at a museum for educational purposes.”
A courtroom mishap
Concerns about AI aren’t limited to artists.
ChatGPT has developed a reputation for hallucinations. And for one lawyer from the law offices of Levidow and Oberman, this “neurosis” has been career-damaging.
Steven Schwartz submitted a brief to a federal court that included case references. Nothing sounds unusual so far? Well, the cases had been fabricated by the AI tool.
The court initially accused the attorney of fabricating the cases himself, but upon realizing his error, the attorney admitted how the mistake had happened.
The case highlights the dangers of blindly trusting technology. But how can attorneys use AI tools responsibly? Or should they use them at all? Trademarkia Attorney Diana Ekins had this to say:
“The use of AI by an attorney can be complicated. Attorneys have ethical responsibilities in both competent representation and confidentiality and use of their tools must comply with these obligations. Just like when researching case law at a library, an attorney must research if the case law is still good law.
The attorney in this case failed to do this requisite due diligence. Additionally, disclosure of confidential information to an AI bot may also have severe consequences, so attorneys should be knowledgeable about the tools they are using and be cautious of the information they use to create prompts. Overall, AI tools can help an attorney with efficiency, but there should be extra expended effort in ensuring accuracy and compliance with ethical obligations.”
Fundamentally it’s crucial to realize that these tools aren’t perfect, and everything they say shouldn’t be taken as law. This is new technology, and we still have a long way to go before it’s free from error.
OpenAI sued for defamation
In a first-of-its-kind lawsuit, OpenAI is being sued over ChatGPT’s “false allegation” about a radio host from Atlanta. A reporter from Georgia, preparing a write-up of a federal court case, asked ChatGPT to summarize it. That was when the AI fabricated allegations and inserted them into the summary.
And the accusations weren’t mild. ChatGPT accused Mark Walters of embezzling five million dollars.
But here’s the interesting part: the journalist never actually published the information. So how the radio host found out about this is unclear. Nevertheless, Walters is requesting general and punitive damages and legal fees.
What’s also unclear is whether OpenAI should be held liable for this. Attorney Diana Ekins commented:
“Many states require showing of actual malice for a defamation suit, which means the statement was said with knowledge of its falsity. In absence of malice, states also allow a defamation suit based on negligence. As a language model tool, it is hard to say whether OpenAI has this requisite intent, or if it has any intent at all in the statements it makes.”
We also decided to go directly to the source, ChatGPT, which had the following to say in its defense:
Question: “Can OPENAI be held liable for false accusations you make?”
ChatGPT: “In general, the liability for false accusations or any other content generated by an AI language model like myself would depend on the legal jurisdiction and the specific circumstances of the situation.
It’s important to understand that while OpenAI strives to provide accurate and reliable information, they cannot guarantee the absolute correctness or validity of the content generated by AI models like me. OpenAI encourages users to critically evaluate and verify the information obtained from AI systems and to take responsibility for their usage of that information.”
Getty Images vs. Stability AI: a debate over source material for training AI
Stock photo provider Getty Images has sued artificial intelligence company Stability AI Inc. in both the UK and the US, accusing it of illegally using over 12 million copyrighted photos to train its popular Stable Diffusion AI image-generation system.
Amid all the debate about copyright over the output created by AI, this case focuses on the source. Getty Images has alleged copyright infringement as the images that Stability AI “scraped” from the internet and used to train its AI technology were not legally licensed. The outcome of this case will likely set a defining precedent for what material AI companies can use to train their programs.
Attorney Diana Ekins shares her opinion on the case:
“Copyright laws play a crucial role in safeguarding the rights of creators and incentivizing innovation. However, they are not designed to stifle creativity. Artists frequently learn from and reference other artists in their works including recreating works they found online.
Similarly, AI generators pull art from everywhere on the internet that it’s freely available to view. The issue seems to be that freely available to view is not the same as freely available to use and when used to create art in large quantities, it could undermine the economic value and exclusivity associated with original works.”
All in good time
As time passes, precedents are set, and AI develops further, we’ll have a clearer idea of where we stand with respect to AI tools, their appropriate use, and their legal status. What we do know is that the future we once only wondered about in novels is happening today.
