Not that long ago, it seemed absurd that machines could replace writers and editors.
No more.
On Nov. 30, 2022, a computer program called ChatGPT made its debut and rapidly attracted reviews proclaiming it a game changer for artificial intelligence (AI). ChatGPT is a "chatbot," a program that simulates human conversation by applying artificial intelligence to text or voice commands.
Chatbots have existed for years, and most people experience them when they seek customer service assistance from a company by phoning it or visiting its website. Thousands of companies use AI chatbots equipped to recognize certain words and answer questions. Also, millions of people are familiar with the chatbot Siri, the voice assistant on iPhones.
More fully developed text-writing chatbots emerged in the last few years, but their abilities were narrow. Some could produce acceptable marketing copy, for instance, but failed when asked to produce other types of text.
ChatGPT, built by the San Francisco research lab OpenAI, has no such limitations. Its reach seems boundless. It can produce solid, well-researched articles. It can write short stories. Even poetry. But could the content it creates also get you into legal trouble?
A Brave New World?
When OpenAI released ChatGPT for public testing on Nov. 30, one million people signed up within a few days.
Everyone, or so it seemed, marveled at ChatGPT's skill at producing credible text of all kinds. So convincing were its abilities that predictions began to emerge about what its impact would be:
- The college essay will soon be extinct.
- Journalism is endangered.
- It will undermine trust in information systems.
Nobody really knows, of course. But plenty of people and industries are contemplating how to put ChatGPT (and the fancier, more expensive chatbots reportedly just around the corner) to use.
Lawyers Are Sizing It Up
Lawyers, to name one group, are giving AI text generation some attention. After all, they generate a lot of text — and much of it is mechanical in nature. For instance, could a machine do document preparation work just as effectively as a junior associate?
Lawyers are testing ChatGPT and finding that, at the very least, it can be a capable assistant. Attorney Omer Tene asked ChatGPT to draft a privacy policy for a grocery shopping app. It wrote "a really nice one," he reported.
On Dec. 5, Suffolk University law professor Andrew Perlman published a 14-page mock U.S. Supreme Court brief that ChatGPT created in one hour based on his prompts. Perlman said he was impressed, although he noted that its responses were "imperfect and at times problematic."
The important takeaway, he said, is the technology’s potential to create “an imminent reimagination of how we access and create information, obtain legal and other services, and prepare people for their careers.”
Legal Risks in Everyday Use
For nonlawyers interested in using ChatGPT, a lingering question is this: Are there legal risks in having a machine create legitimate-looking documents for you?
The answer is yes.
There are risks that AI chatbots could infringe on intellectual-property rights, create defamatory content, and breach data-protection laws.
For what it’s worth, ChatGPT has its own advice on the legal risks of using it.
We asked this question: “What are the legal risks of using ChatGPT?” In less than 30 seconds, we received a 229-word response that identified the same three issues we mentioned above: IP rights, defamation, and data privacy. It also offered advice on how to avoid the risks.
- Copyright infringement: "To avoid this risk, it's important to ensure that the text is not substantially similar to existing copyrighted works."
- Defamation: "To avoid this risk, it's important to ensure that the model is not generating defamatory content and that any content generated by the model is fact-checked before it's published or distributed."
- Data protection: "To avoid this risk, it's important to verify that the model is being trained on datasets that do not contain personal information, and also to make sure that any output generated by the model doesn't contain personal information."
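For the curious, exchanges like the one above can also be scripted rather than typed into the chat window. We used the ChatGPT website, so the following is only an illustrative sketch, assuming OpenAI's official Python client (the openai package, version 1 or later), an OPENAI_API_KEY environment variable, and a model choice we picked for the example.

# Minimal sketch: asking an OpenAI chat model the same question
# programmatically. We used the ChatGPT website for the exchange
# described above; this scripted version is an assumption for
# illustration, not how the response quoted here was obtained.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice for this example
    messages=[
        {
            "role": "user",
            "content": "What are the legal risks of using ChatGPT?",
        }
    ],
)

# Print the model's reply, e.g., its rundown of IP, defamation,
# and data-privacy risks.
print(response.choices[0].message.content)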
In short, while ChatGPT may be doing the writing, it still needs a human to edit any content it creates.
Additionally, ChatGPT noted, users need to be aware of laws and regulations that may affect the specific type of application they have in mind. "For example, there may be laws or regulations specific to your industry, such as those related to financial services or health care, that you will need to comply with when using the model."
It sounds like good advice. Then again, it’s coming from a machine.
Should we trust it?
Related Resources:
- Who Owns DALL-E Images? (FindLaw's Legally Weird)
- Is Your Firm Ready for Chatbots? (FindLaw's Technologist)
- Is Plagiarism a Form of Copyright Infringement? (FindLaw's Law and Everyday Life)