From law to journalism, marketing to content creation, artificial intelligence tools, particularly language models like ChatGPT, are becoming integral components of our professional toolkits. You may have even tried these tools yourself, drawn in by their remarkable efficiency and their ability to process and make sense of vast quantities of data.
And why wouldn’t you? The potential benefits of AI are immense. These tools can provide insights that were unimaginable a few years ago, and they promise to revolutionize how we work.
However, it is important to be aware of the potential pitfalls. Just as a powerful car can be a danger if driven recklessly, these sophisticated AI tools can lead to serious problems if their outputs are used uncritically.
Consider the cautionary tale of New York lawyers Steven Schwartz and Peter LoDuca. They found themselves in hot water due to their uncritical use of ChatGPT in their legal practice.
The outcome of their case provides a stark reminder that while AI tools can be helpful, they must be used with due diligence and critical thinking.
The Chatbot Incident: A Detailed Look
In preparing a legal brief for a client’s personal injury case against Colombian airline Avianca, Steven Schwartz used ChatGPT to assist with his research. However, he failed to verify the information the AI provided, and as a result the brief included six fictitious case citations.
The false citations initially went unnoticed and made it into the brief that LoDuca submitted to the court; Schwartz later admitted that he had unknowingly relied on the fabricated information.
The issue came to light when lawyers for Avianca alerted the court that they could not locate some of the cases cited in the brief. Upon further investigation, it was revealed that the cited cases were fake, generated by ChatGPT. Schwartz and LoDuca continued to stand by the fake opinions, even after the court questioned their existence.
The court imposed sanctions on both lawyers. They were ordered to pay a fine of $5,000, with the judge stating that they had acted in bad faith and made misleading statements to the court. The order also required them to notify each judge who had been falsely identified as the author of one of the fake opinions.
In the sanctions order, the judge clarified that there was nothing “inherently improper” about lawyers using AI for assistance, but emphasized that ethics rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.
Beyond the financial penalty, the incident had lasting consequences: it tarnished the lawyers’ professional reputations and that of their firm, with the judge pointing to their ‘acts of conscious avoidance and false and misleading statements to the court’.
The case of Schwartz and LoDuca offers valuable lessons for content writers and researchers, particularly those who are looking to leverage the power of AI in their work. Here are some key takeaways:
- Validate AI-Generated Content: ChatGPT can be a powerful tool in generating content or assisting with research. However, it’s crucial to remember that AI tools are not infallible. They can make mistakes, misinterpret data, or even generate false information. Therefore, always validate the content or data generated by AI with reliable sources before using it.
- AI is a Tool, Not a Replacement: AI should be used to supplement your work, not to replace your skills, judgment, and expertise. Over-reliance on AI can lead to critical errors, as this case clearly demonstrated.
- Understand How AI Works: To use AI tools effectively and safely, it’s crucial to have at least a basic understanding of how they work. AI chatbots like ChatGPT are trained on large amounts of data and generate responses based on patterns they’ve learned, but they don’t understand the content in the way humans do. They can’t verify facts or distinguish between reliable and unreliable sources. Knowing this can help you use such tools more judiciously.
- Continuous Learning and Adaptation: As AI and other technologies evolve, it’s important to continually learn and adapt. The field of AI is rapidly changing, with new tools, capabilities, and potential issues emerging all the time. Keeping up-to-date with these developments can help you use AI effectively and safely.
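The first takeaway above, never letting an AI-generated citation pass unchecked into a deliverable, can be sketched as a simple workflow. The Python below is only an illustration: the `trusted_index` and all case names are hypothetical placeholders standing in for a lookup against a real, authoritative source such as a legal database or library catalog.

```python
def verify_citations(ai_citations, trusted_index):
    """Split AI-generated citations into verified and unverified lists."""
    verified, unverified = [], []
    for citation in ai_citations:
        # Normalize whitespace and case so trivial formatting
        # differences don't cause false mismatches.
        key = " ".join(citation.lower().split())
        if key in trusted_index:
            verified.append(citation)
        else:
            unverified.append(citation)
    return verified, unverified

# Hypothetical index of citations you have independently confirmed exist.
# In practice, replace this with a query to an authoritative source.
trusted_index = {"doe v. acme airlines, 123 f.3d 456"}

ai_output = [
    "Doe v. Acme Airlines, 123 F.3d 456",  # confirmed in our index
    "Roe v. Skyway Co., 789 F.2d 101",     # unverifiable -- flag it
]
ok, flagged = verify_citations(ai_output, trusted_index)
print(flagged)  # anything here needs human review before it is used
```

The point is not the code itself but the discipline it encodes: every AI-supplied fact is treated as unverified until it matches a source you trust, and anything that fails the check is routed to a human rather than silently included.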
While AI can be a powerful aid in content creation and research, it should be used responsibly and ethically. Always double-check AI-generated content, use it to supplement your work rather than replace it, uphold your ethical standards, and consider the potential impact on your reputation.