Law and Technology: Concerns About AI Usage

Daniel Bomberger
Staff Writer
Introduction
In the past few years, artificial intelligence (AI) has gradually become more integrated into the legal industry. Recent economic volatility has led to shrinking legal budgets and widespread layoffs, pressuring lawyers to find new tools to increase their efficiency [1]. Additionally, as businesses and organizations become more technologically advanced, lawyers are expected to develop expertise in these areas to keep up with client demands [2]. With this context in mind, it is easy to understand why many lawyers are turning to AI for assistance. As of July 2023, more than 38 percent of lawyers had adopted AI into their practice, with most reporting that it has positively impacted their workload [3]. Despite these benefits, the majority of lawyers remain opposed to the use of AI in the legal industry, arguing that without proper training it poses risks to them [4]. To maximize the benefits of AI adoption among lawyers, it is important to examine AI’s pitfalls and the other risks associated with it, enabling lawyers to address them.
The Use of AI in the Legal Industry
In order to understand the risks associated with AI usage in the legal field, it is important to understand in what capacity AI is being utilized. While various AI technologies have been accessible to lawyers for years, recent innovations have led businesses of all types, including those in the legal profession, to adopt new generative AI technologies [5]. Generative AI distinguishes itself from other AI by harnessing its access to extensive data sets to predict and generate original content from a given prompt, making informed decisions about structure, style, and context [6]. The primary way this new technology has been adopted into the legal industry is in the automation of routine tasks, such as sorting information found in discovery, assisting with legal research, and drafting legal documents [7]. In some civil cases, lawyers have even trusted AI to produce an analysis of the strength of their argument and use precedent to calculate an expected settlement [8]. Outside of casework, legal businesses have also begun to use this technology to create, improve, and personalize their marketing content, creating further incentives for firms to adopt it [9]. However, as with all other technology, it is important to recognize the shortcomings of generative AI and adapt the usage of this tool accordingly.
Accuracy Concerns
A common concern that accompanies the use of generative AI within legal practice is the accuracy of the content it produces [10]. The data used to train AI models often excludes information from recent years, meaning a model may be unaware of the most recent legal developments and may produce incomplete research or inaccurate legal analysis [11]. Even when AI models are given access to all relevant information, they commonly make errors in reasoning and have even fabricated information to reach their conclusions [12]. These issues indicate that AI alone cannot be relied upon to produce accurate answers to legal questions. Despite these shortcomings, the answers provided by generative AI are mostly accurate and, with oversight from a legal professional, can still automate hours of menial tasks such as legal research and article writing [13].
Privacy Concerns
When using generative AI to expedite their work, lawyers must ensure that their practices adhere to the ethical standards of the legal profession, as improper use can easily jeopardize those standards. Lawyers are particularly concerned that AI usage will undermine the strict standard of attorney-client privilege. When a lawyer enters a prompt into an AI-powered platform, such as ChatGPT, the vendor behind that platform will often record the prompt and use it to continue training the AI [14]. If that prompt contains confidential client information, that information becomes fully available to the vendor, as well as to anyone with access to its training data [15]. Furthermore, the increasingly common practice of using generative AI to power other platforms puts attorneys at risk of unwittingly granting these vendors access to confidential information through a tool that is not clearly labeled [16]. In this context, lawyers should proceed with extreme caution when using AI for legal matters so as not to threaten attorney-client privilege. This does not mean, however, that lawyers should avoid generative AI altogether. Instead, to prevent the dissemination of confidential information, lawyers may remove or change identifiable details from a case before asking an AI model to analyze it [17]. While this may seem like a perfect solution to the problem of client privacy, the process requires attorney time and energy, offsetting some of the efficiency gains of AI usage. In some cases, lawyers may also obtain the informed consent of their clients before using AI to analyze their information, avoiding the problem of confidentiality altogether [18]. Even with informed consent, lawyers should avoid disclosing client information whenever possible to protect their clients’ interests [19]. Still, informed consent is valuable because it allows the client to weigh the benefit of improved efficiency against the risk of disseminating their personal information on a case-by-case basis [20].
Copyright Concerns
Finally, it is important to address the copyright and plagiarism concerns associated with the use of generative AI. The first concern is whether the use of copyrighted materials to train these AI models is permissible under copyright law. In response to arguments by authors and lawyers that this training process infringes on copyrighted materials, AI vendors have argued that accessing the information contained in these works through an AI system is appropriate because the system sufficiently alters the meaning and purpose of the work [21]. These vendors have also claimed that their training processes do not violate copyright law because the copyrighted materials are used exclusively for training rather than being made fully available to the public [22]. Additionally, vendors argue that strict limits imposed on their AI models prevent them from creating works that would serve as market alternatives to copyrighted materials [23]. Despite these arguments, the issue of copyright infringement remains in a state of legal uncertainty. In September 2023, a district court ruled that a jury trial was necessary to determine whether the training of an AI model constitutes copyright infringement, meaning that, for now, this issue will likely be decided on a case-by-case basis depending on the specific practices of each AI vendor [24].
The second way in which copyright infringement can occur lies in generative AI’s ability to reproduce existing works. Since these AI models will undoubtedly have access to copyrighted works, the question of infringement rests on whether the works produced by AI are “substantially similar” to existing works [25]. However, the effectiveness of limits on similarity depends on the specific training of each generative AI model, meaning that this too will likely be decided on a case-by-case basis [26].
Conclusion
While generative AI currently has clear shortcomings in terms of factual accuracy, protecting client confidentiality, and respecting copyright, this does not mean that lawyers should avoid its use entirely. Instead, lawyers should adapt to these shortcomings by taking the necessary precautions in their usage of AI models. Additionally, it is important to recognize that the field of AI is constantly evolving, and the nature of these shortcomings will change with it. Ultimately, in order to keep up with new client demands, the legal industry will need to adapt to the new reality of generative AI usage. As a result, it is every lawyer’s responsibility to monitor legal developments in these areas and ensure that their practices are in line with what is currently considered legally acceptable.
References
[1] Moran, Sarah. “Is 2023 the Tipping Point for AI Adoption in Legal?” Lighthouse, April 12, 2023, https://www.lighthouseglobal.com/blog/is-2023-the-tipping-point-for-ai-adoption-in-legal.
[2] Ibid.
[3] Vitelli, Cassie. “62% of Legal Professionals are Not Using AI – and Feel the Industry is Not Ready for the Technology.” Legal Dive, July 27, 2023, https://www.legaldive.com/press-release/20230726-62-of-legal-professionals-are-not-using-ai-and-feel-the-industry-is-not/.
[4] Ibid.
[5] Chau, Tanguy. “Unlocking the 10x Lawyer: how Generative AI can Transform the Legal Landscape.” Forbes, August 16, 2023, https://www.forbes.com/sites/forbestechcouncil/2023/08/16/unlocking-the-10x-lawyer-how-generative-ai-can-transform-the-legal-landscape/?sh=3cb2b9e9401c.
[6] Glover, Ellen. “What is Generative AI?” BuiltIn, October 5, 2023, https://builtin.com/artificial-intelligence/generative-ai.
[7] Mitrofanskiy, Kosta. “Artificial Intelligence (AI) in the Law Industry: Key Trends, Examples, & Usages.” Intellisoft, August 11, 2023, https://intellisoft.io/artificial-intelligence-ai-in-the-law-industry-key-trends-examples-usages/.
[8] Ibid.
[9] Miskolczi, Andrea. “How Generative AI can Reshape Marketing and Business Development.” The Global Legal Post, October 27, 2023, https://www.globallegalpost.com/news/how-generative-ai-can-reshape-marketing-and-business-development-280617019.
[10] Edelstein, Jeffrey. “The Shortcomings of AI for Legal Research.” JDSupra, April 7, 2023, https://www.jdsupra.com/legalnews/the-shortcomings-of-ai-for-legal-7608002/.
[11] Ibid.
[12] Ibid.
[13] Ibid.
[14] Eliot, Lance. “Is Generative AI Such as ChatGPT Going to Undermine the Famed Attorney-Client Privilege, Frets AI Law and Ethics.” Forbes, March 30, 2023, https://www.forbes.com/sites/lanceeliot/2023/03/30/is-generative-ai-such-as-chatgpt-going-to-undermine-the-famed-attorney-client-privilege-frets-ai-law-and-ai-ethics/?sh=ab2b4031ea73.
[15] Ibid.
[16] Ibid.
[17] Ibid.
[18] Linna, Daniel, and Muchman, Wendy. “Ethical Obligations to Protect Client Data when Building Artificial Intelligence Tools: Wigmore Meets AI.” American Bar Association, October 2, 2020, https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/ethical-obligations-protect-client-data-when-building-artificial-intelligence-tools-wigmore-meets-ai/.
[19] Ibid.
[20] Ibid.
[21] Zirpoli, Christopher. “Generative Artificial Intelligence and Copyright Law.” Congressional Research Service, September 29, 2023, https://crsreports.congress.gov/product/pdf/LSB/LSB10922#:~:text=Generative%20AI%20also%20raises%20questions,that%20resemble%20those%20existing%20works.
[22] Ibid.
[23] Ibid.
[24] Ibid.
[25] Ibid.
[26] Ibid.