Law schools are weighing how to integrate rapidly improving artificial intelligence into the classroom as the legal industry grapples with the technology.
“It’s a fool’s game to ban it entirely,” Polk Wagner, a professor at the University of Pennsylvania Carey Law School, said of artificial intelligence and tools like OpenAI’s ChatGPT. “Students need to become fluent in what AI can and cannot do” in order to toe the emerging ethical line, he said.
Some major law firms have started experimenting with ChatGPT and other generative software, which uses AI to produce text, images, or designs, to write legal briefs and aid research. That has law professors considering how to integrate AI into classrooms while keeping students academically honest.
Getting a handle on the technology isn’t easy. AI continues to develop at a rapid pace, raising concerns about the accuracy of the information it offers and ethical questions for lawyers.
A pair of New York lawyers were fined $5,000 in June after using AI to draft an error-filled brief that cited previous cases that didn’t exist.
Some federal judges are beginning to draw lines around how the tech can be used in court, warning against over-reliance on AI and urging attorneys to vet anything generated by ChatGPT and similar tools. Some also require lawyers using AI to file disclosures alerting the courts.
Lawyers have used AI for e-discovery; for drafting, reviewing, and revising contracts and briefs; for legal research; and for training the next generation of lawyers, said Dan Linna, director of law and technology initiatives at Northwestern University’s Pritzker School of Law and McCormick School of Engineering.
An Evolving Industry
Law schools are training aspiring attorneys as the legal industry has yet to settle on clear standards for using new tech tools.
Linna said he has been incorporating AI into his classes for more than a decade.
“I require my students to sign up and experiment with AI and ChatGPT,” Linna said. “We must teach and encourage students to use these tools ethically and well.”
Patrick Barry teaches “digital lawyering” at the University of Michigan. He said teaching students how to use AI properly will also help them polish their legal writing.
“With AI at students’ fingertips 24/7, it will provide a bit of competition or a nudge for them to elevate their own work,” said Barry.
The emergence of ChatGPT has prompted concerns about cheating on college campuses across the country.
Barry said he is no more concerned about students using AI to cheat than he is about other forms of academic dishonesty. He said he makes exam questions as specific as possible to combat various forms of cheating.
“The key is to not create generic questions that will be vulnerable to ChatGPT,” he said. “I try to come up with specific questions relating to the class or students.”
Some schools have established policies on the use of AI, including allowing students to turn to the tech to perform certain research or seek feedback on a first draft.
Aram Gavoor, associate dean for academic affairs and lecturer at George Washington University Law School, said the university doesn’t have a firm policy in place. Instead, it has recommended guidelines that leave room for innovation.
The guidelines to date give individual professors discretion to fully permit, partially permit, or ban technology like ChatGPT, as long as academic integrity rules are not violated, Gavoor said.
He’s not overly worried about students using ChatGPT to cheat on course work, he said.
“Law school students recognize the incentive to do their own work,” Gavoor said.
He said that one pitfall of using AI on course work is the possibility that ChatGPT’s output is incorrect.
“At the end of the day, it is still the student’s responsibility to captain their own ship and research concepts based on what ChatGPT says,” Gavoor said.