
[ANALYSIS] Will ChatGPT (finally) ‘kill’ all the lawyers?

John Molo


It would be a mistake to underestimate the impact of ChatGPT and its progeny  

I wasn’t good at math. God knows, I wouldn’t have gotten past high school trigonometry if not for the help of our resident math genius. “What’s the point in learning formulas? May calculator naman (there’s a calculator anyway),” I’d wonder aloud in class. To which our math teacher would reply, “Garbage in, garbage out.” That shut me up.

It’s a lesson triggered by the recent chatter on artificial intelligence (AI), particularly about ChatGPT. In the greater scheme of things, this “chat bot” is just one of several AI tools available. We already use AI every day without realizing it. I saw a lot of friends fiddling with a painting trend on Facebook that transforms their profile pics into “art.” That’s actually coursed through AI. Grammar tools, navigation apps, and even e-payments have been incorporating AI for some time now. 

As for ChatGPT, it’s a natural language processing (NLP) tool. Its main job is to take information and present it in a format that mimics humans. It is not a lawyer AI, a doctor AI, or an engineer AI. The excitement comes not because it excels at what it was primarily meant to do (mimic human “speech”) but because its answers tend to be correct. It also readily admits when it makes mistakes. (A scary thought for lawyers like me.)

ChatGPT was not designed to provide legal services. And yet, from hypothetical scenarios about the West Philippine Sea to Bar Exam questions, even experts acknowledge that its answers are acceptable.  Acceptable, but to be frank, elementary. It being an NLP tool, ChatGPT cannot be deemed a threat to lawyers. Not yet.

But consider this. To become a lawyer, it takes four years of undergraduate study, four years of law school, and five months of prep for the Bar Exam. This is just to establish baseline competence. ChatGPT was “born” only three months ago in the United States. And yet, it demonstrates baseline competence not just in US law but in Philippine law. Certainly not enough to threaten specialists. But compare that rate of growth to how long it takes a human to reach the same level.

Unfortunately, this isn’t an essay proclaiming the “death of lawyering” (Sorry, Shakespeare). ChatGPT wasn’t designed to replace doctors or lawyers. But it proves that with the tech available today, it is becoming feasible to design specialist AIs that can. That said, my math teacher’s point remains true. While no human can match the speed of a calculator, you still need to know what prompts to use to maximize its potential.

No, ChatGPT won’t make lawyers immediately obsolete. No, law school professors won’t be replaced by chat bots anytime soon. And I don’t foresee judges allowing the entry of lawyers using ChatGPT headsets in the courtroom. But it would be a mistake to underestimate the impact of ChatGPT and its progeny.  

For one, ChatGPT will hasten the commoditization of certain fields of practice, including segments of legal education. In professional services such as law, every field lies on a spectrum. On one end are the “rocket science” practice areas; on the other, the commoditized fields. Commoditized means the field has, over time, become largely repetitive, form-driven work. AI’s impact will be felt earlier in certain fields, but all practices, even litigation (discovery, evidence processing), will eventually be affected.

Genies freed from bottles

Do we shun AI because it might impact our practice areas (or fees)? The decision might not be up to us. Besides, it might not be feasible.

At a tech roundtable with policymakers that I attended at AIM this week, one speaker mused on how the internet was once viewed as “evil” when he was in high school. So much so that his teachers banned any research that relied on “internet sources.” Today, it’s difficult to imagine a world that doesn’t rely on the internet for research. Even law firms are slowly phasing out “law libraries” in their office spaces. That’s how much we’ve come to incorporate the tool called “internet” in our lives.

I would look at AI tools like ChatGPT as genies freed from their bottles. “Andiyan na yan e” (it’s already here, anyway). We can’t prevent our young lawyers or law students from using it, no matter what countermeasures we deploy. The late constitutionalist Father Bernas once described the folly of trying to put toothpaste back in the tube. Denying AI use could prove messier.

I would say, however, that now that it’s here, we need to quickly learn how AI can be responsibly integrated into our practice areas and our classrooms. This requires wisdom, and a lot of trial and error. Upskilling is essential. We’ll need to train lawyers who understand and can work with AI, and who know where to best deploy it. Because what’s not commonly known is that AI does have biases. ChatGPT, for instance, has been observed to favor left-leaning and progressive answers.

The ethical dimension of lawyering also comes into play. And by ethics, I don’t mean the topic of lawyer jokes. Rather, it’s the expectation that rules will be applied by a fellow human being who can grasp the intricacies (and follies) of human experience. Science will always shoot for the impossible. But just because you can clone humans doesn’t mean you should. And just because AI can now answer family law questions doesn’t mean we should entrust it with child custody decisions.

For harm or good?

Trust the exponential! So say the technologists (or lately, the crypto bros). The exponential growth of AI is undeniable. But will it be exponential good or exponential harm? The astounding reach of AI makes even the “god” of the crypto bros, Elon Musk, pause. He warns that though it “has great, great promise, great capability,” AI poses “one of the biggest risks to the future of civilization.”

Sam Altman, CEO of OpenAI, counters that AI is the “greatest force” for “economic empowerment,” and “a lot of people getting rich.” His pitch is eerily reminiscent of the early days of social media and the promises of its prophets: shared experiences, a connected world. These came true. But social media also led to rising distrust, societal polarization, and the weakening of institutions (ask scientists and doctors). In Myanmar, it enabled genocide. Within our families, it created strife (e.g., COVID-19 disinformation). Yes, it made some individuals incredibly wealthy, but it also caused suffering for entire populations.

AI’s potential completely eclipses whatever has gone before. That’s precisely why it’s important to put up guardrails at the start, just when the public is beginning to interact with front-facing interfaces. We failed to do that with social media. Look where it took us: troll farms, revisionist history, and the monetization of our privacy.

None of this means I would shun ChatGPT or its successors. I doubt I even could. I sometimes use it to “translate” the occasional ponderous article I need to review as chair of the IBP Journal. That it helps me understand an undecipherable paragraph doesn’t mean I won’t read the original work. Editorial judgment draws from a raft of human experience. ChatGPT is a wondrous tool, but deciding what is publishable requires my personal, flawed judgment as a human. I think my math teacher would agree. – Rappler.com

John Molo is a partner in Mosveldtt Law and a Board member of the Philippine Bar Association. He chairs the political law cluster of the UP College of Law and has argued before the Philippine Supreme Court and international tribunals. He is the coordinator (accountability layer) for #FactsFirstPh and speaks on disinformation across the region.
