On Computing

To AI or Not to AI?

Joel Howell, Newsletter Editorial Board

Artificial intelligence at an everyday level is here, and this column will give you some background and a start on using it. The dam broke at the end of last year when ChatGPT was released. ChatGPT is a natural language processing tool driven by AI technology that lets you hold a conversation with a chatbot. It can answer questions and assist with tasks such as composing emails, essays, and code. It is currently free to use while it remains in its research and feedback-collection phase. You can access it by going to chat.openai.com and creating an OpenAI account. With thanks to the usual internet sources, UpTime Legal, and the NY Times, here’s an overview of this rapidly advancing technology.

To understand what’s going on, start with the terminology and a few basic AI principles. Machine learning is a branch of artificial intelligence that enables computers to learn and make decisions without being explicitly programmed. A machine learning system is fed enormous amounts of data, such as images or text, and learns to recognize patterns and make predictions based on that data. The more data it processes, the better it becomes at making accurate predictions or decisions. Familiar current examples are email spam filters and voice assistants.

Generative AI refers to tools that, as the name implies, generate content. The most notable of these are text-generation tools such as ChatGPT and graphic/artistic generative tools like Midjourney. Both create high-quality content (text and creative works, respectively) at impressive speed and low cost.

Large Language Models (LLMs) like GPT-3 and GPT-4 are advanced artificial intelligence systems designed to understand and generate text. These models are trained on ENORMOUS amounts of text data from diverse sources, such as books, articles, websites, and more, to learn the intricacies of language, grammar, and various subject matters. Imagine the model as a highly skilled librarian who has read countless books and articles across numerous topics, including the law. This librarian can help answer questions, provide insights, and even draft documents by drawing on that extensive knowledge. These models are built using machine learning techniques that mimic the way human brains process information. By analyzing enormous amounts of text data, the model learns patterns and associations, enabling it to generate coherent, relevant responses. For lawyers and legal professionals, large language models will become increasingly valuable tools. They can assist with tasks like legal research, document drafting, and contract review by providing suggestions, summarizing information, or identifying relevant precedents. However, it’s important to remember that these models are not infallible and, at least for now, should be used as a supplementary resource rather than a definitive authority. In other words, LLMs definitely get the facts wrong sometimes.

AI has actually been in use for a while now. Artificial Narrow Intelligence, or ANI, is exceptionally good at exactly one thing, such as detecting credit card fraud or deciding what video to show on your YouTube feed.

Think of Artificial General Intelligence, or AGI, as what you might envision in science fiction: superintelligence that knows everything about everything (or at least a great deal across a wide variety of subjects and domains). AGI has not yet been achieved, but some think we’re close. Generative AI tools like GPT-4, by some accounts, live somewhere between ANI and AGI.

Given the foregoing background, consider a few examples of how to use ChatGPT:

Answering broad questions: Providing information on any topic you choose.

Drafting and editing content: Creating blog posts, articles, social media updates, and the like.

Summarizing information: Condensing lengthy texts, articles, or documents.

Brainstorming ideas: Generating creative suggestions or concepts for projects, marketing campaigns, or problem solving.

Assisting with learning: Offering explanations or insights on various subjects to help you understand new concepts.

Scheduling and reminders: Managing schedules, calendar events, or tasks.

Recommending resources: Suggesting books, articles, or other materials related to research needs.

Simulating characters: Creating fictional characters or dialogues for storytelling, screenwriting, or role-playing purposes.

The legal tech industry is experiencing rapid innovation as companies build and integrate AI and LLM technologies into their products. This development is at an early stage, but it will undoubtedly change the way lawyers work, offering tools that can streamline processes, improve efficiency, and help deliver better outcomes for clients.

One example of this trend is Ironclad, which recently released its AI redlining tool, AI Assist. This new application, powered by the GPT-4 model, is the first contract redlining tool to use generative AI. It enables review, comparison, and revision of contract drafts to identify discrepancies, suggest edits, and help ensure compliance with relevant laws and regulations.

Examples like this demonstrate the transformative potential of artificial intelligence in the legal sector. By automating time-consuming and repetitive tasks, these tools allow legal professionals to focus on higher-level tasks that require their unique expertise and judgment. As AI technology continues to advance and mature, we’ll certainly see even more powerful applications emerge in the legal tech space, helping to reshape the industry and the way legal services are delivered.

However, caveat emptor:

The lawyer for a man suing an airline in a routine personal injury suit used ChatGPT to prepare a filing, but the artificial intelligence bot delivered fake cases that the attorney then presented to the court. The judge is now weighing sanctions as the legal community grapples with one of the first instances of AI “hallucinations” making it to court.

Earlier this year, Colombia-based Avianca Airlines sought to dismiss a federal court case in which a man, Roberto Mata, alleged he was “struck by a metal serving cart” onboard a 2019 flight and suffered personal injuries.

When filing a response, Mata’s lawyers cited at least six other cases to show precedent, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air, but the court found that the cases didn’t exist and described them as “bogus judicial decisions with bogus quotes and bogus internal citations,” leading the federal judge to consider sanctions.


Questions or comments? Drop me an email: jwh3@mindspring.com