Make AI your Copilot, not your Autopilot!
The words “legal” and “innovation” do not appear next to one another in the dictionary, but innovations like artificial intelligence, when wisely implemented, can enhance our day-to-day work. Yes, even in the legal sector!
But in saying this, our reliance upon artificial intelligence should never become an “autopilot” feature: it remains our responsibility to verify not only the inputs to artificial intelligence, but also to validate the results that it generates.
If we fail to do so, artificial intelligence might bring about more legal innovation than anticipated, such as generating fictional court cases based upon vague prompts!
It is my view that there is nothing “inherently improper” about using artificial intelligence to provide a service, but professionals have a duty to ensure that their services remain accurate and reflect a high standard. Legal and other professionals also need to be transparent with customers about their use of artificial intelligence in rendering services.
In part one of my two-part blog, I will identify some of the main challenges of artificial intelligence.
Part two of my blog will illustrate the importance of asking the right questions, and of implementing and continually updating company policies, so that we can effectively harness emerging innovations like artificial intelligence and manage their impact on our society.
IDENTIFYING THE CHALLENGES OF ARTIFICIAL INTELLIGENCE
No matter the type of innovation, or the industry that it affects, the legal challenges of addressing it remain remarkably universal.
I have found that business challenges and technological challenges are the two main categories that need to be addressed with innovations like artificial intelligence.
1. Business Challenges
Let us have a look at business challenges first:
Most industries are not known for adapting swiftly to change. This is simply because industries tend to be risk-averse.
In my line of work as in-house legal counsel for an IT consultancy, the speed at which technology develops is a constant challenge and balancing act! Constantly learning, relearning, upskilling and cross-skilling is essential for professionals in this industry, and perhaps part of its charm in the first place.
The reality is that if industries do not adapt their regulations to address innovations like artificial intelligence, the gap between the technology and the company policies drafted to manage it will only continue to grow wider. This phenomenon is known as the “Pacing Problem.”
In addition to this Pacing Problem, technologies like artificial intelligence often cross traditional industry boundaries, and this needs to be reflected in agreements and policies.
The decentralized nature of artificial intelligence also presents its own set of challenges. Whilst anonymity has its benefits, it can also serve parties who use the features of artificial intelligence for less-than-savory purposes, such as writing sophisticated phishing emails and then disappearing without a virtual trace, so to speak.
Therefore, from a compliance point of view, it is vital for companies to make sure that there is awareness and training on how to spot these “phishy” emails.
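To give a flavour of what such awareness training covers, here is a minimal, purely illustrative Python sketch of the kind of red flags employees are taught to look for; the indicators, keywords, and thresholds below are hypothetical examples, not a real detection tool:

```python
# Illustrative only: a few classic phishing red flags, encoded as simple checks.
# Real awareness training (and real email filters) cover far more signals.

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now"}

def red_flags(sender_domain: str, claimed_company: str, body: str) -> list[str]:
    """Return human-readable warnings for a potentially 'phishy' email."""
    flags = []
    lowered = body.lower()
    # 1. The sender's domain does not match the company the email claims to be from.
    if claimed_company.lower() not in sender_domain.lower():
        flags.append(f"Sender domain '{sender_domain}' does not match '{claimed_company}'")
    # 2. Pressure language designed to rush the reader into acting.
    if any(phrase in lowered for phrase in URGENCY_WORDS):
        flags.append("Urgent or threatening language")
    # 3. Requests for credentials or payment details.
    if "password" in lowered or "bank details" in lowered:
        flags.append("Asks for credentials or payment information")
    return flags

print(red_flags("secure-login.example.net", "MyBank",
                "URGENT: verify now or your account will be suspended. Enter your password."))
```

Whether these checks run in an employee's head or in a filter, the underlying lesson is the same: mismatched senders, manufactured urgency, and requests for credentials should always raise suspicion.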
2. Technological Challenges
Let us have a look at technological challenges next:
- Data, digital privacy, and security are just some of the main challenges that we need to address from a legal and compliance point of view.
- Because there is no unified global framework for data regulation, regulators take differing approaches.
- In Europe and the United Kingdom, the General Data Protection Regulation (GDPR) forms the basis for data regulation and imposes strict requirements regarding the personal data of individuals.
- In South Africa, we have the Protection of Personal Information Act (POPIA), which is modelled on the GDPR.
- In contrast, the United States has sector-specific rules and state laws.
As always with new innovations, the need for strengthened cybersecurity is a prevailing challenge.
Let us look at an example: software that controls physical systems, such as a self-driving vehicle, is an exciting development, but the software itself is vulnerable to cyberattacks. It will be important for developers and manufacturers to consider what safety measures to implement so that hackers cannot use the software for malicious ends, such as causing a crash.
Artificial intelligence makes use of algorithms to make a wide variety of strategic decisions, so it is important to understand those algorithms. The problem is that these algorithms are often either closely guarded as trade secrets or, even when shared, too complex for most people to understand. This is known as the “Black Box” problem.
In response to this Black Box problem, regulators have suggested making these algorithms available to the public. Many of these algorithms, however, are not yet publicly available because they are considered trade secrets and are therefore protected by confidentiality agreements.
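To make the Black Box problem more concrete, here is a minimal Python sketch; every function name, feature, and weight below is invented purely for illustration. It contrasts an opaque score with a transparent one:

```python
# Illustrative only: two ways an algorithmic score might reach a decision-maker.

def black_box_score(applicant: dict) -> float:
    """An opaque model: the caller sees a number, never the reasoning.
    In practice the weights may be a trade secret, or simply too complex
    to explain -- which is exactly the Black Box problem."""
    weights = {"income": 0.4, "age": -0.1, "postcode_risk": -0.5}  # hidden from users
    return sum(weights[k] * applicant[k] for k in weights)

def transparent_score(applicant: dict) -> tuple[float, list[str]]:
    """An interpretable alternative: the same kind of decision, plus its reasons."""
    score, reasons = 0.0, []
    if applicant["income"] > 50_000:
        score += 1.0
        reasons.append("income above 50,000: +1.0")
    if applicant["postcode_risk"] > 0.7:
        score -= 1.0
        reasons.append("high-risk postcode: -1.0")
    return score, reasons

applicant = {"income": 60_000, "age": 35, "postcode_risk": 0.9}
print(black_box_score(applicant))    # a bare number: nothing to question
print(transparent_score(applicant))  # a number *and* an explanation
```

The arithmetic is beside the point; the difference that matters is what the affected person can see, understand, and challenge.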
Next, we look at algorithmic bias: whilst the assumption is that algorithms make unbiased decisions, the fact is that algorithms can carry inherent biases. The problem is that, based upon scores calculated by these algorithms, people either get or do not get what they need, and they are unable to question the outcome, either because they lack the insight to understand it or simply because they do not know that such an algorithm even exists!
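As a purely hypothetical illustration of how such bias creeps in, consider a scoring rule that never looks at a protected attribute directly but relies on a correlated proxy such as a postcode; every name and number below is invented:

```python
# Illustrative only: a "neutral" rule that uses postcode as a feature can
# reproduce historical bias, because postcode often correlates with group.

# Hypothetical historical approval rates by postcode (the "training data").
HISTORICAL_APPROVAL = {"AREA_A": 0.80, "AREA_B": 0.30}

def loan_score(income: float, postcode: str) -> float:
    """Scores an applicant without ever seeing a protected attribute,
    yet the postcode term quietly imports the historical skew."""
    return 0.5 * (income / 100_000) + 0.5 * HISTORICAL_APPROVAL[postcode]

# Two applicants with identical incomes get different scores purely because
# of where they live -- and neither of them can see why.
print(loan_score(40_000, "AREA_A"))  # 0.60
print(loan_score(40_000, "AREA_B"))  # 0.35
```

The applicant in AREA_B is never told that a postcode-based score exists, let alone how it was calculated, which is precisely why bias of this kind is so hard to question.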
In part two of my blog, we will look at ways to address these challenges.