Alarming predictions of artificial intelligence (AI) replacing lawyers have become one of the hottest topics in legal circles. How will AI, machine learning, and big data affect the legal system as technology improves?
Algorithms pervade our lives today, from music recommendations to credit scores and now to bail and sentencing decisions. Courts in the United States use algorithms to determine an accused person’s “risk”, which ranges from the probability that the individual will commit another crime to the likelihood that he or she will appear for a court date. These algorithmic outputs inform decisions about bail, sentencing, and parole. Like any other tool, AI here aspires to improve on the accuracy of human decision-making and allow for a better allocation of finite resources.
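To make the idea concrete, the sketch below shows, in Python, how such a tool might reduce a defendant’s record to a numeric “risk” score that then informs a recommendation. It is a minimal illustration only: the factors, weights and thresholds are invented for this example and do not describe any real instrument used by courts.

```python
# Illustrative only: a toy "risk score" in the spirit of the tools described
# above. All factor names, weights, and thresholds are invented; real
# pretrial risk-assessment instruments are far more complex and are
# validated (and criticised) on real data.

def toy_risk_score(prior_convictions: int, age: int, failed_to_appear_before: bool) -> float:
    """Return a score between 0 and 1, where higher means 'higher risk'."""
    score = 0.0
    score += min(prior_convictions, 5) * 0.12        # more priors -> higher score
    score += 0.15 if age < 25 else 0.0               # youth weighted as a risk factor
    score += 0.25 if failed_to_appear_before else 0.0
    return min(score, 1.0)

def recommend_bail(score: float) -> str:
    """Map the numeric score onto a coarse recommendation a judge might see."""
    if score < 0.3:
        return "release on recognizance"
    if score < 0.6:
        return "release with conditions"
    return "detain pending hearing"

if __name__ == "__main__":
    s = toy_risk_score(prior_convictions=2, age=23, failed_to_appear_before=False)
    print(f"risk score: {s:.2f} -> {recommend_bail(s)}")
```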
Artificial intelligence can mimic certain operations of the human mind; it is the term used when machines are able to complete tasks that typically require human intelligence. Machine learning refers to computers using rules (algorithms) to analyze data, learn patterns, and extract insights from that data. Artificial intelligence is a major force reshaping the way work of every kind is done, and legal work is no exception.
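As a minimal sketch of what “learning patterns from data” can mean in a legal setting, the example below trains a simple text classifier to separate contract-like language from unrelated text. The snippets, labels and model choice (scikit-learn’s CountVectorizer and LogisticRegression) are assumptions made purely for illustration, not a description of any production legal-AI system.

```python
# Illustrative only: "learning patterns from data" in roughly the sense
# described above. A handful of invented snippets are labelled as
# contract-related or not, and a simple model learns which word patterns
# separate the two. Real legal-AI systems train on far larger corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "This agreement is entered into by and between the parties",
    "The licensee shall indemnify the licensor against all claims",
    "Weather forecast predicts heavy rain over the weekend",
    "The team won the championship after a dramatic final",
]
labels = ["contract", "contract", "other", "other"]

# Turn words into counts, then fit a classifier on those counts.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The parties agree that the licensor may terminate"]))
# Expected on this toy data: ['contract']
```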
The most common AI myth is that it will take the place of human beings. In reality, AI will augment the roles that lawyers play within their organizations; it will support, rather than supplant, their jobs. But we must remember that AI is advancing faster than most of us realize or fully comprehend. Those lawyers, firms and professionals who assess the situation and plan for hiring and training people with the right skills will be much better prepared for the AI age.
According to Deloitte, 100,000 legal roles will be automated by 2036. Deloitte also reports that by 2020 law firms will face a “tipping point” for a new talent strategy. Now is the time for all law firms to commit to becoming AI-ready by embracing a growth mindset, setting aside the fear of failure and beginning to develop internal AI practices.
A common but misplaced notion among legal industry executives, lawyers and law firms is that artificial intelligence or machine learning is a threat to their existence, or, put simply, that AI is going to replace lawyers. The evidence from other industries, such as e-commerce, healthcare and accounting, is that AI will instead enable judges, lawyers and law firms to do more with less and become far more productive than their predecessors.
As AI is rapidly applied to all major sectors, including medicine, finance, national defense, transportation, manufacturing, the media, entertainment and social relationships, it will create many new legal issues for lawyers. New subject matters will emerge, such as liability for autonomous cars, the legality of lethal autonomous weapons, financial bots that may run afoul of antitrust laws, and the safety of medical robots. But in addition to changing the subject matter that lawyers work on, AI will also transform the way lawyers practice their craft. In short, AI isn’t the future of law, but AI-assisted lawyers are.
As AI technology continues to develop, practitioners must ensure that AI-enabled systems are governable: open, transparent, and understandable. In essence, AI must work effectively with people so that its operation remains consistent with human values and aspirations. Researchers and practitioners have paid increasing attention to these challenges and should continue to focus on them. Developing and studying machine intelligence can help us better understand and appreciate our own human intelligence. Used thoughtfully, AI can augment our intelligence and help us chart a better and wiser path.
The European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the first European text setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems. The Charter provides a framework of principles that can guide policy makers, legislators and justice professionals when they grapple with the rapid development of AI in national judicial processes.
The CEPEJ’s view, as set out in the Charter, is that the application of AI in the field of justice can help improve the efficiency and quality of justice, but it must be implemented in a responsible manner that complies with the fundamental rights guaranteed in particular by the European Convention on Human Rights (ECHR) and the Council of Europe Convention on the Protection of Personal Data. For the CEPEJ, it is essential to ensure that AI remains a tool in the service of the general interest and that its use respects individual rights.
The CEPEJ has identified the following core principles to be respected in the field of AI and justice:
- Principle of respect for fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights;
- Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals;
- Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment;
- Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits;
- Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
For the CEPEJ, compliance with these principles must be ensured in the processing of judicial decisions and data by algorithms and in the use made of them.
The CEPEJ Charter is accompanied by an in-depth study on the use of AI in judicial systems, notably AI applications processing judicial decisions and data.