AI Smarts: The Legal Sector: Getting Your People and Content Ready for AI
Ed. note: This article was first published in the Winter 2023 issue of ILTA's Peer to Peer magazine. For more information, visit the ILTA ATL channel.
2023: what a year it has been. Artificial Intelligence (AI) has shaken up industries all over the world, and the legal sector is no exception. Whether you are an AI skeptic, an early adopter, or a late one, it is hard to avoid the swirling questions, concerns, and expectations.
A shift has occurred: technology now lets us imagine, and do, things we simply could not before.
We will undoubtedly see increased use of AI, both as standalone tools and embedded within the tools we use every day. Sooner or later, that will shine a light on how ready our enterprise content is for AI, and it is driving many organizations to want to be AI-ready now. For some, the risk of doing nothing and falling behind is too great to ignore; for others, the chance to get ahead of the game is too appealing to dismiss with conservative skepticism.
In parallel, users need guidance, answers, and training, whether they jump in with both feet or just dip a toe in the water.
Start by focusing on your use cases
Organizations face many choices. Wait or go ahead? Build in-house or buy from a third party? Give access to everyone or only to specialists? Expose AI to all content or only to certain parts of it?
To find the best approach and increase their chances of success, companies should first define what they want to achieve with AI and what impact it should have on their business. After all, AI is a technology, and all technology should serve a specific purpose.
Generative AI is not suitable for all tasks and use cases. In most cases, firms won’t let it near anything that involves client-facing advisory services.
It is best to identify core use cases that can deliver value by improving productivity, supporting growth, or enabling new business models. Prioritize the use cases that meet as many of these criteria as possible:
- The datasets that are relevant to the use case are limited and clearly defined.
- The data is not sensitive and does not contain PII.
- The use case is not complex and does not involve multiple groups of people.
- The use case is conceptually simple and can be modelled manually.
- If implemented successfully, the use case will have high visibility and impact.
People-Centric Actions
The success of any AI project will depend on people. The rapid adoption of Generative AI has led to many myths, misunderstandings, and false expectations, and to a new generation of AI experts who perpetuate them.
Onboarding AI — particularly Generative AI — as a technical capability within a firm therefore needs a bit more thinking than your average technology roll-out. In some cases, it may be necessary to reset the thinking of some users before moving on.
It is also vital to address the user community's desire to use AI tools sooner rather than later, to avoid the proliferation of shadow AI.
We recommend that you start with an internal communication initiative and an education program for all employees, accompanied by a safe and secure AI interface controlled by the organization, so users can experiment at zero risk.
Allow users to experiment and educate them about:
- Basic prompts for understanding the limitations of a GPT LLM.
- Prompts that are known to give incorrect answers.
- Similar or repeated prompts that help users understand that LLM answers are "non-deterministic" (a short sketch demonstrating this follows this list).
- Prompts where the LLM leverages an authoritative source of your own data as part of generating a response.
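As an illustration of the "non-deterministic" point above, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt are placeholders. Running the same prompt twice at a non-zero temperature will typically produce differently worded answers, which is precisely the behaviour users need to experience for themselves.

```python
# Minimal sketch: show that the same prompt can yield different answers.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()
PROMPT = "In two sentences, what should a law firm consider before adopting generative AI?"

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.8,              # non-zero temperature -> varied wording
    )
    print(f"--- run {run + 1} ---")
    print(response.choices[0].message.content)
```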
As part of this education, provide clear communication and guidance to help people understand Generative AI tools such as ChatGPT, why they can sometimes answer incorrectly (and with great confidence), and how users can avoid risks caused by, for example:
- Lack of transparency
- Model accuracy and bias
- Intellectual property (IP) issues
- Sustainability concerns
The firm will remain responsible for any work generated with the help of Generative AI. That is just one of many reasons organizations will need to implement policies and controls to detect what data is used in prompts and to catch biased or inaccurate results.
It is important to get the basics right before implementing AI. AI can meet with, or create, profound cultural resistance, sometimes for good reason. It is not surprising that misinformation about AI can cause anxiety, nervousness, and fear.
To ensure a smooth adoption, effective change management and communication are required to overcome user resistance and fear when implementing any AI-based use case for end users.
Depending on the use cases chosen as high priorities, the change programs can — and should — be highly adaptable, and you can learn from each of these before moving on to the next one.
But when it comes down to real business, and for AI to be effective, knowledge management teams, together with subject matter experts such as PSLs, are the natural home for honing AI-related skills and for ensuring the truth and validity of the firm's data is both protected and leveraged in what we call the AI Sweet Spot (more on this below). The KM function, in fact, has never been more relevant or important than it is today.
We recommend creating an interdisciplinary, cross-practice Center of Excellence within your firm to build and share the expertise and experience gathered around Generative AI. The CoE can help maximize the firm's AI potential through centralized strategy and governance: bringing together the right people, clarifying goals, owning the use cases, executing the implementation, and measuring progress.
Content-Centric Actions
As organizations experiment with AI interfaces over organizational content, results often fall short of what users expect. This is primarily due to one thing: the content is not yet ready.
This lack of data readiness stems from an overall — and often historic — lack of data governance. Data is usually over-shared, untagged, lacking consistent version control, and abundantly duplicated and outdated.
Another practical issue is that the content may not be available in a form or system that AI can use. Microsoft 365 Copilot, for example, requires high-quality content to be available in SharePoint Online so that it can be indexed into the Microsoft Graph and surfaced through the Semantic Index.
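As a quick sanity check of whether a given piece of content is actually indexed and discoverable, the sketch below queries the Microsoft Graph search endpoint. It assumes you already have a valid access token with the appropriate Search permissions (acquiring one is out of scope here), and the example query string is just a placeholder.

```python
# Minimal sketch: check that SharePoint/OneDrive content is indexed and
# discoverable via the Microsoft Graph search API. Token acquisition is
# assumed to be handled elsewhere; the query string is a placeholder.
import requests

GRAPH_SEARCH_URL = "https://graph.microsoft.com/v1.0/search/query"

def find_indexed_items(access_token: str, query: str) -> list[dict]:
    payload = {
        "requests": [
            {
                "entityTypes": ["driveItem", "listItem"],
                "query": {"queryString": query},
                "size": 10,
            }
        ]
    }
    resp = requests.post(
        GRAPH_SEARCH_URL,
        json=payload,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    hits: list[dict] = []
    for response in resp.json().get("value", []):
        for container in response.get("hitsContainers", []):
            hits.extend(container.get("hits", []))
    return hits

# Example: find_indexed_items(token, "engagement letter template")
```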
The challenges of AI are familiar to anyone who has worked with enterprise search solutions. In fact, AI requires many of the same hygiene elements that make a search solution reliable and efficient.
In this context we speak of an AI Sweet Spot. This is “good” content that can be used as a “grounding” for AI during the prompting process.
At one end of the scale, content is a binary object, a file, with a small amount of metadata (such as a filename and date) that provides little context beyond the actual document content.
A file can also be tagged with legal subject areas, related sectors, and jurisdictions. Tagging turns it into content that can be applied as knowledge in the appropriate context, and knowledge can then be linked to create expertise. The other end of the scale, where knowledge exists and is linked as expertise, is where AI thrives.
The underlying technology architecture means that the more we know about the content, the more relevant content we can identify for a user's request. This is because the user prompt is broken into "sub-prompts", some of which are simply sub-searches happening behind the scenes; it is one more way in which the challenges of AI and enterprise search are very similar.
But arguably, when — and if — you consider using Generative AI to draft an overview of relevant specialisms for a proposal, article or summary, for instance, the challenges and consequences go deeper.
For example, setting system prompts aside, consider the following user prompt: "For an offer to Bank of Laska please provide a summary of our relevant specialisms, each with a heading and a description of 100 words, based upon other bids to the same or similar customers."
A human with experience would do a few searches, and then come up with the answer confidently.
What will Gen AI do? First, it must be assumed that it can access your data across all platforms. Second, it will need to be able to identify similar clients based on content or metadata. Third, it must search for the most relevant documents from which to extract content. And so on.
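To make those steps concrete, here is a minimal, purely illustrative sketch of how a Generative AI service might decompose that prompt into sub-searches over tagged content before generating a grounded answer. The helper functions (`search_documents`, `generate`), the tag names, and the filters are all hypothetical placeholders for whatever retrieval and LLM services your platform actually provides.

```python
# Illustrative sketch only: how a prompt might be decomposed into sub-searches
# over tagged content ("retrieval-augmented generation"). All helpers are
# hypothetical placeholders for your actual search and LLM services.

def answer_bid_prompt(client_name: str, search_documents, generate) -> str:
    # Step 1: find clients similar to the target, based on metadata tags.
    similar_clients = search_documents(
        query=client_name,
        filters={"content_type": "bid", "sector": "banking"},  # assumed tags
        return_field="client",
    )

    # Step 2: retrieve the most relevant previous bids for those clients.
    relevant_bids = search_documents(
        query="relevant specialisms",
        filters={"content_type": "bid", "client": similar_clients},
        top=5,  # keep the grounding set small to control cost
    )

    # Step 3: ground the generation in the retrieved content.
    grounding = "\n\n".join(doc["summary"] for doc in relevant_bids)
    prompt = (
        f"Using only the material below, summarise our relevant specialisms "
        f"for an offer to {client_name}. Give each a heading and a ~100-word "
        f"description.\n\n{grounding}"
    )
    return generate(prompt)
```

Notice how each step depends on content being tagged well enough for the sub-searches to find the right grounding material.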
By examining these three steps, it is easy to identify the areas that need attention when preparing for AI.
- Consolidate your content into a modern content-management system, like Microsoft 365, or connect and streamline the indexing of content from multiple systems (iManage, NetDocuments, etc.).
- Enhance your content by adding metadata to make it more identifiable. This will allow you to infer relationships more confidently.
- Remove redundant and outdated content and, *at minimum*, adopt proper file-level version control rather than "named versions" (see the short sketch after this list).
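As a simple illustration of the last two points, the sketch below shows, with hypothetical field names, what "enhanced" content might look like once tagged, and how redundant copies could be collapsed so that only the latest version of each identical document survives.

```python
# Illustrative sketch: tagged documents, plus naive de-duplication that keeps
# only the latest version of each identical body of content. Field names are assumed.
import hashlib
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Document:
    path: str
    body: str
    modified: date
    subject_areas: list[str] = field(default_factory=list)  # e.g. ["Banking litigation"]
    sectors: list[str] = field(default_factory=list)        # e.g. ["Financial services"]
    jurisdictions: list[str] = field(default_factory=list)  # e.g. ["England & Wales"]

    def content_hash(self) -> str:
        return hashlib.sha256(self.body.encode("utf-8")).hexdigest()

def deduplicate(docs: list[Document]) -> list[Document]:
    """Keep only the most recently modified document for each identical body."""
    latest: dict[str, Document] = {}
    for doc in docs:
        key = doc.content_hash()
        if key not in latest or doc.modified > latest[key].modified:
            latest[key] = doc
    return list(latest.values())
```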
This takes care of a large part of the problem, but some interesting, more esoteric issues remain. Which of 100 previous tender responses should be examined first? (Feeding them all in is not an option, as it has huge cost implications.)
This is the next step of content preparation. It could include:
- Identifying and defining the “right” rules and types of content to be made available to Gen AI.
- Checking for accuracy, completeness and reliability.
- Cleaning and preparing content repositories accordingly, including checking permissions.
It is not surprising that CIOs, CISOs, and CDOs see these challenges as a mountainous task, given that most organizations hold hundreds of gigabytes of content per employee. For some, this mountain may simply be too high.
As an alternative, AI could be given access only to the highest-graded, gold-standard content. The organizations with the greatest foresight and ambition will implement a knowledge platform that allows them to create, maintain, and manage (including dispose of) this higher-graded content more effectively and dynamically.
Your high-priority use cases will determine whether *all content* should be in scope or whether you can prepare and make smaller sets of content available for specific use cases and tasks.
Atlas has developed an innovative concept that supports this. The concept, called "knowledge collections," allows organizations to efficiently manage their content for specific tasks and use cases (a simplified sketch follows).
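To illustrate the general idea only (this is not Atlas's actual implementation or API), a knowledge collection can be thought of as a saved, dynamic filter over graded, tagged content: each time AI needs grounding for a use case, the collection is re-evaluated so it always reflects the current, approved content. The class names and fields below are hypothetical.

```python
# Illustrative sketch only: a "knowledge collection" as a saved, dynamic filter
# over graded, tagged content. Names and fields are hypothetical, not Atlas's API.
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    title: str
    grade: str            # e.g. "gold", "silver", "draft"
    subject_area: str     # e.g. "Restructuring"
    jurisdiction: str     # e.g. "England & Wales"

@dataclass
class KnowledgeCollection:
    name: str
    min_grade: str
    subject_areas: set[str]
    jurisdictions: set[str]

    def select(self, items: list[KnowledgeItem]) -> list[KnowledgeItem]:
        """Re-evaluate the collection: return only items that currently match."""
        grades = {"draft": 0, "silver": 1, "gold": 2}
        return [
            i for i in items
            if grades[i.grade] >= grades[self.min_grade]
            and i.subject_area in self.subject_areas
            and i.jurisdiction in self.jurisdictions
        ]

# Usage: the grounding set for a bid-response use case is always up to date.
bids_collection = KnowledgeCollection(
    name="Banking bids",
    min_grade="gold",
    subject_areas={"Banking & finance"},
    jurisdictions={"England & Wales"},
)
```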
How an Intelligent Knowledge Platform Accelerates the Journey
To make AI sing your tune, you need to do a lot of work.
A Knowledge Platform equipped with the capabilities to prepare, utilize, and deliver on use cases supported by Generative AI will accelerate the journey. This platform, also called an Intelligent Knowledge Platform (IKP), should be viewed as an orchestration tool for leveraging AI capabilities while delivering the advanced features and hygiene requirements that drive knowledge-centric collaboration, productivity, and communication.
Atlas – an Intelligent Knowledge Platform – accelerates the journey in several ways, including those in the table below:
| Intelligent Knowledge Platform characteristic | How it accelerates the AI journey | Other benefits |
| --- | --- | --- |
| Automated tagging across at least five categories of tags | Generative AI produces better outcomes when content is richly described and tagged. | Comprehensive tagging provides the foundation for successful search scenarios, including enhanced usability and contextual search results. |
| Consistent, scalable, and uniform content governance controls | Consistency is essential if AI-generated outcomes are to be trusted, so managing content across thousands of sites, libraries, Teams, channels, etc. is a necessity. | Reduces the risk of data leakage; improves the ability to auto-label content and to set "just right" permissions. |
| Automated application of security or sensitivity labels | Areas of content, or individual items, that carry labels can be excluded from various scenarios and views, including Microsoft 365 Copilot usage. | Labels are useful for excluding content and setting ethical walls, and they assist content lifecycle management by allowing policies to be applied. |
| Collaborative knowledge base for subject matter experts, with ring-fenced management and content grading | Gives knowledge managers and subject matter experts complete control over the management of their knowledge and its grading. | Helps drive a firm-wide knowledge agenda by enabling delegated and decentralized knowledge management and creation, underpinned by global governance controls. |
| Dynamically updated knowledge collections | Authorized knowledge owners can define dynamic collections of knowledge that AI can consume, and business users can combine them into their own GPT data sets. | Reduces reliance on IT operations and the overall cost of creating, running, and maintaining RAG vectorizations. |
| AI Assistant user interface | Users can work with Generative AI in a safe, controlled, and governed environment. | Visibility and control of usage costs; controlled access by group or user; logging of prompts (and optionally responses) plus additional metrics; custom Terms of Service policies; shadow AI eliminated. |
Gabriel Karawani is a Co-Founder of ClearPeople, the company behind Atlas – the Intelligent Knowledge Platform. Gabriel has a wealth of practical experience helping businesses get the most out of Microsoft 365 in order to enhance their knowledge architecture and management. His knowledge of information architecture for AI within the enterprise is a valuable asset to the industry, and he has a strong background in engineering with a focus on real-world applications.