Good governance is essential for enterprises implementing AI


Laurel: That's fantastic. Thanks for that detailed explanation. Since you specialize in governance yourself, how can companies balance putting safeguards around AI/ML implementation while still encouraging innovation?

Stephanie: So balancing safeguards for AI/ML implementation with encouraging innovation can be a really challenging task for companies. It's happening on a massive scale and it's changing very fast. But getting that balance right is critically important; otherwise, what's the point of the innovation? There are a few key strategies that can help achieve this balance. Number one, establish clear governance policies and procedures: review and update existing policies where they may not be suitable for AI/ML development and implementation, and add the new policies and procedures that are needed, such as ongoing monitoring and compliance, as I mentioned previously. Second, involve all stakeholders in the AI/ML development process. That starts with the data engineers, the business, the data scientists, even the ML engineers who deploy the models into production, the model reviewers, and the corporate stakeholders in the risk organizations. And that's what we're focusing on: we are building integrated systems that provide transparency, automation, and a good user experience from start to finish.

So all of this helps streamline the process and bring everyone together. Third, we need to build systems that not only enable this overall workflow, but also capture the data that enables automation. Often, many of the activities that take place in the ML lifecycle are performed through different tools because they reside in different groups and departments, and that involves participants manually sharing information, reviewing, and signing off. So having an integrated system is essential. Fourth, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is very important, because if we don't monitor a model, it can actually work against its original intent. And doing it manually will stifle innovation. Model deployment requires automation, so having that is key to getting your models developed and deployed into the production environment and actually operational: reproducible, and working in production.

It's very, very important. And have well-defined metrics to monitor the models; this covers the performance of the infrastructure, the model itself, and the data. Finally, provide training and education. This is a team sport, and everyone comes from a different background and plays a different role, so a cross-functional understanding of the whole lifecycle process is really important. And having the education to understand what the right data is to use, and whether we are using the data correctly for our use cases, will prevent a model's deployment from being rejected much later. So I think all of these are key to balancing governance and innovation.
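As an illustration of the kind of well-defined monitoring metrics Stephanie describes, here is a minimal sketch in Python. The specific checks (a population stability index for data drift and an accuracy floor for model performance) and the threshold values are illustrative assumptions, not details of JPMorgan Chase's actual system:

```python
# Illustrative sketch of automated model monitoring against
# well-defined thresholds (all thresholds here are hypothetical).
import math

def population_stability_index(expected, actual):
    """PSI between two bucketed distributions (lists of proportions).
    Higher values indicate the live data has drifted from the baseline."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) on empty buckets
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

def monitoring_report(baseline_dist, live_dist, accuracy,
                      psi_threshold=0.2, accuracy_floor=0.90):
    """Return a list of alerts when drift or performance metrics
    breach their thresholds; an empty list means the model is healthy."""
    alerts = []
    psi = population_stability_index(baseline_dist, live_dist)
    if psi > psi_threshold:
        alerts.append(f"data drift: PSI {psi:.3f} > {psi_threshold}")
    if accuracy < accuracy_floor:
        alerts.append(f"performance: accuracy {accuracy:.2f} < {accuracy_floor}")
    return alerts
```

In an automated pipeline, a non-empty report would trigger review or retraining rather than a manual sign-off, which is the kind of automation the interview points to.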

Laurel: So there's another topic to discuss here, which you touched on in your answer: how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle, from creation to governance to implementation?

Stephanie: Sure. AI/ML is still quite new and still evolving, but in general, people have settled into a high-level process flow: defining the business problem, capturing the data and processing the data to solve the problem, then building the model, which is model development, and then model deployment. But before deployment, we carry out a review in our company to ensure that models are developed according to the right principles of responsible AI, and then there is continuous monitoring. When people talk about the role of transparency, it's about the ability to capture all the metadata artifacts throughout the lifecycle, all the lifecycle events. All of this metadata needs to be timestamped and transparent so that people can know what happened, and that's how we share information. Having this transparency is so important because it builds trust and it ensures fairness. We need to make sure that the correct data is used, and transparency facilitates explainability.

There's this aspect of models that needs explaining: how does a model make its decisions? Transparency also helps support continuous monitoring, and it can be done in a number of ways. One thing we stress a lot early on is understanding the goals of the AI initiative, the goal of the use case, and the intended use of the data. We examine it. How was the data processed? What is the data lineage and transformation process? What algorithms are used, and what ensemble of algorithms is used? The specifics of the model must be documented and explained: what is the boundary of when the model should be used and when it shouldn't? Explainability and verifiability: can we actually trace how this model was produced, all the way through the lineage of the model itself? There are also the specifics of the technology, such as the infrastructure and the containers it runs in, how they affect the performance of the model, where it is deployed, which business application is consuming the output predictions from the model, and who has access to decisions from the model. All of these are part of the transparency theme.
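The timestamped lifecycle metadata Stephanie describes can be sketched as a simple append-only event log. The class, stage names, and fields below are hypothetical illustrations, not the API of any real governance platform:

```python
# Illustrative sketch: capturing timestamped lifecycle events so each
# stage of the ML lifecycle is transparent and auditable after the fact.
import json
from datetime import datetime, timezone

class LifecycleLog:
    """Append-only log of timestamped events for one model."""

    def __init__(self, model_id):
        self.model_id = model_id
        self.events = []

    def record(self, stage, **details):
        # Hypothetical stages: problem_definition, data_capture,
        # model_development, review, deployment, monitoring.
        self.events.append({
            "model_id": self.model_id,
            "stage": stage,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "details": details,
        })

    def export(self):
        """Serialize the full audit trail for reviewers and regulators."""
        return json.dumps(self.events, indent=2)

# Example usage with made-up identifiers:
log = LifecycleLog("example-model-v1")
log.record("data_capture", source="example_dataset", lineage="raw->cleaned")
log.record("review", approved=True, reviewer="model-review-team")
```

Because every event carries a timestamp and structured details, the same log can answer the transparency questions above: what data was used, who reviewed the model, and when each step happened.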

Laurel: Yes, it's quite extensive. So, considering AI is a rapidly evolving field with many emerging technologies like generative AI, how do JPMorgan Chase teams keep pace with these new inventions while also choosing when and where to implement them?
