Build an AI strategy that survives first contact with reality


For one of our customers, one of the world’s leading manufacturers of snack foods, AI supports recipe creation, a historically complicated task given the dozens of possible ingredients and ways to combine them. By pairing product specialists with AI, the organization can generate higher-quality recipes faster. Its system has reduced the number of steps required to develop recipes for new products from 150 (on average) to just 15. Now it can delight customers more quickly with new products and new experiences that keep them connected to the brand.

Notably, AI doesn’t work in isolation; it augments skilled teams, providing guidance and feedback that further improve outcomes. This is a hallmark of successful AI solutions: they are ultimately designed for people, and a multi-disciplinary team that combines domain and technical expertise with a human focus is what enables organizations to derive maximum value from them.

Guardrails matter

When thinking about how to get the most out of AI, your AI strategy should also consider appropriate guardrails.

As solutions become more sophisticated and are integrated more frequently and deeply into software, products, and day-to-day operations, the potential for human error also increases. A common antipattern we see is people unintentionally becoming over-reliant on a fairly reliable AI: think of the developer who doesn’t review the AI-generated code, or the Tesla driver lulled into a false sense of security by the car’s autopilot capabilities.

Sound governance of how AI is used, backed by accurate metrics, is needed to avoid that kind of over-reliance and risk exposure.

While many of your AI experiments might yield exciting ideas to explore, you need to be aware of the tools that underpin them. Some AI solutions aren’t built following the kind of sound engineering practices you require for other business software. Think carefully about which ones would be safe to deploy in production.

It helps to test AI models the same way you would any other application, and not to let the rush to market cloud your judgement. AI solutions should be underpinned by the same principles of continuous delivery that underpin good product development, with progress made through incremental changes that can easily be reversed if they don’t have the desired impact.
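As a concrete illustration of treating a model like any other tested artifact, the sketch below gates deployment on an automated quality check. All names here (the toy model, the evaluation set, the 0.7 threshold) are illustrative assumptions, not a prescription; the point is simply that model quality can be asserted in a test suite just like application behavior.

```python
# Minimal sketch: test an AI model the way you would any other application.
# The model, evaluation set, and threshold below are illustrative assumptions.

def accuracy(model, examples):
    """Fraction of (text, label) examples the model classifies correctly."""
    correct = sum(1 for text, label in examples if model(text) == label)
    return correct / len(examples)

def toy_model(text):
    """Stand-in classifier; a real system would load a trained model."""
    return "positive" if "good" in text else "negative"

EVAL_SET = [
    ("good product", "positive"),
    ("bad service", "negative"),
    ("really good value", "positive"),
    ("terrible, despite the hype", "positive"),  # hard case the toy model misses
]

def test_model_meets_quality_bar():
    # Gate the release on a fixed quality threshold, just like any other
    # automated check in a continuous delivery pipeline.
    assert accuracy(toy_model, EVAL_SET) >= 0.7
```

Run under a test runner such as pytest, a failing assertion blocks the change from progressing, which makes a regressing model update easy to reverse before it reaches users.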

You’ll find that it helps to be upfront about what you consider a “desired” outcome—it may not be just financial metrics that define your success. Depending on the context of your organization, productivity and customer experience may also be important considerations. You might look at other leading indicators, such as your team’s awareness of the potential of AI and their level of comfort in exploring, adopting, or implementing AI solutions. These factors can give you confidence that your team is on track to improve lagging indicators such as customer experience, productivity, and revenue. However you approach it, you’re more likely to be successful if you identify these metrics early on.

Finally, for all the bravado about the threat AI poses to people’s jobs, or even humanity at large, you’d do well to remember that it is your people who will be using the technology. Consider the human side of the change: strike a balance between encouraging people to adopt and innovate with AI while remaining sensitive to the problems it can present. For example, you may want to introduce guidelines to protect intellectual property when models draw on external sources, or privacy when sensitive customer data is involved. We often find it best to give our people a voice on where AI enhances their work. They know, better than anyone, where it can have the greatest impact.

This content was produced by Thoughtworks. It was not written by the editorial staff of MIT Technology Review.
