The following are the main findings of the report:
Companies buy into AI/ML but struggle to scale it across the organization. The vast majority (93%) of respondents have several AI/ML projects in experimental or active use, with larger companies likely to have more in deployment. A majority (82%) say machine learning investments will increase over the next 18 months, and they closely tie AI and machine learning to revenue goals. However, scalability is a major challenge, as are hiring skilled workers, finding appropriate use cases, and demonstrating value.
Successful implementation requires a talent and expertise strategy. The challenge goes beyond attracting data scientists: enterprises need hybrid talent and translators to lead AI/ML design, testing, and governance, along with a workforce strategy that ensures all users play a role in technology development. To stay competitive, companies should offer these workers clear opportunities for advancement and impact. For the broader workforce, upskilling and engagement are key to supporting AI/ML innovation.
Centers of Excellence (CoEs) provide a foundation for broad implementation, balancing technology sharing with tailored solutions. Firms with mature capabilities, typically larger companies, tend to develop systems in-house. A CoE operates as a hub-and-spoke model, with a core ML group consulting across divisions to develop broadly implementable solutions alongside tailored tools. ML teams should be incentivized to stay abreast of rapidly evolving AI/ML and data science developments.
AI/ML governance requires robust model operations, including transparency and data provenance, regulatory foresight, and accountable AI. The interaction of multiple automated systems can amplify the risks of advanced data science tools, including cybersecurity vulnerabilities, illegal discrimination, and macro-level volatility. Regulators and civil society groups are scrutinizing AI's effects on citizens and governments, with a particular focus on systemically important areas. Businesses need a responsible AI strategy based on comprehensive data provenance, risk assessment, and audits and controls. This requires technical interventions, such as automated reporting of an AI/ML model's errors or risks, as well as social, cultural, and other reforms.
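To make the "automated reporting of errors or risks" intervention concrete, here is a minimal sketch of one possible monitoring wrapper. It is illustrative only, not a method from the report: the class name, thresholds, and the simple one-dimensional drift check are all assumptions, and production systems would use far richer drift and fairness metrics.

```python
import statistics

class MonitoredModel:
    """Hypothetical wrapper illustrating automated risk reporting:
    it flags inputs far outside the training distribution (drift)
    and predictions made with low confidence."""

    def __init__(self, predict_fn, train_inputs, confidence_floor=0.6):
        # predict_fn is assumed to return a (label, confidence) pair.
        self.predict_fn = predict_fn
        self.mean = statistics.mean(train_inputs)
        self.stdev = statistics.stdev(train_inputs)
        self.confidence_floor = confidence_floor
        self.alerts = []  # accumulated risk reports

    def predict(self, x):
        # Drift check: input lies more than 3 standard deviations
        # from the training mean.
        if abs(x - self.mean) > 3 * self.stdev:
            self.alerts.append(f"input drift: {x}")
        label, conf = self.predict_fn(x)
        # Confidence check: report predictions below the floor.
        if conf < self.confidence_floor:
            self.alerts.append(f"low confidence {conf:.2f} for input {x}")
        return label

# Usage with a toy stand-in model trained on inputs 1..9.
model = MonitoredModel(
    predict_fn=lambda x: ("high" if x > 5 else "low",
                          0.5 if x == 5 else 0.9),
    train_inputs=list(range(1, 10)),
)
model.predict(100)  # far outside training range -> drift alert
model.predict(5)    # ambiguous input -> low-confidence alert
```

Accumulated alerts like these could feed the audit-and-control processes the finding describes, giving reviewers a trail of when and why a model behaved unexpectedly.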