One of the common reasons AI/ML projects fail is unrealistic expectations or insufficient business support. The root cause of these issues lies in a lack of understanding of AI/ML capabilities. As one of the most hyped topics of recent years, AI attracts a constant stream of predictions, opinions, and overviews in the media and on the internet. The views range from one extreme, where AI is going to replace humans entirely and all of us can happily retire, to the other, where the technology is dismissed as an empty trend. Hence the first step in a successful AI project is to educate business stakeholders on the potential AI has, and to be upfront about its limitations too. I have found that the less technical the explanation, the easier it is to understand, especially when it is complemented with industry-specific examples and use cases. Be practical!
To significantly increase the success chances of any innovation project, we need to start with a well-defined business case. The business case should include a clear definition of the problem we are solving, who we are solving it for, what the success criteria are, and how long it will take. The clearer the better: qualify and quantify. It should be no surprise that projects with the highest impact and shortest implementation time find it easier to get support and funding. After all, everyone wants to be part of a success story.
It is very important to understand the technology's limitations and the risks associated with it. During the webinar I covered three main ones:
The quality of AI decisions is directly correlated with the volume, quality, and variety of data. "Garbage in, garbage out" is very applicable to AI. When deciding on an AI business case, it is crucial to evaluate the availability and quality of the required data, and to consider how difficult it would be to integrate that data into the analytics pipeline.
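A minimal sketch of what such an evaluation can look like in practice, using pandas. The dataset, column names, and quality signals below are illustrative assumptions, not something from the webinar:

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise basic data-quality signals before committing to a use case."""
    return {
        "rows": len(df),
        # share of missing values per column
        "missing_ratio": {c: float(df[c].isna().mean()) for c in df.columns},
        # exact duplicate rows inflate apparent data volume
        "duplicate_rows": int(df.duplicated().sum()),
        # columns with a single value carry no signal for a model
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

# Hypothetical sensor extract exhibiting typical quality problems
df = pd.DataFrame({
    "sensor_id": [1, 1, 2, 2, 3],
    "reading": [0.5, 0.5, None, 1.2, 0.9],
    "unit": ["C", "C", "C", "C", "C"],
})
report = data_quality_report(df)
```

A report like this makes the "garbage in" conversation concrete: a column that is mostly missing or constant is a risk that should be surfaced before the business case is approved, not after the model underperforms.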
The quality of models can be significantly improved by providing additional data points from external data sources such as IoT devices, mobile sensors, video analytics, social media, and others. Since this data resides outside the organisation, it is important to consider how it will be integrated and governed.
Access to talent is one of the main obstacles to AI adoption. Pragmatic organisations can leverage a wide ecosystem to improve their ROI and delivery speed. I covered five different areas for sourcing relevant resources:
Data preparation and feature engineering are the most time-consuming tasks, and they are mostly performed manually. Organisations should consider using tools to automate them and increase efficiency. In this blog you can find how Hitachi data scientists reduced the time required from weeks to just hours: Pentaho Machine Learning and Operationalizing Machine Learning
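To illustrate what automating these manual steps can look like, here is a sketch using scikit-learn pipelines; the column names and toy data are assumptions for the example, not the tooling described in the linked blog:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative schema; substitute your own columns.
numeric_cols = ["age", "balance"]
categorical_cols = ["segment"]

# Encodes the manual preparation steps (imputation, scaling, encoding)
# as a single reusable, re-runnable transformer.
preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric_cols),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical_cols),
])

df = pd.DataFrame({
    "age": [25, None, 40],
    "balance": [100.0, 200.0, None],
    "segment": ["retail", "corporate", "retail"],
})
features = preprocess.fit_transform(df)
```

The point of packaging preparation as a pipeline is repeatability: the same transformations run identically in development and in production, which is what turns weeks of manual rework into hours.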
Model accuracy diminishes as the world around it changes and is no longer represented by the old training data. That is why models need to be re-trained often to maintain the required accuracy levels; many fraud detection models are updated daily. We have created tools that automate and operationalise this process with very little manual intervention. I would recommend reading this blog to understand more: Machine Learning Model Management
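The retraining loop described above can be sketched as a simple accuracy-triggered check. The threshold, model, and synthetic "world shift" below are illustrative assumptions, not the Hitachi tooling itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.9  # illustrative threshold; tune per use case

def maybe_retrain(model, X_recent, y_recent):
    """Re-fit the model when accuracy on fresh labelled data drops below the floor."""
    score = accuracy_score(y_recent, model.predict(X_recent))
    if score < ACCURACY_FLOOR:
        # In practice, retrain on the full refreshed dataset, not just this batch.
        model.fit(X_recent, y_recent)
        return model, True
    return model, False

rng = np.random.default_rng(0)

# Old world: label is 1 when the feature exceeds 0
X_old = rng.normal(size=(200, 1))
y_old = (X_old[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_old, y_old)

# The world changed: the decision boundary shifted to 1.0,
# so the old model's accuracy on fresh data degrades.
X_new = rng.normal(size=(200, 1))
y_new = (X_new[:, 0] > 1.0).astype(int)

model, retrained = maybe_retrain(model, X_new, y_new)
```

A production system would typically add scheduling, data validation, and model versioning around this check, but the core pattern is the same: measure on fresh labelled data, retrain when accuracy drops.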
If you are interested in listening to the webinar recording, it is available here