Article by: Asst. Prof. Suwan Juntiwasarakij, Ph.D., Senior Editor, MEGA Tech
Artificial intelligence (AI) is gradually being implemented in almost every aspect of our lives. Its applications span medicine, geology, customer data analysis, autonomous vehicles, and even art, and its uses are constantly evolving. For manufacturers, it will be a game-changer at every level of the value chain. Direct automation, predictive maintenance, reduced downtime, 24/7 production, improved safety, lower operational costs, greater efficiency, quality control, and faster decision-making are just some of the rewards for organizations that embrace the transformation and master the implementation of AI throughout their entire business.

Implementing AI in manufacturing facilities is gaining popularity among manufacturers. According to Capgemini’s research, more than half of European manufacturers (51%) are implementing AI solutions, with Japan (30%) and the US (28%) following in second and third place. The same study also reveals that the most popular AI use cases in manufacturing are improving maintenance (29%) and quality (27%). This popularity is driven by manufacturing data being a good fit for AI and machine learning. Manufacturing is full of analytical data, which is easier for machines to analyze, and hundreds of variables impact the production process. While these interactions are very hard for humans to study, machine learning models can readily predict the impact of individual variables in such complex situations. In industries involving language or emotions, by contrast, machines still operate below human capabilities, slowing down their adoption.


According to PwC, the economic benefits of AI will come primarily from two sources: productivity gains, as businesses automate processes and augment the work of their existing labor force with different types of AI technologies, and increased consumer demand, resulting from the availability of personalized and higher-quality digital and AI-enhanced products and services. While the technology has been implemented throughout the critical parts of the business, companies have focused slightly more on adding AI solutions to their core production processes: product development, engineering, assembly, and quality testing. Manufacturing is one of the most critical sectors in the world’s economy. It accounted for 17% of global GDP in 2021 and generated an output of $16.5 trillion globally.

The global smart manufacturing market was valued at USD 97.6 billion in 2022 and is projected to reach USD 228.3 billion by 2027, growing at a CAGR of 18.5% over that period. Predictive maintenance is one of the core tenets of machine learning’s role in manufacturing. PwC reported that predictive maintenance will be one of the fastest-growing machine learning technologies in manufacturing, increasing 38% in market value from 2020 to 2025. Unscheduled maintenance has the potential to cut deeply into a business’s bottom line. Knowing well ahead of time when an asset will fail avoids unplanned downtime and wasted investment. According to Deloitte, predictive maintenance increases productivity by 25%, reduces breakdowns by 70%, lowers maintenance costs by 25%, and increases equipment uptime by 10 to 20%. Overall, maintenance costs are reduced by 5 to 10%, and maintenance planning time by 20 to 50%.
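As a quick sanity check, the quoted 2022 baseline and 18.5% CAGR do reproduce the 2027 projection. The short sketch below is purely illustrative arithmetic and is not taken from the cited reports.

```python
# Illustrative check: USD 97.6 billion in 2022 compounded at 18.5% per year for five years.
value_2022 = 97.6          # USD billion (2022 market size)
cagr = 0.185               # 18.5% compound annual growth rate
years = 2027 - 2022
value_2027 = value_2022 * (1 + cagr) ** years
print(f"Implied 2027 market size: USD {value_2027:.1f} billion")  # roughly USD 228 billion
```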

Having decided to implement predictive maintenance, businesses begin a journey with new learnings and insights waiting along the way. Initially, there is no certainty as to which level of failure prediction can be reached, but each step yields better results, leading to reduced downtime and increased productivity. At step 0, understanding the process through data is the primary objective. At step 1, we need a consistent sensor data stream into an integrated platform; with the help of powerful visualization, experts can identify which parameters indicate imminent failure. At step 2, expert insights about the process and its parameters become vital: with their help, we can derive simple rules, and applying these rules alone may already prevent a large proportion of failures, as sketched below.
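For illustration, a rule-based check of this kind can be as simple as comparing incoming sensor readings against expert-defined limits. The sketch below assumes hypothetical sensor names and thresholds; they are not taken from any specific plant, standard, or vendor platform.

```python
# A minimal sketch of the "simple rules" step. Sensor names and limits below are
# illustrative assumptions standing in for values supplied by process experts.

SENSOR_RULES = {
    "bearing_temp_c": {"max": 85.0},                       # expert-defined upper limit
    "vibration_mm_s": {"max": 7.1},                        # vibration ceiling
    "hydraulic_pressure_bar": {"min": 90.0, "max": 160.0}, # acceptable operating band
}

def check_reading(reading: dict) -> list[str]:
    """Return a list of rule violations for one sensor reading."""
    alerts = []
    for sensor, limits in SENSOR_RULES.items():
        value = reading.get(sensor)
        if value is None:
            continue  # this sensor is not present in the reading
        if "max" in limits and value > limits["max"]:
            alerts.append(f"{sensor}={value} above {limits['max']}")
        if "min" in limits and value < limits["min"]:
            alerts.append(f"{sensor}={value} below {limits['min']}")
    return alerts

# Example: one reading arriving from the integrated data platform
print(check_reading({"bearing_temp_c": 92.3, "vibration_mm_s": 4.0}))
```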

However, anomaly detection requires sufficient accumulated sensor data, with a minimum frequency of measurements per time unit and sensor. This data allows us to define the norm for any given process. Deviations from the norm can then trigger alerts to the operators, who must decide whether an actual failure has occurred; a minimal example of such a check is sketched below. To move beyond reacting to failures and start preventing them, we need many recorded failures with detailed log data. Applying advanced analytics to this history, failures can then be detected immediately and reliably. Root cause analysis, finally, requires recordings of successful and unsuccessful approaches to resolving failures. The causes of losses can then be narrowed down, and appropriate corrective actions proposed.
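In practice, one basic way to "define the norm" is to compare each new reading against the recent history of the same sensor, for example with a rolling z-score. The sketch below assumes a single numeric sensor stream; the window size and threshold are illustrative choices, not prescriptions from the article.

```python
# A minimal anomaly-detection sketch: flag readings that deviate strongly
# from the recent norm of one sensor. Window and threshold are assumptions.

from collections import deque
from statistics import mean, stdev

class ZScoreDetector:
    """Flags readings that deviate strongly from the recent 'norm'."""

    def __init__(self, window: int = 200, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold           # how many standard deviations count as abnormal

    def update(self, value: float) -> bool:
        """Return True if the new value looks anomalous relative to recent history."""
        is_anomaly = False
        if len(self.history) >= 30:  # need enough data to define the norm
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# Example: a stable stream followed by one outlier that should raise an alert
detector = ZScoreDetector()
for t, reading in enumerate([20.1, 20.3, 19.9] * 20 + [35.0]):
    if detector.update(reading):
        print(f"t={t}: reading {reading} deviates from the norm -> alert the operator")
```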