Book Introduction: Industrial AI Applications with Sustainable Performance
What Industrial AI Killer Applications Look Like
In some of my previous posts I presented industrial applications of AI and methods for assessing AI proposals. In this post I want to introduce a book published in 2020, titled “Industrial AI: Applications with Sustainable Performance”. Before terms like "Industry 4.0" or "smart factories" became keywords in the digital transformation literature, the author of this book, Professor Jay Lee, had already spent decades building the foundations of what we now call Industrial AI as an operating discipline. As the founding director of the NSF Industry/University Cooperative Research Center on Intelligent Maintenance Systems (IMS), Jay Lee helped design and deploy predictive maintenance systems. His work at IMS and later at the Industrial AI Center has served more than 100 companies across sectors such as aerospace, energy, and semiconductor manufacturing.
What sets Lee apart is his persistent philosophy that Industrial AI must be engineered as an integrated system: success depends on how AI connects to the computerized maintenance management system (CMMS), human operators, scheduling routines, supply chains, and feedback loops. That perspective is what brought him to the attention of Foxconn, the world's largest contract electronics manufacturer, best known for assembling iPhones but also deeply embedded in the supply chains of HP, Sony, Dell, and Cisco.
In 2019, Lee took a leave from the University of Cincinnati to become Vice Chairman of Foxconn, leading initiatives at their Wisconsin campus to build a Smart Manufacturing Science Park, including a 5G-enabled Industrial AI Institute. During these years Foxconn earned five World Economic Forum (WEF) Lighthouse Factory awards.
This book starts with a foreword by Terry Gou, Foxconn’s founder and former chairman, which distills why Lee’s approach matters. Gou explains that most of what passes for AI in manufacturing is either too abstract or too shallow, and that what Jay Lee brings is a systems-centric engineering method, deeply embedded in real plant constraints and scalable across geographies.
Chapter 1 of the book frames AI as a transformative general-purpose technology, emphasizes the shift from traditional automation to intelligent, self-optimizing systems, and argues for a systems-level view of AI adoption in industry, beyond algorithmic thinking. Chapter 2 distinguishes Industrial AI from general AI, outlines the core challenges of data scarcity, reproducibility, reliability, and safety/security, and stresses that Industrial AI must solve actual industrial problems, not just optimize pre-existing data patterns. Chapter 3 traces the evolution of Industrial AI, defines its purpose, and frames it as a convergence of five pillars: data, analytics, platform, operations, and human-machine interaction. This chapter also introduces some of the better-known tools of the time (circa 2020), such as GE Predix, as a case of mixed success.
Chapter 5, the final chapter, covers how to establish Industrial AI technology and capability: it offers a capability maturity framework for organizations and discusses metrics, transformation pathways, and benchmarking via industrial AI competitions.
The main part of the book that I want to talk about is Chapter 4, where the core use cases are profiled. This chapter includes examples like predictive maintenance and large-scale scheduling solvers that either stem from Foxconn’s own transformation or reflect the broader industrial frameworks Lee helped shape.
Chapter 4, titled “Killer Applications of Industrial AI”, profiles problems that matter, implemented under constraints that most AI vendors overlook, and evaluated against outcomes beyond model accuracy, such as uptime, yield, and coordination across physical space.
In Jay Lee’s framework, not every model that runs in production qualifies as a killer application. A killer application, in the industrial context, is defined by both novelty and systemic impact across four core business values:
Cost savings
Efficiency gains
Product and service value enhancement
New business model creation
This chapter also categorizes Industrial AI opportunities using a “4F” scenario model from Foxconn:
Factory = product and process optimization
Field = equipment status and task matching
Fleet = logistics and operations
Facility = continuous energy and environmental control
Within these scenarios, killer apps emerge when previously invisible problems become quantifiable, business goals can be directly abstracted into AI modeling frameworks, the system enables coordinated action across machines, humans, and planning systems, and cross-domain collaboration or integration occurs, creating compound gains across multiple layers of value delivery.
Lee’s emphasis is on what he calls “problem abstraction”: the ability to frame messy plant-floor realities into model-ready structures, aligning sensors, features, constraints, and interfaces in ways that scale without collapsing under variability or noise. By this definition, a killer app is not the application with the best model metrics, but the one with engineered leverage: solving the right problem at the right system node with sufficient reliability to remain embedded.
Case Studies:
Several case studies are presented in this chapter in great detail, built on realistic system boundaries. You see equipment names, dashboard screenshots, data flows, model inputs/outputs, KPIs, fault diagnostic flow charts, and sometimes even architecture diagrams, which is a rare asset in this type of book. The math and the type of machine learning algorithm used in many solutions are provided, but these aren’t pitched as generic ML demos; they are contextualized in the business urgency, which helps the reader understand why these use cases matter now, not in some hypothetical AI future. For some case studies, specific physical attributes are used, the difference between regression and ensemble learning is shown, and real evaluation benchmarks, like the 2016 PHM Data Challenge, are presented. I will now present three representative examples of the case studies in Chapter 4, without the actual diagrams or exact details, for copyright reasons. My goal is to show the nature of the applications and how they can serve as a benchmark for future AI development initiatives.
1) Predictive Maintenance in Foxconn’s Unmanned SMT Lines
The killer application domain in this case study covers Factory + Fleet, and the business value delivered relates to cost savings, process continuity, and yield protection.
This use case, developed at Foxconn and central to their “Lighthouse Factory” model, targets a pain point common across many manufacturing operations: unplanned downtime due to component wear, especially in precision parts like suction nozzles on surface-mount technology (SMT) lines. What elevates this case study from a routine anomaly detection task to a killer application is its system-level integration: predictive analytics feed directly into Foxconn’s ERP and CMMS infrastructure, enabling maintenance to occur within optimal time windows.
The problem is abstracted as a production stability problem, translated into a machine learning framework with cost and throughput explicitly modeled. Multilayered diagnostics, real-time health scoring, and failure forecasting converge into a decision-execution loop, minimizing human error and maximizing line yield. In environments where product margins are thin and downtime costs compound rapidly, this alignment delivers leverage that is hard to replicate with isolated pilots.
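To make this decision-execution loop concrete, here is a minimal sketch of health scoring feeding a maintenance trigger. Everything here is my own illustrative assumption, not the book's implementation: the function names, the vacuum-pressure health proxy, the linear degradation extrapolation, and the thresholds are all invented.

```python
# Hypothetical sketch: health scoring -> failure forecast -> maintenance decision.
# All names, signals, and thresholds are illustrative, not from the book.

def health_index(vacuum_readings, nominal=80.0):
    """Score a nozzle from recent vacuum-pressure samples: 1.0 healthy, 0.0 failed."""
    avg = sum(vacuum_readings) / len(vacuum_readings)
    return max(0.0, min(1.0, avg / nominal))

def cycles_to_threshold(history, threshold=0.6):
    """Linearly extrapolate the health trend to the maintenance threshold."""
    if len(history) < 2:
        return None
    slope = (history[-1] - history[0]) / (len(history) - 1)
    if slope >= 0:
        return None  # not degrading; no forecastable failure
    return int((threshold - history[-1]) / slope)

def maintenance_action(history, planned_window_cycles=50):
    """Emit a CMMS-style work order only if failure is forecast inside the window."""
    remaining = cycles_to_threshold(history)
    if remaining is not None and remaining <= planned_window_cycles:
        return {"action": "schedule_nozzle_swap", "cycles_remaining": remaining}
    return {"action": "continue", "cycles_remaining": remaining}

trend = [1.0, 0.95, 0.9, 0.85, 0.8]  # degrading health over recent cycles
print(maintenance_action(trend))     # forecast falls inside the window -> work order
```

The point of the sketch is the shape of the loop, not the model: the forecast only creates value because its output is an executable maintenance action inside a planned window, which is exactly the system-level integration the case study emphasizes.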
2) Virtual Metrology in Semiconductor CMP
The killer application domain in this case study is chemical mechanical planarization, covering Factory + Field, and the business value delivered is quality control without delay, defect prevention, and yield gain.
Developed originally for the 2016 PHM Society Data Challenge, this use case shows how virtual sensors, in this case, for mean removal rate (MRR) in chemical mechanical planarization, can substitute for costly, slow physical metrology. While the original models were built for competition, the real value lies in how the use case is positioned within a bottleneck context of wafer polishing lines where measurement lag threatens downstream throughput.
What makes this a killer application is its potential to restructure quality control workflows. By using inline process variables (pad condition, slurry pressure, spindle velocity) to anticipate quality drift, the system shortens the reaction time between deviation and correction, a critical shift in high-cost-per-defect industries like semiconductors. Again, this is a 2020 book, and some of these deployments may seem like common practice now, but the AI landscape changes quickly, and this case study was a significant achievement at the time.
This case is a good example of Jay Lee’s criteria: it makes invisible process degradation measurable, links modeling directly to business loss mitigation (scrap rate, downtime), and enables zero-delay feedback within existing control hierarchies.
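As an illustration of the virtual-sensor idea, the sketch below fits a simple one-variable regression from an inline process signal (a pad-wear proxy) to MRR and flags drift before a wafer leaves spec. The book's actual models, benchmarked on the 2016 PHM Challenge data, were more sophisticated; the data, coefficients, and spec limit here are entirely synthetic.

```python
# Illustrative virtual metrology: predict MRR (mean removal rate) from an inline
# process variable so quality drift is caught without physical metrology delay.
# Training data and the spec limit below are synthetic, for illustration only.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature, closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic training data: MRR falls as pad wear increases.
pad_wear = [0.1, 0.2, 0.3, 0.4, 0.5]
mrr      = [150.0, 145.0, 140.0, 135.0, 130.0]

a, b = fit_linear(pad_wear, mrr)

def predict_mrr(wear):
    """Virtual-sensor estimate of MRR from the inline wear proxy."""
    return a * wear + b

def drift_alarm(wear, lower_spec=128.0):
    """Flag a wafer for correction before predicted MRR leaves spec."""
    return predict_mrr(wear) < lower_spec

print(predict_mrr(0.6))  # extrapolated MRR at higher wear
print(drift_alarm(0.6))  # alarm raised: correct before the next wafer is scrapped
```

The design choice worth noticing is that the model's output feeds a control decision (the alarm) rather than a report, which is what collapses the reaction time between deviation and correction.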
3) Intelligent Production Scheduling Across Multi-Plant Networks
The killer application domain here is Fleet + Facility, with the business value delivered being coordinated throughput, working capital optimization, and delivery performance.
In my opinion this is the most scalable and mature of the three case studies I have included in this post. Developed by Cardinal Operations, the system models production planning across multiple factories with interdependent constraints such as machine availability, inventory, raw material transit, shift labor rules, and Bill of Materials (BOM) hierarchies. The technical feat is the ability to model billions of variables and constraints using continuous linear programming (no integer variables), converging in under an hour, something most commercial solvers cannot achieve at this scale, especially given the computing power available before 2020. The most important factor is how tightly the optimization is entwined with real business operations: the planner suggests an optimal schedule, generates executable work orders, triggers upstream ordering, and provides root-cause analytics when performance targets aren’t met. Such features are far easier to include in an AI offering today with generative AI, but in 2020 this was an impressive innovation. It aligns with Jay Lee’s principle of cross-layer impact: the model lives at the intersection of supply chain, operations, and production, absorbing real-world constraints without collapsing, and it replaces a previously manual, high-friction process with a feedback-aware, high-leverage AI capability.
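A toy example can show why dropping integrality matters. With continuous (fractional) allocation and a single demand constraint, a cheapest-capacity-first rule reproduces the LP optimum exactly; real multi-plant, multi-constraint models need an industrial solver, but the structure is the same. The plant names, costs, and capacities below are invented for illustration and have nothing to do with Cardinal Operations' actual model.

```python
# Toy continuous allocation in the spirit of multi-plant scheduling: because
# variables are continuous (no integer lot sizes), filling the cheapest
# capacity first is the exact LP solution for this single-constraint case.
# Plants, costs, and capacities are invented for illustration.

def allocate(demand, plants):
    """plants: list of (name, unit_cost, capacity). Returns (plan, total_cost)."""
    plan, total_cost, remaining = {}, 0.0, demand
    for name, cost, cap in sorted(plants, key=lambda p: p[1]):  # cheapest first
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            total_cost += take * cost
            remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("infeasible: demand exceeds total capacity")
    return plan, total_cost

plants = [("PlantA", 2.0, 600), ("PlantB", 1.5, 500), ("PlantC", 3.0, 400)]
plan, cost = allocate(900, plants)
print(plan)  # cheapest plant fills first; the next takes only a partial share
print(cost)
```

Were integer lot sizes required, this greedy rule would break down and the problem would become NP-hard in general, which is why keeping the formulation continuous is what makes billion-variable convergence in under an hour plausible.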
I am not saying this book is flawless, and I want to address some shortcomings that came to my attention. First, the book provides somewhat limited depth on data context. While data types and sensors are named, the backend data flow, including access and ingestion of raw data, cleaning, labeling, and feature drift handling, is not really covered; we don’t get a view of how noisy industrial data actually gets shaped for learning. There’s also minimal discussion of how models age or how ML libraries are maintained. The other issue is that the case studies are all success stories. Early failures, iteration cycles, and false starts are not mentioned, so we don’t see the pain the AI champions went through, and it’s harder for practitioners to benchmark readiness or anticipate pitfalls.
Source:
Industrial AI: Applications with Sustainable Performance
Jay Lee, Springer Singapore, 2020. ISBN 978-981-15-2143-0 (hardcover), ISBN 978-981-15-2144-7 (eBook). https://doi.org/10.1007/978-981-15-2144-7. Copyright holder: Shanghai Jiao Tong University Press.