
Author: Tronserve admin

Friday 17th September 2021 11:17 PM

Operationalizing AI



When AI practitioners talk about taking their machine learning models and deploying them into real-world environments, they don’t call it deployment. Instead, the term that’s used is “operationalizing”. This might be confusing for traditional IT operations managers and application developers.

Why don’t we simply deploy AI models or put them into production? What does AI operationalization mean, and how is it different from typical application development and IT systems deployment?


One of the things that makes an AI project different from a traditional application development project is that there isn’t the same build / test / deploy / manage order of operations. Rather, there are two distinct phases of operation: a “training” phase and an “inference” phase. The training phase involves the selection of one or more machine learning algorithms; the identification and selection of appropriate, clean, well-labeled data; the application of that data to the algorithm, along with hyperparameter configurations, to create an ML model; and then the validation and testing of that model to make sure it generalizes properly, without overfitting the training data or underfitting and failing to generalize. All of those steps comprise just the training phase of an AI project.
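To make that order of operations concrete, here is a minimal sketch of the training phase in Python using scikit-learn. The dataset, algorithm, and hyperparameter values are illustrative assumptions rather than anything prescribed above; what matters is the sequence: select well-labeled data, fit a model under a hyperparameter configuration, then validate on held-out data to check for overfitting.

```python
# Minimal training-phase sketch (illustrative dataset and hyperparameters).
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Identify and select appropriate, clean, well-labeled data (a bundled example set here).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Select an algorithm and apply the data along with a hyperparameter configuration.
model = RandomForestClassifier(n_estimators=200, max_depth=5, random_state=42)
model.fit(X_train, y_train)

# Validate and test: a large gap between training and test accuracy suggests
# overfitting; poor accuracy on both suggests underfitting.
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"train accuracy: {train_acc:.3f}  test accuracy: {test_acc:.3f}")

# Persist the model artifact for the inference phase.
joblib.dump(model, "model.joblib")
```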


On the other hand, the inference phase of an AI project focuses on applying the ML model to the particular use case, ongoing evaluation to determine whether the system is generalizing properly to real-world data, and adjustments to the model, new training data, and revised hyperparameter configurations to iteratively improve it. The inference phase can also surface additional use cases for the ML model that are broader than originally specified with the training data. In essence, the training phase happens in the organizational “laboratory” and the inference phase happens in the “real world”.
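A correspondingly minimal sketch of the inference phase might look like the following. The artifact path and the simple accuracy check are assumptions carried over from the training sketch above; the essential activities are applying the trained model to real-world data and continuously checking whether it still generalizes.

```python
# Minimal inference-phase sketch: apply the trained model and keep evaluating it.
import joblib
import numpy as np

# Load the artifact produced during the training phase (hypothetical path).
model = joblib.load("model.joblib")

def predict(features: np.ndarray) -> np.ndarray:
    """Apply the trained model to new, real-world feature vectors."""
    return model.predict(features)

def evaluate_batch(features: np.ndarray, true_labels: np.ndarray) -> float:
    """Ongoing evaluation: if this accuracy drifts downward over time, the model
    likely needs new training data or adjusted hyperparameters."""
    return float((model.predict(features) == true_labels).mean())
```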



But the real world is where things get messy and complicated. First of all, there’s no such thing as a single platform for machine learning. The universal machine learning / AI platform doesn’t exist because there are so many diverse places in which we can use an ML model to make inferences, classify, predict values, and solve all the other problems we look to ML systems for. We could be using an ML model in an Internet of Things (IoT) device deployed at the edge, or in a mobile application that can operate disconnected from the internet, or in a cloud-based always-on setting, or in a large enterprise server system with private, highly regulated, or classified content, or in desktop applications, or in autonomous vehicles, or in distributed applications, or… you get the picture. Any place where the power of cognitive technology is needed is a place where these AI systems can be used.
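There is no single prescribed way to reach all of these environments, but one common pattern, offered here purely as an illustration rather than anything the article specifies, is to export the trained model to a portable format such as ONNX so the very same artifact can be loaded by runtimes on servers, desktops, mobile devices, or edge hardware.

```python
# Sketch: export one trained model to a portable ONNX artifact (illustrative only).
import numpy as np
import onnxruntime as ort
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a small example model (stands in for the model from the training phase).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# Convert once; the resulting model.onnx can be shipped to any ONNX-capable runtime,
# whether it runs in the cloud, on a desktop, or on an edge device.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Local check that the exported artifact produces predictions.
session = ort.InferenceSession("model.onnx")
labels = session.run(None, {"input": X[:5].astype(np.float32)})[0]
print(labels)
```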


This is both empowering and challenging. The data scientist developing the ML model might not have any expectations for how and where the ML model will be used, and so instead of “deploying” this model to a specific system, it needs to be “operationalized” in as many different systems, interfaces, and deployments as necessary. The very same model could be deployed in an IoT driver update as well as a cloud service API call. As far as the data scientists and data engineers are concerned, this is not a problem at all. The details of deployment depend on the platforms on which the ML model will be used. But the requirements for the real-world usage and operation (hence the word “operationalization”) of the model are the same regardless of the specific application or deployment.
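As one sketch of what that can look like in practice, the same trained artifact could be wrapped behind a small HTTP endpoint for the cloud service API case mentioned above, while the identical file is embedded directly in a desktop or edge application. Flask, the route name, and the JSON layout here are illustrative assumptions, not a prescribed interface.

```python
# Sketch: serve the same model artifact as a cloud-style prediction API (illustrative).
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # the artifact produced in the training phase

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[...], [...]]}.
    features = np.array(request.get_json()["features"], dtype=float)
    return jsonify({"predictions": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```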


Source: Forbes

