The Machine Learning Lifecycle and MLOps: Building and Operationalizing ML Models - Part II
Read - The Machine Learning Lifecycle and MLOps: Building and Operationalizing ML Models - Part I
To teach a child to hit a ball, you choose the right bat, throw them lots of practice pitches, and help adjust their swing. Eventually they learn to hit, and step into the batter’s box.
Developing a machine learning model is similar. You choose a machine learning technique, throw lots of data at the algorithm, and help adjust its responses. Eventually the algorithm hits with enough accuracy to become a final model, ready to put into production.
That’s the story of this blog, the second in a series on the machine learning (ML) lifecycle. Our first blog in the series defined ML, and examined the steps and personas of the data and feature engineering phase. This blog examines the model development phase: how to select your ML technique, train your ML model, then manage it before it steps into the batter’s box. Our next blog will explore the third phase of model operationalization, also known as MLOps. Subsequent blogs will define how business owners, data scientists, ML engineers, data engineers, and developers learn new skills and collaborate to make all this work.
Figure 1 illustrates the full ML lifecycle to which they contribute.
Figure 1. The Machine Learning Lifecycle
To quickly recap the fundamentals, machine learning (ML) is a subset of artificial intelligence in which an algorithm discovers patterns in data. These patterns help people or applications predict, classify, or prescribe a future outcome. ML relies on a model, which is essentially an equation that defines the relationship between key data inputs (your “features”) and outcomes (also known as “labels”). ML applies various techniques to create this model, including supervised learning, which studies known prior outcomes, and unsupervised learning, which finds patterns without knowing outcomes beforehand.
Now let’s dig into the model development phase. Each step includes an italicized summary of the key challenges it poses to stakeholders.
STEP 1: Select ML Technique
Data scientists can find many ML algorithms on open-source libraries such as PyTorch, Scikit-Learn, and TensorFlow. But which one should they use? While multiple techniques might work for a given business problem, the best fit depends on factors such as the business objective, amount and type of data, number and types of features, and project deadlines. Let’s walk through a 101-level summary of ML techniques. (For a 201-level summary, check out this KDNuggets article.)
Supervised ML Techniques
We start with supervised ML, whose primary techniques include regression and classification.
Regression. This supervised ML technique predicts a continuous set of numerical outcomes (a.k.a. labels) based on features, typically on an X-Y axis. For example, our first blog described the use case of predicting house prices. In that example, a basic ML linear regression would create a line that approximates the relationship between a feature, such as the local school ranking (our X value), and the outcome, which is the predicted house price (our Y value). The line approximates this relationship by overlaying a scatterplot of X-Y value pairs. Figure 2 illustrates this example.
Figure 2. ML Regression Technique: Predict House Prices Based on School Rankings
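The regression in Figure 2 can be sketched in a few lines of scikit-learn. The school rankings and sale prices below are made-up illustrative values, not real data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: local school ranking (X) vs. house sale price (y)
school_rank = np.array([[3], [5], [6], [7], [8], [9]])
sale_price = np.array([250_000, 310_000, 340_000, 380_000, 420_000, 455_000])

# Fit a line that approximates the relationship between feature and outcome
model = LinearRegression().fit(school_rank, sale_price)

# Predict the price of a house in a district with a school ranking of 7.5
predicted = model.predict([[7.5]])[0]
print(f"Predicted price: ${predicted:,.0f}")
```

The fitted line is exactly the one overlaid on the scatterplot in Figure 2: its slope is the weight the model learned for the school-ranking feature.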
Classification. Supervised ML also can classify outcomes. The classification technique, for example, might simply divide the outcomes of a linear regression into value ranges. In the scatterplot above, we could classify any predicted house price above $400,000 as “premium” and below $400,000 as “midrange.” Classification also might use a decision tree with a series of “if/then” decisions. Figure 3 illustrates how a zoo might use the ML classification technique to spot escaping bears on their nighttime webcam.
Figure 3. ML Classification Technique: Spot Zoo Escapees with a Decision Tree
Is that animal inside or outside the zoo gate?
If outside, is it a dog or a bear?
If it is a bear, sound the alarm!
Classification techniques like these address a variety of use cases. To use another example from the first blog, nurses might use an ML classification technique to decide whether a hospital patient has a high, medium, or low risk of infection, based on their vital signs and demographic information.
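The zoo's decision tree from Figure 3 boils down to plain if/then logic. A minimal sketch (the location and species inputs are hypothetical, standing in for what the webcam's image classifier would report):

```python
def should_sound_alarm(location: str, species: str) -> bool:
    """Walk the decision tree: outside the gate AND a bear -> sound the alarm."""
    if location == "inside":
        return False  # still inside the zoo gate: no alarm
    if species == "dog":
        return False  # outside the gate, but just a dog
    if species == "bear":
        return True   # an escaping bear: sound the alarm!
    return False      # anything else: no alarm

print(should_sound_alarm("outside", "bear"))  # an escaped bear trips the alarm
```

A trained classification model learns branch conditions like these from labeled examples rather than having them hand-coded, but the resulting decision path reads the same way.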
Unsupervised ML Techniques
Common unsupervised learning techniques include clustering and association mining. These techniques discover hidden patterns in data to predict outcomes that were not known or labeled in advance.
Clustering. This unsupervised ML technique identifies clusters or groups of similar outcomes. For example, an ecommerce firm can use clustering to segment customers according to features such as discount sensitivity, product choices, and shipping preference. This might result in three segments that we decide to call frugal minimalists, impulse buyers, and pragmatic planners. Figure 4 illustrates the results of this clustering technique.
Figure 4. ML Clustering Technique: Segment Your Customers by Features
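A minimal sketch of this segmentation with scikit-learn's k-means algorithm. The customer features below are hypothetical, and a real pipeline would scale features to comparable ranges before clustering:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [discount sensitivity, average order value]
customers = np.array([
    [0.9, 22], [0.8, 25], [0.85, 28],     # low spenders, chase discounts
    [0.2, 150], [0.3, 145], [0.25, 160],  # big spenders, ignore discounts
    [0.5, 78], [0.55, 75], [0.6, 72],     # middle of the road
])

# Ask for three segments; the algorithm groups similar customers together
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster assignment for each customer
```

The algorithm only returns numbered groups; the human-friendly names (frugal minimalists, impulse buyers, pragmatic planners) are something we assign afterward by inspecting each cluster.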
Association mining. The unsupervised ML technique of association mining, also known as market basket analysis, helps identify outcomes that, perhaps unexpectedly, coincide with one another. Perhaps online shoppers of expensive mountain bikes also like to download audiobooks and stock up on Rogaine. If so, that’s a worthwhile association for retailers like Amazon to find—and bake into their customer recommendation engine. Association mining also helps characterize many other types of “if/then” relationships. For example, it can improve medical diagnoses by observing conditions or symptoms that coincide with one another.
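The core idea reduces to counting co-occurrences. This toy sketch computes the two standard association measures, support and confidence, over a handful of hypothetical shopping baskets (production systems use dedicated mining libraries at scale):

```python
# Hypothetical shopping baskets
baskets = [
    {"mountain bike", "audiobook", "rogaine"},
    {"mountain bike", "audiobook"},
    {"mountain bike", "rogaine"},
    {"audiobook"},
    {"mountain bike", "audiobook", "helmet"},
]

def support(itemset):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= basket for basket in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """Of the baskets with the antecedent, how many also have the consequent?"""
    return support(antecedent | consequent) / support(antecedent)

# Rule: "if mountain bike, then audiobook"
print(support({"mountain bike"}))                    # 4/5 = 0.8
print(confidence({"mountain bike"}, {"audiobook"}))  # 3/4 = 0.75
```

A confidence of 0.75 says three out of four mountain-bike buyers also picked up an audiobook, exactly the kind of “if/then” relationship a recommendation engine can exploit.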
Key Challenges: Data scientists should lead the selection of ML techniques because they understand the logical problem to solve. Business owners provide the guiding business objectives, and data engineers help ensure model fit with datasets. Data teams also should consider using an autoML tool, which recommends a technique based on their features and labels. When in doubt, choose the simpler technique to reduce risk.
STEP 2: Train your ML Algorithm to Produce your Model
Once the data scientist selects their ML technique, they “train” the algorithm—potentially multiple versions of it—on one or more historical datasets to create the actual model. ML engineers, who serve a dual role of developer and data scientist, also might assist data scientists with the training process. Let’s first examine training for supervised ML, which has labeled outcomes. “Training” in this case means that the data scientist and ML engineer apply the algorithm to combinations of historical features and outcomes (a.k.a. labels) so that it can learn the relationship between them. As it learns, the algorithm generates a score that predicts, classifies, or prescribes outcomes based on features. The data scientist compares those predicted outcomes to real historical outcomes, then makes changes to improve accuracy, with the help of the ML engineer and data engineer.
They can improve accuracy in a number of ways. They can increase the amount of training data, change some features, or create a new feature that sheds additional light on the business problem. They also can change parameters, such as the weight applied to different features. (Picture the slope of the line in Figure 2 for our ML regression example.) They might even apply a different technique, or combine multiple techniques and algorithms into an “ensemble” to improve accuracy. After each round of changes, the data scientist and ML engineer run the algorithm again, generate a new score, and check the results. They repeat the process until the scores become sufficiently accurate. This results in a final ML model, ready for production. Figure 5 illustrates this iterative training process.
Figure 5. Train Your Supervised ML Algorithm to Become a Model
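The iterative loop in Figure 5 can be sketched with scikit-learn: train, score the predictions against held-out historical outcomes, adjust, and repeat. The dataset is synthetic and the adjustment here varies only one parameter (tree depth); real iterations also revisit features, data volume, and even the technique itself:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic labeled data standing in for your historical features and outcomes
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each loop pass is one "round of changes": train, score, compare, adjust
best_depth, best_score = None, 0.0
for depth in range(1, 8):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    score = model.score(X_test, y_test)  # accuracy vs. real held-out outcomes
    if score > best_score:
        best_depth, best_score = depth, score

print(f"best depth={best_depth}, accuracy={best_score:.2f}")
```

When the score finally clears whatever accuracy bar the business has set, the winning configuration becomes the final model headed for production.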
Training an unsupervised ML algorithm gets tricky because you do not have labeled outcomes as a comparison point. You need to measure accuracy in creative ways. For example, you can check your predictions for consistency across two or more samples of historical data. Consistent generally means accurate, as shown in Figure 6.
Figure 6. Check Consistency of Predictions for Different Historical Data Samples
To evaluate a clustering ML technique, you can assess variability within and between the clusters you define:
How much do feature values vary within a given cluster? Are customers in our earlier example similar to one another?
How much do feature values vary between clusters? Are clusters different from one another?
Ideally you want customers to be similar to one another in a given cluster. But you want those clusters to be distinct from one another. Accuracy checks like these help you go back and make changes to input data, parameters, etc. Gradually you make the algorithm more accurate until you declare it a production-ready model.
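That within-versus-between check can be sketched with numpy. The values below are hypothetical one-dimensional feature values for customers already assigned to three clusters; low variance inside each cluster combined with high variance between the cluster centers suggests a good segmentation:

```python
import numpy as np

# Hypothetical feature values, grouped by assigned cluster
clusters = [
    np.array([21.0, 23.0, 22.5]),     # cluster A
    np.array([74.0, 76.5, 75.0]),     # cluster B
    np.array([148.0, 151.0, 150.0]),  # cluster C
]

within = np.mean([c.var() for c in clusters])   # avg spread inside each cluster
between = np.var([c.mean() for c in clusters])  # spread of the cluster centers

print(f"within-cluster variance:  {within:.2f}")
print(f"between-cluster variance: {between:.2f}")
# A small within/between ratio means tight, well-separated clusters
print(f"ratio: {within / between:.4f}")
```

Metrics such as the silhouette score formalize this same intuition; comparing the ratio across candidate clusterings tells you which round of changes actually improved the segmentation.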
Key Challenges: Data scientists, in collaboration with data engineers and ML engineers, should expect a highly iterative training process that makes fundamental changes to ensure accuracy. They also should carefully select the training data set—for example, discarding anomalous periods of time during COVID lockdown—to best characterize production reality. Do not shortcut training and rush to production.
STEP 3: Manage your ML model
Once the data scientist has produced this production-ready model, they hand it off to the ML engineer to put into production. The ITOps manager, who manages and monitors various IT components, also assists the ML engineer with this step. They store the model, along with various other production and training model versions, in repositories such as Databricks, TensorFlow, or the GitHub developer platform. They also catalog those models in ML catalogs or data catalogs, and apply role-based access controls to govern their usage. ML engineers, ITOps managers, and developers can search for and browse models in a catalog, inspect their features and labels, and add various tags to guide colleagues. They can generate reports on these models to assist compliance efforts.
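As a bare-bones sketch of what storing and cataloging a model involves, a trained model can be serialized alongside searchable metadata. The directory layout and metadata fields here are hypothetical; real model registries layer cataloging, versioning, and access controls on top of this idea:

```python
import json
import pickle
from pathlib import Path
from sklearn.linear_model import LinearRegression

# A trivial trained model standing in for your production-ready model
model = LinearRegression().fit([[1], [2], [3]], [2, 4, 6])

registry = Path("model_registry/house_price/v1")  # hypothetical layout
registry.mkdir(parents=True, exist_ok=True)

# Serialize the model itself...
(registry / "model.pkl").write_bytes(pickle.dumps(model))

# ...alongside catalog metadata that colleagues can search and inspect
metadata = {"name": "house_price", "version": 1,
            "technique": "linear regression",
            "features": ["school_rank"], "label": "sale_price"}
(registry / "metadata.json").write_text(json.dumps(metadata, indent=2))

# Anyone with access can reload the exact versioned model later
reloaded = pickle.loads((registry / "model.pkl").read_bytes())
print(reloaded.predict([[4]])[0])
```

The metadata file is what makes the model browsable: features, labels, and tags live next to the artifact, so colleagues can evaluate a model before ever loading it.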
Finally, the ML engineer and ITOps manager assign responsibility and accountability to different stakeholders (including developers) for implementing or operationalizing the model in the MLOps phase. They decide where to host a given model (systems, containers, etc.), how to integrate it into production workflows, and how to serve that model into production.
Key Challenges: Models should not be kept secret. Assemble and curate them in an accessible but governed catalog, ready for contributions from all ML stakeholders.
The next blog will dive into the results of this planning process. It will examine the model operations or MLOps phase, in which you implement, operate, and monitor your models—and govern them with each step. That’s when your batter starts swinging at real pitches, which is the most fun and most difficult phase in the process.
Read - The Machine Learning Lifecycle and MLOps: Building and Operationalizing ML Models - Part III