Once the data scientist selects an ML technique, they "train" the algorithm, potentially multiple versions of it, on one or more historical datasets to create the actual model. ML engineers, who serve a dual role of developer and data scientist, may also assist data scientists with training. Consider first training for supervised ML, which relies on labeled outcomes. "Training" in this case means that the data scientist and ML engineer apply the algorithm to combinations of historical features and outcomes (a.k.a. labels) so that it learns the relationship between them...

Once the data scientist has produced a production-ready model, they hand it off to the ML engineer to put into production. The ITOps manager, who manages and monitors various IT components, assists the ML engineer with this step. They store the model, along with other production and training model versions, in repositories such as Databricks, TensorFlow, or the GitHub developer platform. They also catalog those models in ML catalogs or data catalogs and apply role-based access controls to govern their usage. ML engineers, ITOps managers, and developers can search for and browse models in a catalog, inspect their features and labels, and add tags to guide colleagues. They can also generate reports on these models to assist compliance efforts.

Finally, the ML engineer and ITOps manager assign responsibility and accountability to stakeholders (including developers) for implementing or operationalizing the model in the MLOps phase. They decide where to host a given model (systems, containers, etc.), how to integrate it into production workflows, and how to serve it in production.
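To make the idea of supervised training concrete, here is a minimal, illustrative sketch using only the Python standard library: a nearest-centroid classifier "learns" the relationship between historical features and labels by averaging each class's feature vectors. The dataset, labels, and function names are invented for illustration and do not correspond to any particular tool.

```python
# Illustrative sketch of supervised training (standard library only).
# A nearest-centroid classifier learns the feature/label relationship
# by averaging the feature vectors observed for each label.
from collections import defaultdict
from math import dist

def train(features, labels):
    """Fit by computing the mean feature vector (centroid) per label."""
    grouped = defaultdict(list)
    for x, y in zip(features, labels):
        grouped[y].append(x)
    return {
        label: tuple(sum(col) / len(rows) for col in zip(*rows))
        for label, rows in grouped.items()
    }

def predict(model, x):
    """Assign the label whose learned centroid is closest to x."""
    return min(model, key=lambda label: dist(model[label], x))

# Historical dataset: features paired with labeled outcomes.
X = [(1.0, 1.0), (1.2, 0.8), (4.0, 4.2), (3.8, 4.0)]
y = ["low", "low", "high", "high"]

model = train(X, y)
print(predict(model, (1.1, 0.9)))  # → low
print(predict(model, (4.1, 4.1)))  # → high
```

Real training uses far richer algorithms and libraries, but the shape is the same: fit on historical feature/label pairs, then apply the learned relationship to new inputs.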
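The cataloging and access-control workflow described above can be sketched in a few lines. This is a hypothetical toy registry, not the API of Databricks, GitHub, or any other product; the class names, roles, and tags are all assumptions made for illustration.

```python
# Hypothetical model catalog with versioning, tags, and role-based
# access control. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: int
    stage: str                       # e.g. "training" or "production"
    tags: set = field(default_factory=set)

class ModelCatalog:
    # Roles allowed to register models vs. merely search and browse them.
    WRITERS = {"ml_engineer", "itops_manager"}
    READERS = WRITERS | {"developer"}

    def __init__(self):
        self._records = []

    def register(self, role, record):
        if role not in self.WRITERS:
            raise PermissionError(f"{role} may not register models")
        self._records.append(record)

    def search(self, role, tag):
        if role not in self.READERS:
            raise PermissionError(f"{role} may not browse the catalog")
        return [r for r in self._records if tag in r.tags]

catalog = ModelCatalog()
catalog.register("ml_engineer",
                 ModelRecord("churn", 3, "production", {"approved"}))
hits = catalog.search("developer", "approved")  # developers may browse
```

The same records could feed compliance reports, since each entry carries its version, stage, and tags.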