
Modern Machine Learning Tooling

Context

Data Science is evolving at a fast pace, and Machine Learning roles are transitioning from a hybrid Data Science role toward more engineering- or analysis-oriented roles, often referred to as Type A and Type B data scientists.

A few evolutions are contributing to these changes:

  • An increased embedding of Machine Learning models into production systems, requiring more in-depth technical skills than before.

  • An increased pace of change in business offerings and user behavior, increasing the need for automation.

  • An increase in regulatory requirements, such as GDPR’s “Right to an Explanation,” increasing the demand for traceability of data and interpretability of predictions and decisions.


Shift in tooling

This changing context has driven a shift in the tooling used by data scientists, pushing them towards the cloud, automation, interpretability, and repeatable processes.

  • Cloud-based ML: Cloud infrastructure and Kubernetes (K8s) have changed the way we do Machine Learning, from leveraging prebuilt solutions as SaaS applications to running a full Machine Learning stack on K8s.

  • AutoML and Orchestration: Model training has been simplified with AutoML, which automates tasks such as data preparation, feature engineering, hyperparameter optimization, and model selection.

  • Interpretable and Reproducible ML: In the past couple of years, a few libraries and tools have emerged to help understand model predictions and the weights behind them. Tools such as the What-If Tool, LIME, SHAP, and Manifold help do just that.


Cloud-based ML

A gradual move towards the Cloud and Kubernetes has been happening, increasing the need for DevOps and DataOps capabilities among Machine Learning Engineers.

This move has been accentuated by the growing interest in deep learning, particularly with Keras, which helped democratize the discipline. Deep Learning can be particularly resource-hungry, and its utilization varies greatly depending on the workload, driving the need for more elastic and scalable infrastructure, such as TensorFlow executors running on K8s.
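
To illustrate why Keras lowered the barrier to entry, here is a minimal sketch of a small network defined through its high-level API; the data and network shape are arbitrary placeholders:

```python
# A minimal Keras sketch; the data and architecture are placeholders.
import numpy as np
from tensorflow import keras

# Dummy data standing in for a real training set.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# A small fully connected network defined in a few lines.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The same training code scales from a laptop to TensorFlow executors on K8s.
model.fit(X, y, epochs=3, batch_size=32)
```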


SageMaker’s web interface


Another factor contributing to this move is the increased importance of ML in production. This shift, in turn, increases the need for close alignment between the prototyping stack and production, aided by diverse SaaS cloud offerings such as AWS SageMaker and Google Cloud ML Engine. These tools offer features such as model deployment and API provisioning, simplifying the process of pushing models into production.
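
As a rough illustration of that workflow, here is a hedged sketch using the SageMaker Python SDK; the training script, S3 path, IAM role, and framework version are hypothetical placeholders:

```python
# A hedged sketch of training and deploying with the SageMaker Python SDK.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # assumed IAM role

# Wrap a training script as a managed training job.
estimator = SKLearn(
    entry_point="train.py",     # hypothetical training script
    framework_version="1.2-1",  # assumed available scikit-learn version
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train/"})  # hypothetical S3 path

# One call provisions a hosted HTTPS endpoint serving predictions.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```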


AutoML & Machine Learning workflows

The increased importance of having production Machine Learning systems has, in turn, accelerated the need for automation across the machine learning value chain, from training to deployment. Automation allows teams to iterate on and improve models faster.

AutoML provides a layer of automation around the model training process, handling some of its more repetitive tasks, such as hyperparameter optimization and feature and model selection. Libraries such as TPOT and AutoKeras, as well as the majority of Cloud providers’ ML offerings, now include AutoML as part of their solutions.
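
As an illustration, here is a minimal TPOT sketch that searches over scikit-learn pipelines on a toy dataset; the dataset and search budget are arbitrary choices:

```python
# A minimal TPOT sketch: genetic-programming-based AutoML over scikit-learn pipelines.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=42
)

# TPOT searches preprocessing, model selection, and hyperparameters automatically.
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Export the best pipeline found as plain scikit-learn code.
tpot.export("best_pipeline.py")
```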

The need for automation has also increased the need for tooling to orchestrate the different parts of the process. Workflow tools such as Airflow, Kubeflow, MLflow, and Metaflow are some of the crucial tools that help here. They handle the full machine learning process as a pipeline, orchestrating it end to end, from data acquisition to model serving.
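
For a sense of what this looks like in practice, here is a minimal Airflow sketch of such a pipeline; the task bodies are placeholders standing in for real acquisition, training, and serving steps:

```python
# A minimal Airflow (2.x-style) sketch of an end-to-end ML pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task functions standing in for real pipeline steps.
def acquire_data():
    print("pulling training data")

def train_model():
    print("training the model")

def deploy_model():
    print("deploying the model")

# One DAG describing the full ML process, scheduled daily.
with DAG(
    dag_id="ml_pipeline",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    acquire = PythonOperator(task_id="acquire_data", python_callable=acquire_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # End-to-end ordering: data acquisition -> training -> serving.
    acquire >> train >> deploy
```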


Interpretable & Reproducible ML

GDPR and other regulations have influenced how we build machine learning models. They have pushed for interpretable and reproducible models.

On the interpretability front, a series of tools has emerged to help data scientists make sense of their models. These tools evaluate different scenarios, analyze how variables interact, and provide dashboards to help explain model predictions. The What-If Tool, LIME, SHAP, and Manifold are some of the tools introduced to take on this challenge.
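
As a small example, here is a minimal SHAP sketch explaining a tree-based model; the dataset and model are arbitrary stand-ins:

```python
# A minimal SHAP sketch: explaining a tree-based model with Shapley values.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Toy regression problem standing in for a real modeling task.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes each feature's contribution to every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their overall impact on model output.
shap.summary_plot(shap_values, X)
```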

Besides its benefit of providing a reliable way to debug models, reproducibility is another aspect impacted by regulation. It is always possible to build reproducible machine learning pipelines using workflow tools, but some specific tooling has emerged to make the process easier.


Screenshot of the Weights & Biases interface


DVC, Dolt, Weights & Biases (W&B), and DAGsHub are some of the specialized tools that make building models in a reproducible manner more straightforward. DVC handles versioning of models and datasets, while Dolt is strictly limited to datasets. W&B and DAGsHub focus instead on keeping track of the weights and results of model building and training.
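
As a short illustration, here is a hedged W&B sketch that records a run's configuration and metrics so results stay reproducible and comparable; the project name and training loop are placeholders:

```python
# A hedged Weights & Biases sketch: tracking config and metrics for one run.
import wandb

# Project name and hyperparameters are placeholders.
run = wandb.init(project="demo-project", config={"lr": 0.01, "epochs": 5})

# Simulated training loop standing in for a real model.
for epoch in range(run.config.epochs):
    loss = 1.0 / (epoch + 1)  # placeholder metric
    wandb.log({"epoch": epoch, "loss": loss})

run.finish()
```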


Summary

Machine learning tooling has changed quite a lot from the days of just leveraging a few prediction libraries and a Jupyter notebook. Doing data science nowadays requires mastery of a wider toolset that can include cloud libraries, workflow tools, and interpretation and versioning tools. This increased tooling should help data science move away from some of its research image into a more engineering- or business-oriented function.

