Deploy Your Own MLflow Workspace On-Premise with Docker

MLflow is an open-source platform to manage the end-to-end lifecycle of ML models. It tracks the code, data, and results of each ML experiment, so you have a complete history of all your experiments at any time. A dream for every Data Scientist. Moreover, MLflow is library-agnostic: you can use it with any ML library, such as TensorFlow, PyTorch, or scikit-learn. All MLflow functions are available via a REST API, a CLI, a Python API, an R API, and a Java API.

As a Data Scientist, you spend a lot of time optimizing ML models. The best models often depend on optimal hyperparameter or feature selection, and finding a good combination is challenging. You also have to keep track of all your experiments, which is very time-consuming. MLflow is an efficient platform for addressing these challenges.

In this post, we briefly introduce the basics of MLflow and show how to set up an MLflow workspace on-premise. We set up the MLflow environment in a Docker stack so that it runs on any system. The stack comprises four services: a Postgres database, an SFTP server, JupyterLab, and the MLflow tracking server with its UI. Let's start.
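To make the four services concrete, here is a minimal `docker-compose.yml` sketch of such a stack. All image names, credentials, ports, and paths below are illustrative assumptions, not the article's exact configuration; in particular, the official MLflow image may need extra drivers (e.g. `psycopg2` for Postgres, `pysftp` for SFTP artifacts) installed on top.

```yaml
# Illustrative sketch of an on-premise MLflow stack (values are placeholders).
services:
  postgres:                      # backend store: experiment and run metadata
    image: postgres:15
    environment:
      POSTGRES_USER: mlflow
      POSTGRES_PASSWORD: mlflow  # placeholder; use a secret in practice
      POSTGRES_DB: mlflow
    volumes:
      - postgres_data:/var/lib/postgresql/data

  sftp:                          # artifact store: models, plots, files
    image: atmoz/sftp
    command: mlflow:mlflow:::artifacts   # user:password:::directory
    ports:
      - "2222:22"

  jupyterlab:                    # workspace for running experiments
    image: jupyter/base-notebook
    ports:
      - "8888:8888"
    environment:
      MLFLOW_TRACKING_URI: http://mlflow:5000

  mlflow:                        # tracking server and web UI
    image: ghcr.io/mlflow/mlflow
    command: >
      mlflow server
      --backend-store-uri postgresql://mlflow:mlflow@postgres:5432/mlflow
      --default-artifact-root sftp://mlflow@sftp/artifacts
      --host 0.0.0.0 --port 5000
    ports:
      - "5000:5000"
    depends_on:
      - postgres
      - sftp

volumes:
  postgres_data:
```

With a file like this, `docker compose up -d` would start the whole workspace, the MLflow UI would be reachable on port 5000, and JupyterLab on port 8888.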

