# plugins-integrations

Nok Lam Chan

02/23/2024, 12:57 AM
Is `mlflow serve` any good? I guess this is a question for @Yolan Honoré-Rougé and @Takieddine Kadiri: how much of MLflow were you using? My guess is `mlflow serve` is not good enough, thus `kedro-boot` has its own FastAPI serving. Btw I was reading https://github.com/Galileo-Galilei/kedro-mlflow-tutorial/ and I think it's a very nice read, I especially enjoyed how it puts MLOps into perspective.
❤️ 3

marrrcin

02/23/2024, 8:16 AM

https://youtu.be/SVZVwjaSoyE?si=TPwhdZP6wX7liPd0&t=877

from 14:37 to 21:13
getindata 3

Nok Lam Chan

02/23/2024, 11:20 AM
Love that you have a video for every single question! I saw Allegro Tech; curious if you use ClearML too and what's your experience with it?

marrrcin

02/23/2024, 11:21 AM
No experience on my side, besides following the meme guy 😄
Sorry, wrong profile. This is the one: https://twitter.com/untitled01ipynb. I was pretty sure that was the guy from ClearML (the previous profile I sent was https://twitter.com/LSTMeow), but now I'm not that sure 🤔

Nok Lam Chan

02/23/2024, 11:51 AM
Ya I recognise LSTMeow😁

Yolan Honoré-Rougé

02/23/2024, 11:56 AM
Ah, there is a huge debate about `mlflow serve`. "Model serving in one click" is what tools/providers emphasize, but in practice you always serve a pipeline, which includes pre/post processing. `kedro-mlflow` and the tutorial linked above help to package these pre/post processing steps in a custom model, so you benefit from the "mlflow serve ability". This is limited in practice because you do not want to couple your model and the post processing too tightly: e.g. you do not want to retrain the whole model just to display the output in a different way. Data scientists often use the `mlflow serve` command during development to make demos, but we always end up with a more customised API, hence `kedro-boot`.
this 1
💡 1
👍 1
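The coupling problem described above can be sketched in plain Python (this is an illustration of the pattern, not kedro-mlflow's actual API): the served object is really a small pipeline around the fitted model, and swapping the post-processing should not require retraining.

```python
# Minimal sketch: "serving a model" is really "serving a pipeline".
# The pre/post processing steps here are illustrative placeholders.

class ServedPipeline:
    def __init__(self, preprocess, model, postprocess):
        self.preprocess = preprocess    # e.g. encoding, feature engineering
        self.model = model              # the fitted estimator (has .predict)
        self.postprocess = postprocess  # e.g. formatting the response

    def predict(self, raw_input):
        features = self.preprocess(raw_input)
        predictions = self.model.predict(features)
        return self.postprocess(predictions)

class StubModel:
    """Stands in for a trained classifier/regressor."""
    def predict(self, features):
        return [v * 2 for v in features]

pipeline = ServedPipeline(
    preprocess=lambda raw: [float(v) for v in raw],
    model=StubModel(),
    postprocess=lambda preds: {"predictions": preds},
)
print(pipeline.predict(["1", "2.5"]))  # {'predictions': [2.0, 5.0]}
```

Note that changing `postprocess` (a different output format) leaves `model` untouched, which is exactly why bundling them too tightly in one artifact hurts.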

Nok Lam Chan

02/23/2024, 12:17 PM
So I guess the short answer is, you don't really use `mlflow serve` for production. Follow-up questions:
• `kedro-mlflow` takes an interesting approach by bundling `training` + `inference` into an object; I don't see this in `kedro-boot` anymore. Did you also evolve toward different patterns for `training` and `inference`?
• How are you using `kedro-mlflow` with `kedro-boot`? Do you simply fetch the model files from MLflow, more or less like an object store, or do you keep the MLflow concept of a "Model", which has the pre/post processing too?
p.s. We are trying to create some docs about MLflow, more from the MLOps perspective of what the roles of Kedro and MLflow are, not necessarily a specific plugin (kedro-mlflow/kedro-boot). I think the tracking part is pretty straightforward for reproducibility/collaboration; the serving part is less clear to me.

Takieddine Kadiri

02/23/2024, 2:22 PM
Our model is basically a Kedro pipeline, more precisely a `PipelineML`. At the end of the `PipelineML` run, `kedro-mlflow` logs a trained model, which is a pyfunc MLflow model containing a pickled inference pipeline and all trained artifacts that were saved during the training process (e.g. classifier, encoder, …). This is useful when multiple objects have to be fitted in a training process, and it limits the training-serving skew problem.
The `PipelineML` takes features and labels datasets as inputs. Those features and labels are processed using a `features pipeline`. The `prediction pipeline` has a node that loads the pyfunc MLflow model (the pickled inference pipeline) and does the MLflow model predictions; it also has some nodes for prediction post-processing. The `main pipeline` (which we also call inference, from an orchestration point of view) is composed of the features pipeline and the prediction pipeline. The main pipeline can differ depending on the use case.
`kedro-boot` offers a way to serve the main pipeline using a fully fledged REST API that can be developed using nearly all the capabilities of FastAPI. Having full control over the API is mandatory in production. There is more to say on this topic, like model versioning (aligning it with the code versioning), making the CI/CD aware of the model version, the evaluation pipeline, training dataset selection (from features and labels stores), training dataset management, monitoring, …
❤️ 1
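For readers unfamiliar with `PipelineML`: the training/inference bundling described above is wired up in the project's pipeline registry with kedro-mlflow's `pipeline_ml_factory`. A rough sketch (the `create_training_pipeline` / `create_inference_pipeline` helpers and the `"features"` input name are illustrative; check your own `pipeline_registry.py` and the kedro-mlflow docs for the exact setup):

```python
# Hedged sketch of a Kedro pipeline registry using kedro-mlflow.
# At the end of a "training" run, kedro-mlflow logs a pyfunc MLflow model
# that bundles the pickled inference pipeline with the fitted artifacts.
from kedro_mlflow.pipeline import pipeline_ml_factory


def register_pipelines():
    training = create_training_pipeline()    # fits classifier, encoder, ...
    inference = create_inference_pipeline()  # consumes the fitted artifacts

    return {
        "training": pipeline_ml_factory(
            training=training,
            inference=inference,
            input_name="features",  # the dataset the served model receives
        ),
    }
```

This registry fragment only runs inside a Kedro project with kedro-mlflow installed; it is shown here to make the `PipelineML` concept concrete.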

Yolan Honoré-Rougé

02/23/2024, 9:32 PM
So the short answer is: yes, data scientists use `kedro-mlflow` to package their pipeline as a custom MLflow model because it is a very convenient way to have a "consistent" model with code + artifacts well versioned (the pain point is really about artifacts, because you all know the good old deployments with my_encoder.pkl transferred in a zip folder alongside the well-versioned code 🫠). In the end we add an extra layer with `kedro-boot`, as Taki explains in detail, but sometimes it is not the data scientist who originally built the model who will develop the last step for serving it.
👍 1
👍🏼 1

Jorit Studer

03/19/2024, 9:20 PM
I'm quite interested in how you guys are keeping track of "model lineage". Our company is an Azure house.
• Dataset versioning is handled using Kedro's dataset versioning feature, though there's room for improvement in this area. Something like DVC would be very cool. Also, some models are built with PySpark, and a custom in-house Spark architecture was built.
• The entire Kedro project, encompassing preprocessing, training, evaluation, and inference pipelines, is packaged using Poetry. GitLab CI is employed to build a pip package of the Kedro source code, which is uploaded to our internal Nexus PyPI repository.
• With each pull request, a new model is trained and logged to an MLflow experiment.
• Upon merging to the main branch, a new model version is registered within the MLflow Model Registry.
We're currently evaluating options for hosting a centralized MLflow Model Registry and leaning towards Databricks, as we see no other vendor right now with good AAD authentication integrated, and AML is not available. Deployment of models would then be facilitated using Databricks MLflow. However, one challenge we face is dependency installation during boot-up on serverless infra, which could conflict with dependencies quarantined in our internal PyPI repository.
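The promote-on-merge step in the CI flow above can be sketched as a small helper script. This is an assumption-laden illustration, not the actual pipeline: `CI_COMMIT_BRANCH` is GitLab CI's predefined branch variable, and the model name `"churn-model"` is made up.

```python
# Hedged sketch: every CI run logs to an MLflow experiment, but only runs on
# the default branch promote the logged run to the Model Registry.
import os


def should_register(branch: str, default_branch: str = "main") -> bool:
    """Register a new model version only for merges to the default branch."""
    return branch == default_branch


if __name__ == "__main__":
    branch = os.environ.get("CI_COMMIT_BRANCH", "")
    run_id = os.environ.get("MLFLOW_RUN_ID", "")
    if should_register(branch) and run_id:
        import mlflow  # requires mlflow and a configured tracking server
        mlflow.register_model(
            model_uri=f"runs:/{run_id}/model",
            name="churn-model",  # illustrative registry name
        )
```

Feature-branch runs thus stay as experiment entries for review in the pull request, while main-branch runs become registered model versions.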

Nok Lam Chan

04/05/2024, 9:52 AM
Can you elaborate on model lineage? What kind of lineage are you thinking?

Jorit Studer

04/06/2024, 7:25 PM
Keeping track of the whole model lifecycle: the datasets that went in, hyperparams, company governance reports, the git commit for a given model, stress testing, challenger models, Monte Carlo simulations, custom approval flows, and so on. I've seen this loosely spread everywhere, and it should be part of the "model registry" in my opinion.