# plugins-integrations
Hey team, another question: after training and testing the AzureML Kedro model, the .pck file is generated. Is there any guidance to follow in Azure ML for the inference pipeline? As far as I understand, we don't need Kedro for that step? Is there a tutorial for that? Please keep in mind that there are 4-5 steps/functions that need to be executed for the data preprocessing of the input data during inference, for which I have the functions in Python.
It’s best to prepare the model in MLflow format (including the pre-processing steps there). https://mlflow.org/docs/latest/python_api/mlflow.pyfunc.html#creating-custom-pyfunc-models, store it during training and then just deploy it via standard Azure ML deployment procedures.
And spoiler: this showcases how to do it easily with the kedro-mlflow plugin ;) https://github.com/Galileo-Galilei/kedro-mlflow-tutorial
Ok, great, thanks a lot! I appreciate your support! As far as I understood, @marrrcin suggests doing the inference with MLflow separately from the local Kedro dockerizing in Azure ML, after training the model and generating the model artifact in Azure ML? In other words, create an inference.py file in Azure ML and use mlflow.sklearn.load_model to link the generated model (using Kedro in Azure ML, see the attached) with the URI path?
Please correct me if I'm wrong. Thank you very much in advance!
In particular, during inference I want to connect the endpoint to an Excel file and take the input vector from there, then run the pre-processing steps and the prediction using the inference pipeline. That's why I think it might be too complicated to do with Kedro. Maybe I'm wrong.
No, the very goal of the plugin is to package your entire inference pipeline in a pickle so you can deploy it easily: e.g. instead of regressor.pickle you create inference_pipeline.pickle (which will include your preprocessing) and then deploy this custom model. Have you tried the tutorial? Does it help?
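To make the idea concrete, here is a toy sketch in plain Python of what "one pickle containing preprocessing + model" means. These are not the plugin's actual classes: `InferencePipeline`, `standardize`, and `ThresholdModel` are stand-ins invented for the example.

```python
import pickle


def standardize(values):
    # Stand-in preprocessing step: center the values
    mean = sum(values) / len(values)
    return [v - mean for v in values]


class ThresholdModel:
    # Stand-in for the trained estimator
    def predict(self, values):
        return [1 if v > 0 else 0 for v in values]


class InferencePipeline:
    """One artifact bundling the preprocessing steps and the fitted model."""

    def __init__(self, steps, model):
        self.steps = steps
        self.model = model

    def predict(self, data):
        for step in self.steps:
            data = step(data)
        return self.model.predict(data)


# Serialize everything as a single inference_pipeline.pickle-style artifact
blob = pickle.dumps(InferencePipeline([standardize], ThresholdModel()))
restored = pickle.loads(blob)
```

Whoever deploys `restored` never needs to know about the preprocessing; it runs inside `predict`, which is exactly what makes the deployment step simple.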
Hey team, it was quite a busy period. In the meantime, I read the provided documentation and tried to implement the steps for the MLOps of my project using Kedro and MLflow. Thank you very much for your replies! Very helpful! So the key here is to use the same approach (including dockerizing), but for model training, validation, and inference use an MLflow model artifact? After creating the inference with those tools, would it then be possible to get the inference endpoint and use it in Excel, taking a column vector as input, preprocessing it, and making predictions? How should I then create the input dataset format in catalog.yml for the inference pipeline?
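On the catalog.yml question, a hedged sketch: if the input vector lives in an Excel file, an entry along these lines could feed the inference pipeline. The dataset name and filepath are placeholders, and the exact type string depends on your kedro-datasets version (older releases spell it `pandas.ExcelDataSet`).

```yaml
# catalog.yml sketch -- names and paths are placeholders
inference_input:
  type: pandas.ExcelDataset
  filepath: data/01_raw/input_vector.xlsx
  load_args:
    sheet_name: 0
```

The inference pipeline's first node would then take `inference_input` as its input dataset, exactly like any other catalog entry.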