# questions
s
Hey, I'm trying to run inference from a Kedro pipeline which reads an MLflow model and includes pre- and post-processing functions. The model is light and my concern here is latency. I've seen deployment steps using Step Functions, but is it possible to deploy it on a single Lambda function?
n
Is the bottleneck the startup time?
I think it really depends on your requirements and use case. Lambda may also run into memory issues, so you need to check whether it can handle your workload
y
👍🏼 1
This is optimized for low latency by preloading artifacts, but we need to understand your setup to help more
s
@Nok Lam Chan the model is light so memory won't be an issue; my only concern is latency. Not counting cold-start time, will a Step Function using ECS take more time than a simple Lambda function by itself? If so, how much more would it be?
@Yolan Honoré-Rougé yes, I'm following this, but in my case the pre- and post-processing is complex, so I'm not sure if mlflow serve or a similar Docker-based approach for Lambda will be sufficient. If I use kedro-docker, I can't find a way, or any docs, to deploy it on a single Lambda function
y
It should work, but I am afraid it will reload all your artifacts on each call, so there's likely a performance penalty
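One common way to avoid that reload penalty is to cache the loaded artifacts so the expensive load happens only once per process, e.g. with `functools.lru_cache`. A minimal sketch (the `load_artifacts` body and `LOAD_COUNT` counter are placeholders; in practice the body would load the MLflow model and any preprocessing artifacts):

```python
from functools import lru_cache

# Counter only exists to demonstrate that loading happens once;
# not needed in real code.
LOAD_COUNT = {"n": 0}

@lru_cache(maxsize=1)
def load_artifacts():
    # In a real pipeline this is the expensive part, e.g.
    # mlflow.pyfunc.load_model(...) plus any encoders/scalers.
    LOAD_COUNT["n"] += 1
    return {"model": "my-light-model"}  # placeholder artifact

def predict(features):
    artifacts = load_artifacts()  # cached after the first call
    return (artifacts["model"], features)
```

On a warm process every call after the first reuses the cached artifacts, so per-call latency is just the model inference plus pre/post-processing.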
If the pre and postprocessing are too complex, kedro-boot is likely the way to go
👍 1
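If you do go with a single Lambda function, the usual low-latency pattern is to load the model at module scope (so it runs once per container on cold start) and keep the handler itself thin. A minimal sketch, with a placeholder model and hypothetical `preprocess`/`postprocess` steps standing in for the real Kedro nodes and the real `mlflow.pyfunc.load_model` call:

```python
import json

def _load_model():
    # Placeholder: a real deployment would call
    # mlflow.pyfunc.load_model(...) here.
    return lambda xs: [v * 2 for v in xs]

# Loaded once per Lambda container (cold start); warm invocations
# reuse MODEL, which is what keeps per-call latency low.
MODEL = _load_model()

def preprocess(payload):
    # Stand-in for the Kedro preprocessing nodes.
    return payload["features"]

def postprocess(preds):
    # Stand-in for the Kedro postprocessing nodes.
    return {"predictions": preds}

def handler(event, context):
    features = preprocess(json.loads(event["body"]))
    preds = MODEL(features)
    return {"statusCode": 200, "body": json.dumps(postprocess(preds))}
```

Whether this fits depends on your package size and memory limits, but for a light model it avoids the Step Functions orchestration overhead entirely.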
MLflow's advantage is packaging all artifacts in a single "model", but if you want transformations unrelated to the model itself, it is likely not a good idea to use MLflow to serve your whole pipeline
👍 1