# questions
hi everyone, going back to this discussion with @Deepyaman Datta and @Juan Luis, I wanted to ask if anyone has experience with this. In summary, I have three Kedro pipelines:
• data_processing
• model_training
• inference — gets its input either from a file (batch) or from a user request (on demand)
I want to run the inference pipeline both as a scheduled/batch job and on demand.
1. How would you deploy the inference pipeline?
2. What AWS services do you recommend for the scheduled job? ECR + ECS? AWS Batch? The input is a file in an S3 location, and the output also goes to an S3 location.
3. For the user-facing API I'm using a Lambda function, and I'm loading the model.pkl from s3:/data/06_model_output/. I would like to use the inference pipeline here instead. How can I pass the request input to the pipeline? It is a DataFrame, but it is not in the catalog. Also, the output has to be returned from the Lambda, with the DataFrame converted to JSON.
Sorry if this sounds trivial, but I'm having a hard time figuring out the architecture for this. Thanks in advance!
I would also love to hear everyone's experiences with what AWS service to use to deploy Kedro pipelines.
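One common setup for the scheduled/batch side (question 2): package the project as a container image, push it to ECR, and run it as an ECS Scheduled Task or an AWS Batch job triggered by an EventBridge schedule. The S3 input/output locations then live in catalog.yml, so the job needs no extra plumbing beyond IAM permissions. A minimal Dockerfile sketch, where the Python version and the requirements file name are assumptions about the project:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Assumes requirements.txt pins kedro plus the project's dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Runs only the inference pipeline; S3 reads/writes come from catalog.yml,
# and the task's IAM role must allow access to those buckets.
CMD ["kedro", "run", "--pipeline", "inference"]
```

The same image can serve both entry points: the scheduled job runs the default `CMD`, while an on-demand trigger can override the command or invoke a different handler.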