# questions
e
I have another basic question. I'm learning how to productionize ML apps and have worked through some tutorials, but nothing real. In the tutorials I've seen, when developers want to make their ML model available for inference, they use a framework like FastAPI or Flask so that the consumer can pass data to an endpoint and get back an inference. What I don't quite understand yet is that with Kedro, everything is encapsulated within pipelines, and if I call the project, the default pipeline runs, which could be the data ingestion and the model training. How do we handle inference with Kedro? Do I make an inference pipeline separate from the other pipelines? Do I use FastAPI to create endpoints? In the spaceflights example, the purpose is supposed to be to generate predictions, but I don't see where in that example inference with the trained model is addressed. Any wisdom is greatly appreciated.
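For context, a separate inference pipeline is indeed a common pattern: the training pipeline persists the fitted model through the catalog, and a second registered pipeline loads it and applies it to new data. A minimal sketch, where the dataset names `regressor` and `inference_input` and the node function are assumptions loosely following the spaceflights catalog:

```python
# A minimal sketch of a standalone inference pipeline. It assumes the
# training pipeline persisted the fitted model in the catalog as
# "regressor" and that new rows arrive via a dataset named
# "inference_input" (both names are assumptions).
import pandas as pd
from kedro.pipeline import Pipeline, node, pipeline


def predict(regressor, data: pd.DataFrame) -> pd.DataFrame:
    """Apply the trained model to unseen rows and return them with predictions."""
    data = data.copy()
    data["prediction"] = regressor.predict(data)
    return data


def create_inference_pipeline(**kwargs) -> Pipeline:
    return pipeline(
        [
            node(
                func=predict,
                inputs=["regressor", "inference_input"],
                outputs="predictions",
                name="predict_node",
            )
        ]
    )
```

If you register this under the name `inference` in `pipeline_registry.py`, you can run it on demand with `kedro run --pipeline=inference`, independently of ingestion and training.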
m
Hi Emilio, you might enjoy this (very brief) tutorial on that subject:

https://www.youtube.com/watch?v=z7MIq-B4hPA&t=484s
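For the endpoint side of the question, a thin serving layer can live outside the pipelines entirely and just pull the trained model out of the Kedro catalog at startup. A minimal sketch, where the dataset name `regressor` and the feature fields are assumptions, and noting that session APIs vary slightly across Kedro versions:

```python
# A minimal sketch of serving a Kedro-trained model over HTTP with FastAPI.
# Assumptions: the project root is the working directory, the catalog
# persists the fitted model as "regressor", and the feature fields below
# are placeholders for your real schema.
from pathlib import Path

import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

# Load the trained model once, at startup, via the Kedro catalog.
bootstrap_project(Path.cwd())
with KedroSession.create(project_path=Path.cwd()) as session:
    model = session.load_context().catalog.load("regressor")

app = FastAPI()


class Features(BaseModel):
    engines: float
    passenger_capacity: int
    crew: float


@app.post("/predict")
def predict(features: Features):
    row = pd.DataFrame([features.dict()])
    return {"prediction": float(model.predict(row)[0])}
```

Assuming the file is saved as `serve.py`, you could run it with `uvicorn serve:app` and POST feature JSON to `/predict`.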

Regards M
👍 3
e
I'll check it out. I had seen his videos but hadn't watched them yet.