@Hugo Evers question on inference: reading about the Hopsworks FTI architecture these days, I was thinking: if the output of a Kedro training pipeline is a model that can be serialized (ONNX, or your format of choice), and given that latency can be sensitive in some cases, is it worth doing inference from Kedro at all?
this might sound like I don't want you to use Kedro for inference pipelines; of course I want you to use Kedro for everything, but I'm trying to understand how others do it.
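to make the split I mean concrete, here's a minimal sketch (all names hypothetical; pickle stands in for ONNX or whatever export format you use): the training pipeline's job ends at serializing the model artifact, and a lightweight service loads that artifact and serves predictions without importing the pipeline framework on the hot path:

```python
import pickle

# Hypothetical training-pipeline output: a tiny linear model's parameters.
# In a real Kedro pipeline, a node would return the trained model and a
# catalog entry would handle serialization (ONNX, pickle, ...).
model = {"weights": [0.5, -1.2], "bias": 0.3}

# Training side: serialize the artifact (stand-in for an ONNX export).
blob = pickle.dumps(model)

# Inference side: a lightweight service deserializes the artifact once at
# startup; no Kedro (or any pipeline framework) on the request path.
loaded = pickle.loads(blob)

def predict(features):
    # plain dot product + bias, kept deliberately framework-free
    return sum(w * x for w, x in zip(loaded["weights"], features)) + loaded["bias"]

print(predict([1.0, 2.0]))  # 0.5*1.0 + (-1.2)*2.0 + 0.3 = -1.6
```

the point of the sketch is only the boundary: everything above `blob` lives in the (Kedro) training pipeline, everything below it lives wherever latency matters.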