# questions
e
Anyone dealt with defining datasets for blobs like model archives for TorchServe? Pretty straightforward use case, but was wondering if anyone else had tried anything šŸ™‚
d
Do the different pickle options available not work?
e
Unfortunately not - TorchServe expects a model archive that contains various model artifacts plus a custom_handler.py that handles loading the model, sticking it behind an API, and routing: https://github.com/pytorch/serve/blob/master/model-archiver/README.md
I'll probably just create a simple Dataset that uploads/downloads the model archive to the local FS and go from there. Just wanted to see what others might have come up with.
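For context, a minimal sketch of that "simple Dataset" idea, assuming the .mar archive is treated as an opaque blob that's just copied to and from a catalog filepath (class name and paths are hypothetical, not the poster's actual code):

```python
import shutil
from pathlib import Path
from typing import Any, Dict

from kedro.io import AbstractDataSet


class ModelArchiveDataSet(AbstractDataSet):
    """Loads/saves a TorchServe .mar file as a plain file copy on the local FS."""

    def __init__(self, filepath: str):
        self._filepath = Path(filepath)

    def _load(self) -> Path:
        # Return the archive path; downstream nodes hand it to TorchServe.
        if not self._filepath.exists():
            raise FileNotFoundError(f"No model archive at {self._filepath}")
        return self._filepath

    def _save(self, data: str) -> None:
        # `data` is the path of a freshly built .mar; copy it into the catalog location.
        self._filepath.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(str(data), self._filepath)

    def _exists(self) -> bool:
        return self._filepath.exists()

    def _describe(self) -> Dict[str, Any]:
        return {"filepath": str(self._filepath)}
```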
d
I guess you could also subclass/work off `APIDataSet` to do the saving to the endpoint that way
šŸ‘ 1
e
That's a great idea! We deploy TorchServe instances per model given the types of models and pipelines we have, but this might be useful in the future!
d
Out of interest (this is something tangential I'm working on), would you like it if your Kedro pipeline was a service that also lived as an endpoint?
e
It would certainly make deployment of models more seamless
Right now I have a node that calls TorchServe through `os.command`, which is hacky but works. Something more native would be really cool
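For what it's worth, that "hacky but works" node might look roughly like the sketch below, using `subprocess` rather than a raw shell call (the model name and model-store directory are made up):

```python
import subprocess


def serve_model(archive_name: str) -> None:
    """Kedro node that (re)starts a TorchServe instance for the given .mar archive."""
    # Stop any running instance first; ignore the error if nothing is running.
    subprocess.run(["torchserve", "--stop"], check=False)
    subprocess.run(
        [
            "torchserve",
            "--start",
            "--model-store", "model_store",           # directory containing the .mar
            "--models", f"my_model={archive_name}",   # hypothetical model name
        ],
        check=True,
    )
```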
d
super interesting thank you
šŸ‘ 1