# questions
d
which kind of hook do you want to run?
a
the one I've linked - after data catalog creation
I need to load some items and initialize some singleton class with the data
and I want it to run only in the inference pipeline
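something with roughly this shape is what I have in mind, just as a sketch (the inference_items dataset and the Artifacts class are placeholders, and a hook implementation can declare only the arguments it needs):
```python
from kedro.framework.hooks import hook_impl


class Artifacts:
    """Placeholder singleton that holds data loaded once at startup."""

    _instance = None

    @classmethod
    def initialize(cls, items):
        if cls._instance is None:
            cls._instance = cls()
            cls._instance.items = items
        return cls._instance


class InitHooks:
    @hook_impl
    def after_catalog_created(self, catalog):
        # load the items from the catalog and seed the singleton;
        # note: at this point we don't yet know which pipeline will run
        Artifacts.initialize(catalog.load("inference_items"))
```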
d
I think a before_pipeline_run hook is better: you get the created data catalog, you can check which pipeline is running, and then mutate the catalog there
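something along these lines, as a sketch (the pipeline name "inference" and the dataset names are assumptions; MemoryDataset is spelled MemoryDataSet in older Kedro releases, and the exact catalog API varies a bit across versions):
```python
from kedro.framework.hooks import hook_impl
from kedro.io import MemoryDataset  # MemoryDataSet in older Kedro versions


class InferenceInitHooks:
    @hook_impl
    def before_pipeline_run(self, run_params, pipeline, catalog):
        # run_params["pipeline_name"] holds the value passed via --pipeline
        if run_params.get("pipeline_name") != "inference":
            return
        # the catalog is already created here, so it can be mutated,
        # e.g. load something and expose it to the nodes as a new dataset
        items = catalog.load("inference_items")
        catalog.add("initialized_artifacts", MemoryDataset(data=items))
```
the hook class then has to be registered in the project's settings.py, e.g. HOOKS = (InferenceInitHooks(),)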
a
yes I'm afraid that before_pipeline_run will be run every time the pipeline is executed
and I want to package that pipeline as a serving endpoint for mlflow
and I would like it to run only once for initialization
d
but you can check which --pipeline name was passed from the CLI, I think via pipeline_name
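e.g. just to see what arrives (the exact set of keys in run_params can differ between Kedro versions):
```python
from kedro.framework.hooks import hook_impl


class PipelineNameProbe:
    @hook_impl
    def before_pipeline_run(self, run_params):
        # `kedro run --pipeline inference` -> pipeline_name == "inference"
        # plain `kedro run` -> typically None (the default pipeline)
        print("pipeline_name:", run_params.get("pipeline_name"))
```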
a
yeah, I'm not familiar enough with kedro to know where I can get that context
so that's why I'm asking where to get that info :)
well, I'll test with before_pipeline_run since I'm not sure whether it will work or not
d
I’m 99% sure it will work
a
I mean whether it will run only once or many times
d
so you can find some hacky ways to maintain state, like writing a file to disk the first time and checking for its presence
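e.g. something like this (the marker path is arbitrary):
```python
from pathlib import Path

from kedro.framework.hooks import hook_impl

_MARKER = Path(".init_done")  # arbitrary marker file


class RunOnceHooks:
    @hook_impl
    def before_pipeline_run(self, catalog):
        # skip the initialization if an earlier run already did it
        if _MARKER.exists():
            return
        # ... one-off initialization using `catalog` goes here ...
        _MARKER.touch()
```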
a nice way to do this is to create a hook, put a breakpoint in and then play around with the objects live
a
yeah, I often do that by embedding IPython.embed()
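i.e. something like this inside the hook, to poke at the live objects (needs IPython installed; breakpoint() works as well):
```python
from kedro.framework.hooks import hook_impl


class DebugHooks:
    @hook_impl
    def before_pipeline_run(self, run_params, pipeline, catalog):
        # opens an interactive shell with run_params, pipeline and catalog in scope
        import IPython

        IPython.embed()
```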