# questions
b
Running modular pipelines with custom inputs in notebooks Hello everyone, I’m trying to run a modular pipeline on some custom inputs in a notebook. What would be the best way to do it? I am currently trying with
session.run()
but I’m unsure of how to modify the input datasets.
n
Where is the input coming from? Are you trying to run something in Python and then pass that in-memory object to Kedro?
In short, it's not possible to do this with the session at the moment, since the DataCatalog is created by the session and any changes to it would be lost. Instead, you need to create a DataCatalog and a runner yourself, and use DataCatalog.add(dataset_name, dataset) to override a dataset or add a new one.
b
will try that. Thank you!