Running modular pipelines with custom inputs in notebooks
Hello everyone,
I’m trying to run a modular pipeline on some custom inputs in a notebook.
What would be the best way to do this? I’m currently trying with
session.run()
but I’m unsure how to modify the input datasets.
Nok Lam Chan
05/03/2024, 9:03 AM
Where is the input coming from? Are you trying to run something in Python and then pass that in-memory object to Kedro?
Nok Lam Chan
05/03/2024, 9:05 AM
In short, it's not possible to do this with Session at the moment, since the DataCatalog is created by the session and any changes to it would be lost.
To do that, you need to create a DataCatalog and a runner yourself, and use DataCatalog.add(dataset_name, dataset) to override or add a dataset.