# questions
d
hey, i'm trying to find a solution for running the same pipeline with different parameters. i have a preprocess pipeline that i want to run over 5 different datasets, and if i add more datasets to the catalog in the future, the pipeline should process them as well. how can i create a template pipeline and set the preprocess pipeline to run over a list of dataset names from the catalog?
m
Will `PartitionedDataSet` be enough for you?
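For context, a minimal catalog entry for a `PartitionedDataSet` might look like this (the entry name, path, and underlying dataset type here are illustrative assumptions, not taken from the original project):

```yaml
# catalog.yml — sketch of a partitioned dataset entry
raw_partitions:
  type: PartitionedDataSet
  path: data/01_raw/partitions   # each file in this folder becomes one partition
  dataset: pandas.CSVDataSet
```

A node consuming `raw_partitions` then receives a dict mapping partition ids to load callables, so new files dropped into the folder are picked up without catalog changes.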
d
not exactly — in later steps i'll need to run over each dataset and extract features from it according to the config .yml file. i want to control the whole project from the config file. how can i use dynamic inputs rather than hard-coded variable names?
m
Yeah, dynamic pipelines are a recurring topic here in the #questions channel, and Kedro does not support them in general. There are some workarounds, but they are rather hacky 😉
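One common workaround is to loop over a list of dataset names taken from config and instantiate the same pipeline template once per dataset, deriving the input/output catalog names from each dataset name. The sketch below models that pattern in plain Python; the names `preprocess`, `DATASETS`, and `build_node_specs` are hypothetical, and in a real Kedro project each spec tuple would instead be a `kedro.pipeline.node(...)` call inside `create_pipeline()`:

```python
def preprocess(raw):
    """Placeholder preprocessing step (assumption, not the asker's code)."""
    return {"rows": len(raw)}

# In a Kedro project this list would come from a config file
# (e.g. parameters.yml) rather than being hard-coded here.
DATASETS = ["sales", "customers", "products"]

def build_node_specs(dataset_names):
    """Create one (func, input_name, output_name) spec per dataset.

    Adding a new dataset name to the config (and a matching catalog
    entry) automatically adds a corresponding node — the behaviour
    asked about above.
    """
    return [
        (preprocess, f"{name}_raw", f"{name}_preprocessed")
        for name in dataset_names
    ]

specs = build_node_specs(DATASETS)
for func, inp, out in specs:
    print(inp, "->", out)
```

The hacky part is that the catalog must still contain an entry for every derived name (`sales_raw`, `sales_preprocessed`, ...), which is usually handled with YAML anchors or a `TemplatedConfigLoader`.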
j
This is a question that recurs so often that I think it would be worth implementing this functionality in some future version of Kedro. It seems like a fairly common use case to want to process many different partitioned datasets with the same pipeline without having to manually add all inputs and outputs to the catalog.