Say I have a dataset that contains data for many different entities, and I need to create a separate model for each entity with, say, sklearn. So the number of models I will be creating is dynamic. What should I take into account when building pipelines that create multiple models? Should I, for example, use a pickle dataset to store a dictionary or list of the models, or something like that?
And more importantly, if I am building multiple models on each kedro run, I guess I would need a loop inside a single Kedro node, which would make a lot of stuff happen in one node.. 🤔
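To make the question concrete, here is a minimal sketch of the loop-in-one-node approach I have in mind. The column names (`entity`, `x`, `y`) and the use of `LinearRegression` are just placeholders for illustration; the idea is that the node returns a dict of fitted models, which Kedro could then persist as a single pickled catalog entry:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression


def train_models_per_entity(df: pd.DataFrame) -> dict:
    """Fit one model per entity and return {entity_id: fitted_model}.

    Intended as the body of a single Kedro node. Column names
    'entity', 'x', 'y' are assumptions for this sketch.
    """
    models = {}
    for entity_id, group in df.groupby("entity"):
        model = LinearRegression()
        model.fit(group[["x"]], group["y"])
        models[entity_id] = model
    return models
```

The returned dict could then be saved via a pickle-based dataset in the catalog, so the dynamic number of models stays hidden inside one output.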