Hello Kedro community,
I'm using Kedro to train a PyTorch model with PyTorch Lightning.
I know there is a plugin (kedro-azureml) that supports distributed training, but since I'm currently not working on Azure, that's not an option for me (correct me if I'm wrong).
My problem occurs when I use torch.utils.data.DataLoader with num_workers > 0: Kedro seems to somehow block the worker processes, and pl.Trainer.fit() freezes.
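For context, here is a minimal stdlib sketch of what I understand the DataLoader workers to be doing (load_sample is a made-up stand-in, not my actual pipeline code). My guess is the freeze is related to the multiprocessing start method, since DataLoader also accepts multiprocessing_context="spawn" as an alternative to the Linux default of fork:

```python
import multiprocessing as mp

# Hypothetical stand-in for a DataLoader worker's job:
# take an index, return a processed sample.
def load_sample(i):
    return i * i

if __name__ == "__main__":
    # DataLoader workers are ordinary multiprocessing workers. On Linux the
    # default start method is "fork", and forking a parent that already runs
    # background threads (as a Kedro session might) can deadlock the children.
    # "spawn" starts clean interpreter processes instead; DataLoader exposes
    # the same choice via its multiprocessing_context argument.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        samples = pool.map(load_sample, [1, 2, 3])
    print(samples)  # [1, 4, 9]
```

Is passing multiprocessing_context="spawn" to the DataLoader a reasonable thing to try here, or is the blocking coming from somewhere else in Kedro?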
Is there a way to use distributed training without running on an Azure machine?