Jeremi DeBlois-Beaucage06/13/2023, 4:32 PM
Deepyaman Datta06/13/2023, 5:56 PM
marrrcin06/13/2023, 6:02 PM
Nok Lam Chan06/14/2023, 10:53 AM
it’s just a few lines of extra code. See https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html For model inference it could be quite different; something like Ray can be useful if you need to handle a large number of requests. For batch inference I don’t think there is any extra performance to squeeze there.
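A minimal sketch of the "few lines of extra code" pattern from that tutorial, assuming PyTorch is installed and using a toy `nn.Linear` model for illustration:

```python
import torch
import torch.nn as nn

# Toy model standing in for whatever you're training.
model = nn.Linear(10, 2)

# nn.DataParallel splits each input batch across the visible GPUs
# and gathers the outputs; with 0 or 1 GPU it is a no-op wrapper.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(8, 10).to(device)
out = model(batch)
print(out.shape)  # torch.Size([8, 2]) - batch dim is preserved
```

On a CPU-only machine this runs unchanged, which is part of the appeal: the wrapping is additive and doesn't alter the training loop.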