Jeremi DeBlois-Beaucage — 06/13/2023, 4:32 PM
Deepyaman Datta — 06/13/2023, 5:56 PM
marrrcin — 06/13/2023, 6:02 PM
Nok Lam Chan — 06/14/2023, 10:53 AM
torch: it's just a few lines of extra code.
See https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
For model inference it could be quite different: something like Ray can be useful if you need to handle a large number of requests. For batch inference, I don't think there's much extra performance to squeeze out there.