
That used to be true, but nowadays there are several options for deploying PyTorch models:

1. The PyTorch C++ API (libtorch), which can trivially load and execute models exported via TorchScript/JIT (export sketch after this list)

2. ONNX export and inference in TensorRT (the highest-performance inference option; see the ONNX sketch below)

3. Or just deploy straight-up PyTorch Python code - it'll run fine in "production" (sketch below).
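
For (1), a minimal export sketch. The SmallNet module is a toy model made up for illustration:

    import torch

    class SmallNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(16, 4)

        def forward(self, x):
            return torch.relu(self.fc(x))

    model = SmallNet().eval()

    # Trace with a representative input; use torch.jit.script instead
    # if the model has data-dependent control flow.
    example = torch.rand(1, 16)
    traced = torch.jit.trace(model, example)
    traced.save("model.pt")

The saved archive can then be loaded on the C++ side with torch::jit::load("model.pt") and run with no Python dependency at all.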
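
For (2), the ONNX route is a single torch.onnx.export call. This sketch reuses model and example from above; the dynamic batch axis is just an assumption about your serving setup:

    import torch

    torch.onnx.export(
        model,                 # module from the sketch above
        example,               # representative input
        "model.onnx",
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
    )

The resulting model.onnx can then be handed to TensorRT, e.g. for a quick benchmark via trtexec --onnx=model.onnx.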
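
And (3) really is as boring as it sounds: eager-mode inference wrapped in whatever web framework you already use. A sketch, with SmallNet and the weights filename carried over as assumptions from the first example:

    import torch

    model = SmallNet()
    model.load_state_dict(torch.load("weights.pt"))
    model.eval()

    @torch.no_grad()  # skip autograd bookkeeping at inference time
    def predict(batch):
        return model(batch)

    print(predict(torch.rand(1, 16)))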

One place where PyTorch is weaker than TF is mobile. TFLite is a lot more mature and has all sorts of acceleration support (GPU, DSP). So if mobile is what you need, at this point there's really no other good choice, IMO; a conversion sketch follows.
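
For comparison, the TFLite workflow is roughly "convert once, enable a delegate on-device". A sketch of the conversion side, assuming you already have a TF SavedModel on disk (the path is hypothetical):

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())

The GPU/DSP acceleration mentioned above comes from TFLite delegates at runtime (e.g. the GPU delegate on Android/iOS, or the NNAPI and Hexagon delegates).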


