This repository was archived by the owner on Aug 30, 2018. It is now read-only.
Based on this tutorial, http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html, we need to specify the batch_size when exporting the model from PyTorch to ONNX. In some cases we need a dynamic batch_size at inference time. Do you have any advice on how we can achieve this?