Model formats

In order to deploy CNN models in FastPathology, only a select set of formats is supported. Hence, models trained in, for instance, Keras or PyTorch need to be converted before they can be used. The supported formats are: TensorFlow (.pb), OpenVINO (.xml/.bin/.mapping), and TensorRT (.uff).

  • Keras models can be converted to TensorFlow using this GitHub repo (see also the first sketch after this list).
  • PyTorch models can be converted to OpenVINO through TorchScript and ONNX. An example can be seen here (see also the second sketch after this list).
  • Matlab models can be converted to OpenVINO through ONNX. An example can be seen here.
  • TensorFlow models can be converted to TensorRT. Example (see also the third sketch after this list).
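
All three supported targets can be reached with fairly short conversion scripts. The sketches below are illustrative only: file names, input shapes, and node names are placeholders, and the linked repositories above remain the reference approach.

A minimal sketch for the Keras case, assuming TensorFlow 1.x with tf.keras and a trained model saved as model.h5 (a placeholder name), freezing the session graph into a single .pb file:

```python
# Minimal sketch: freeze a tf.keras model (TensorFlow 1.x) into a single .pb file.
# "model.h5" and "model.pb" are placeholder names.
import tensorflow as tf
from tensorflow.python.framework import graph_util

tf.keras.backend.set_learning_phase(0)          # inference mode (no dropout/BN updates)
model = tf.keras.models.load_model("model.h5")  # trained Keras model

sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]

# Replace variables with constants and serialize the frozen graph to disk
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.io.write_graph(frozen, ".", "model.pb", as_text=False)
```

For PyTorch, the usual route is to export to ONNX first. A minimal sketch, using a tiny placeholder network standing in for a trained model and an assumed 3x256x256 input:

```python
# Minimal sketch: export a PyTorch model to ONNX.
# The network, input shape and file names are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 1), nn.Sigmoid())  # stand-in for a trained model
net.eval()

dummy_input = torch.randn(1, 3, 256, 256)  # one RGB 256x256 image
torch.onnx.export(net, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
```

The resulting model.onnx can then be fed to OpenVINO's Model Optimizer (e.g. `python mo.py --input_model model.onnx`), which produces the .xml/.bin/.mapping files FastPathology expects; the same step applies to ONNX files exported from Matlab.

For TensorRT, NVIDIA's uff converter package can turn a frozen TensorFlow graph into a .uff file. A minimal sketch, assuming a frozen graph model.pb with an output node named "output" (both placeholders):

```python
# Minimal sketch: convert a frozen TensorFlow graph to TensorRT's UFF format.
# "model.pb", the output node name and "model.uff" are placeholders.
import uff

uff.from_tensorflow_frozen_model(
    frozen_file="model.pb",
    output_nodes=["output"],      # must match the output node in the frozen graph
    output_filename="model.uff")
```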

I have tested all of these conversions personally and gotten them working. However, not all models can be converted; models containing custom layers are one example. There are ways to convert even such models, but it requires some additional work (for TensorFlow to OpenVINO, see here). The formats also differ in how restrictive they are: OpenVINO seems to support the widest range of layers and operations, while TensorRT is much more limited. For more information about pros and cons, see here.

The only format that might be introduced in the near future is ONNX, as it is the best-supported export format for both PyTorch and Matlab. It is also tempting to introduce some kind of deployment through Python, as most people train their models in Python and not all models are easy to convert. However, since we want to render predictions in real time while simultaneously running inference, it might take some time before proper support is added.
