Broader Model Support

The OctoML platform is releasing an update that substantially increases model coverage across TensorFlow, PyTorch, and ONNX.

Previously, we packaged and accelerated only models that were convertible to Relay (TVM's intermediate representation). We have since updated our platform so you can deploy and accelerate models beyond TVM's coverage.

Our proprietary "engine of engines" approach tries multiple optimization strategies to deliver the best possible performance for your model on your target hardware. You no longer have to spend weeks manually trying different ONNX Runtime flags, TensorRT thread optimizations, or TVM settings; we automate the entire search process for you and ensure you always walk away with the fastest possible, immediately deployable model.
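The core idea behind searching across engines can be sketched as a simple benchmark-and-select loop. This is a minimal, hypothetical illustration only, not OctoML's actual implementation; the engine names and the simulated workloads below are stand-ins:

```python
import time

def benchmark(run_fn, warmup=2, iters=10):
    """Return the mean latency (seconds) of run_fn over `iters` timed runs."""
    for _ in range(warmup):  # discard warm-up runs (JIT, cache effects)
        run_fn()
    start = time.perf_counter()
    for _ in range(iters):
        run_fn()
    return (time.perf_counter() - start) / iters

def pick_fastest_engine(candidates):
    """candidates: dict mapping engine name -> zero-arg inference callable.

    Benchmarks every candidate and returns the fastest one plus all timings.
    """
    results = {name: benchmark(fn) for name, fn in candidates.items()}
    best = min(results, key=results.get)
    return best, results

if __name__ == "__main__":
    # Simulated engines with different fixed latencies (placeholders for
    # real ONNX Runtime / TensorRT / TVM inference calls).
    candidates = {
        "onnxruntime": lambda: time.sleep(0.002),
        "tensorrt":    lambda: time.sleep(0.001),
        "tvm":         lambda: time.sleep(0.003),
    }
    best, results = pick_fastest_engine(candidates)
    print(f"fastest engine: {best}")
```

In practice each candidate would also carry its own tuning parameters (thread counts, precision modes, schedule configurations), turning this into a larger search over engine-and-configuration pairs.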

Visit our platform today to deploy and accelerate your model!