ONNX, the Open Neural Network Exchange, is an open format for ML models that lets you interchange models between various ML frameworks and tools.
With ONNX, you can build and train neural networks on cloud platforms or on dedicated machine learning systems using well-known algorithms. Once trained, a model can run on another framework, or with Windows Machine Learning (Windows ML) as part of an application on a PC or even an IoT device.
ONNX currently supports a range of machine learning frameworks, including Facebook’s popular Caffe2, the Python-based PyTorch, and Microsoft’s own Cognitive Toolkit (formerly named CNTK). While there’s no direct support for Google’s TensorFlow, you can find unofficial connectors that export to ONNX, with an official import/export tool under development. ONNX also offers its own runtimes and libraries, so your models can run on your hardware and take advantage of any accelerators you have; exporting from a supported framework is typically a single call (see the sketch below).
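As an illustration of the export path, here is a minimal sketch of exporting a PyTorch model to an ONNX file with `torch.onnx.export`; the model (an untrained ResNet-18) and the input shape are placeholders, not part of the original article.

```python
# A minimal sketch of exporting a PyTorch model to ONNX.
# The model (an untrained ResNet-18) and input shape are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18()   # placeholder network
model.eval()

# The exporter traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

The resulting `resnet18.onnx` file is what you would then load from a Windows ML application.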
To get an ONNX model to use with Windows ML, you can:
- Download a pre-trained ONNX model from the ONNX Model Zoo.
- Train your own model with services like Azure Custom Vision Service, Azure Machine Learning, or VS Tools for AI, and export to ONNX format.
  - To learn how to train a model in the cloud using Custom Vision, check out Tutorial: Use an ONNX model from Custom Vision with Windows ML (preview).
- Convert models trained in other ML frameworks into ONNX format with the WinMLTools converters, or by following the ONNX tutorials (see the conversion sketch after this list).
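For the conversion option, a WinMLTools call looks roughly like the sketch below. The exact converter names, argument names, and import paths are assumptions based on the WinMLTools documentation and may differ between package versions (`pip install winmltools`).

```python
# A minimal sketch of converting a scikit-learn model to ONNX with WinMLTools.
# Exact argument names and import paths are assumptions based on the winmltools
# documentation and can vary between versions.
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

from winmltools import convert_sklearn                            # assumed entry point
from winmltools.convert.common.data_types import FloatTensorType  # assumed import path
from winmltools.utils import save_model                           # assumed helper

# Train a small placeholder model.
X, y = load_iris(return_X_y=True)
clf = LinearSVC().fit(X, y)

# Convert to ONNX, targeting an ONNX opset (7 here) and declaring the
# input name/shape so the converter can build the graph signature.
onnx_model = convert_sklearn(
    clf, 7, initial_types=[("input", FloatTensorType([1, 4]))]
)
save_model(onnx_model, "linear_svc.onnx")
```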
Once you have an ONNX model, you can integrate it into your app’s code and then use machine learning in your Windows apps and devices!
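Windows ML itself is called from your app code (for example, C# or C++/WinRT), but before wiring the model into the app you can sanity-check the ONNX file from Python with onnxruntime. This is a stand-in check, not the Windows ML API, and the file name and input shape below are placeholders matching the export sketch above.

```python
# A stand-in sanity check of an ONNX file using onnxruntime -- this is not
# the Windows ML API, which you call from your app code. File name, input
# shape, and dtype are placeholders matching the export sketch above.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("resnet18.onnx")
input_name = session.get_inputs()[0].name

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print("output shape:", outputs[0].shape)
```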