Original author(s) | Facebook, Microsoft
---|---
Developer(s) | Linux Foundation
Initial release | September 2017
Written in | C++, Python
Operating system | Windows, Linux
Type | Artificial intelligence ecosystem
License | Apache License 2.0 (initially MIT License)
Website | onnx
The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools to promote innovation and collaboration in the AI sector. ONNX is available on GitHub.
ONNX was originally named Toffee and was developed by the PyTorch team at Facebook. In September 2017 it was renamed to ONNX and announced by Facebook and Microsoft. Later, IBM, Huawei, Intel, AMD, Arm and Qualcomm announced support for the initiative.
In October 2017, Microsoft announced that it would add its Cognitive Toolkit and Project Brainwave platform to the initiative.
In November 2019, ONNX was accepted as a graduate project in the Linux Foundation's LF AI Foundation.
In October 2020 Zetane Systems became a member of the ONNX ecosystem.
The initiative targets:
- Allowing developers to move more easily between frameworks, some of which may be more desirable for specific phases of the development process, such as fast training, network architecture flexibility, or inferencing on mobile devices.
- Allowing hardware vendors and others to improve the performance of artificial neural networks from multiple frameworks at once by targeting the ONNX representation.
ONNX provides definitions of an extensible computation graph model, built-in operators and standard data types, focused on inferencing (evaluation).
Each computation dataflow graph is a list of nodes that form an acyclic graph. Nodes have inputs and outputs. Each node is a call to an operator. Metadata documents the graph. Built-in operators are to be available on each ONNX-supporting framework.