PyTorch 1.0 is the result of merging PyTorch and Caffe2. What exactly does that mean? Yangqing Jia himself gave the answer. For the original article, see: Caffe2 + PyTorch = PyTorch 1.0
Announcing PyTorch 1.0 for both research and production
Bringing AI development from research to production has always required many steps and tools, making it time-consuming and complicated to test, deploy, and iteratively improve new approaches. To accelerate and streamline this process, we are introducing PyTorch 1.0, the next generation of our open-source AI framework.
PyTorch 1.0 takes the modular, production-oriented capabilities from Caffe2 and ONNX and combines them with PyTorch's existing flexible, research-focused design to provide a fast, seamless path from research prototyping to production deployment. With PyTorch 1.0, AI developers can both experiment rapidly and optimize performance through a hybrid front end that seamlessly transitions between imperative and declarative execution modes. The technology in PyTorch 1.0 is already used at scale in many Facebook products and services, including performing 6 billion text translations per day.
PyTorch 1.0 will be available in beta within the next few months, and will include a family of tools, libraries, pre-trained models, and datasets for each stage of development, enabling the community to quickly create and deploy new AI innovations at scale.
From research to production
PyTorch's imperative front end allows for faster prototyping and experimentation through its flexible and productive programming model. The first version of PyTorch launched a little over a year ago (this article was written in May 2018), and its speed, efficiency, and ability to support cutting-edge AI models such as dynamic graphs quickly made it an important tool for researchers. It has had more than 1.1 million downloads and was the second-most-cited deep learning framework on arXiv over the past month.
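To make the imperative programming model concrete, here is a minimal eager-mode sketch; the `TinyRNN` module, its sizes, and the inputs are invented for illustration. The loop length is decided at run time per call, which is exactly the kind of dynamic graph behavior described above.

```python
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    """A made-up module: the number of recurrent steps depends on the input."""
    def __init__(self, hidden_size=8):
        super().__init__()
        self.cell = nn.Linear(hidden_size * 2, hidden_size)

    def forward(self, steps, h):
        # Dynamic graph: the iteration count is chosen at call time,
        # not fixed when the model is defined.
        for _ in range(steps):
            h = torch.tanh(self.cell(torch.cat([h, h], dim=1)))
        return h

model = TinyRNN()
out = model(steps=3, h=torch.randn(1, 8))  # run and debug it like ordinary Python
print(out.shape)
```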
While the current version of PyTorch offers AI researchers great flexibility, its performance at production scale can fall short because of its tight coupling to Python. We often need to translate research code into Caffe2 for production use. Caffe2's graph-based executor lets production engineers take advantage of many state-of-the-art optimizations, such as graph transformations, efficient memory reuse, and tight hardware interface integration. The Caffe2 project was launched two years ago to standardize Facebook's AI deployment tooling; it now runs neural networks across Facebook's servers and on more than one billion phones, spanning eight generations of iPhones and six generations of Android CPU architectures.
Migrating from PyTorch to Caffe2 used to be a manual, time-consuming, and error-prone process. To solve this problem, we partnered with many hardware vendors to create ONNX (Open Neural Network Exchange), an open format for representing deep learning models. With ONNX, developers can share models between different frameworks.
We have already used these tools (PyTorch, Caffe2, and ONNX) to build and deploy Translate, a tool that runs at scale to translate the 48 most commonly used languages.
However, even though this combination of tools is efficient, it still requires complex and time-consuming manual work.
Unifying research and production capabilities in one framework
PyTorch 1.0 fuses immediate and graph execution modes, providing both the flexibility needed for research and the performance optimizations needed for production deployment. More specifically, rather than force developers to do an entire code rewrite to optimize or migrate from Python, PyTorch 1.0 provides a hybrid front end enabling you to seamlessly share the majority of code between immediate mode for prototyping and graph execution mode for production.
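The sketch below illustrates this idea, assuming the `torch.jit` tracing API that the hybrid front end (Torch Script) eventually shipped with in PyTorch 1.0; the `Classifier` model, shapes, and file name are invented for this example.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=1)

model = Classifier()

# Immediate (eager) mode: run and debug as plain Python while prototyping.
eager_out = model(torch.randn(1, 4))

# Graph mode: trace the same unmodified code into a static graph for production.
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("classifier.pt")          # serialized artifact that can be loaded without Python source
graph_out = traced(torch.randn(1, 4))
```

The point of the design is that the same module definition serves both modes, so no separate production rewrite is needed.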
In addition, ONNX is natively woven into PyTorch 1.0 as the model export format, making models from PyTorch 1.0 interoperable with other AI frameworks. ONNX also serves as the integration interface for accelerated runtimes or hardware-specific libraries. This gives developers full freedom to mix and match the best AI frameworks and tools without having to take on resource-intensive custom engineering. Facebook is committed to supporting new features and functionalities for ONNX, which continues to be a powerful open format as well as an important part of developing with PyTorch 1.0.
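As a minimal sketch of that export path, the `torch.onnx.export` entry point converts a PyTorch model into an ONNX file; the toy model, shapes, and file name below are placeholders for illustration.

```python
import torch
import torch.nn as nn

# A throwaway model standing in for a real research model.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
model.eval()

dummy_input = torch.randn(1, 10)  # example input that defines the exported shapes

# Export to ONNX so the model can be consumed by other frameworks
# or hardware-specific accelerated runtimes.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
```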
Building end-to-end deep learning systems
Along with PyTorch 1.0, we’ll also open-source many of the AI tools we are using at scale today. These include Translate — a PyTorch Language Library — for fast, flexible neural machine translation, as well as the next generation of ELF, a comprehensive game platform for AI reasoning applications. Developers can also take advantage of tools like Glow, a machine learning compiler that accelerates framework performance on different hardware platforms, and Tensor Comprehensions, a tool that automatically generates efficient GPU code from high-level mathematical operations. We have also open-sourced other libraries, such as Detectron, which supports object-detection research, covering both bounding box and object instance segmentation outputs. Visit our AI developer site at facebook.ai/developers for the full list, and learn more about PyTorch on the PyTorch and Caffe2 blogs.
Over the coming months, we’re going to refactor and unify the codebases of both the Caffe2 and PyTorch 0.4 frameworks to deduplicate components and share abstractions. The result will be a unified framework that supports efficient graph-mode execution with profiling, mobile deployment, extensive vendor integrations, and more. As with other open AI initiatives like ONNX, we’re also partnering with other companies and the community to give more developers these accelerated research to production capabilities. To start, Microsoft plans to support PyTorch 1.0 in their Azure cloud and developer offerings, including Azure Machine Learning services and Data Science Virtual Machines, and Amazon Web Services currently supports the latest version of PyTorch, optimized for P3 GPU instances, and plans to make PyTorch 1.0 available shortly after release in their cloud offerings, including its Deep Learning AMI (Amazon Machine Image).
This is just the beginning, as we look to create and share better AI programming models, interfaces and automatic optimizations. AI is a foundational technology at Facebook today, making existing products better and powering entirely new experiences. By opening up our work via papers, code, and models, we can work with all AI researchers and practitioners to advance the state of the art faster and to help apply these techniques in new ways.