Facebook Open-Sources Torchnet to Accelerate Deep Learning; Caffe and TensorFlow Support May Follow


  • Written in Lua
  • Encourages code reuse, reducing the chance of introducing bugs
  • Simplifies asynchronous, parallel data loading, improving multi-GPU efficiency
  • May not remain tied to Torch: its abstractions could also be implemented on Caffe and TensorFlow
  • The approach is somewhat similar to the Blocks and Fuel frameworks for Theano
  • Already used in Facebook's image recognition and NLP work

Facebook engineer Laurens van der Maaten says the point of Torchnet is not to make Torch itself faster, but to simplify deep learning work. For example, it can reduce I/O overhead, which matters especially for large neural networks.
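The I/O savings come from loading data in the background while the model trains. Torchnet itself is written in Lua; the following is a minimal Python sketch of the same idea (a hypothetical `prefetch` helper, not Torchnet's actual API), where a worker thread fills a bounded buffer so the training loop never blocks on disk reads.

```python
# Hypothetical sketch (not Torchnet's actual Lua implementation): a background
# thread prefetches batches into a bounded queue so training never waits on I/O.
import queue
import threading

def prefetch(load_batch, num_batches, buffer_size=4):
    """Yield batches loaded by a background worker thread, in order."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks the end of the stream

    def worker():
        for i in range(num_batches):
            q.put(load_batch(i))  # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = q.get()
        if batch is sentinel:
            break
        yield batch

# Usage: the training loop consumes one batch while the next loads in parallel.
batches = list(prefetch(lambda i: [i] * 2, num_batches=3))
```

Because a single worker feeds a FIFO queue, batch order is preserved even though loading overlaps with consumption.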

The goal of open-sourcing Torchnet is to empower the developer community, allowing it to rapidly build effective and reusable learning systems.

A major difference from deep learning frameworks such as Caffe, Chainer, TensorFlow, and Theano is that Torchnet does not focus on efficient inference or gradient computation in deep networks. Instead, it is a framework built on top of a deep learning framework (such as torch/nn) that makes rapid experimentation easier.

Torchnet provides a collection of subpackages and implements five main types of abstractions:

  • Datasets — provide a size() function that returns the number of samples in the data set, and a get(idx) function that returns the idx-th sample in the data set.
  • Dataset Iterators — a simple for loop that runs from one to the data set size and calls the get() function with the loop value as input.
  • Engines — provide the boilerplate logic necessary for training and testing models.
  • Meters — used for performance measurements, such as the time needed to perform a training epoch or the value of the loss function averaged over all examples.
  • Logs — for logging experiments.
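The abstractions above can be sketched together in a few lines. Torchnet's real implementations are in Lua; this is an illustrative Python analogue under assumed names (`ListDataset`, `AverageMeter`, `train_engine` are invented for the sketch), showing how a dataset, its iterator, a meter, and an engine compose.

```python
# Hypothetical Python sketch of Torchnet's abstractions (the real library is
# Lua; the names below mirror the article's description, not an actual API).

class ListDataset:
    """Dataset: exposes size() and get(idx), as the article describes."""
    def __init__(self, samples):
        self.samples = samples
    def size(self):
        return len(self.samples)
    def get(self, idx):
        return self.samples[idx]

def dataset_iterator(dataset):
    """Dataset iterator: a loop over the data set size calling get().
    (Lua indexes from 1; Python from 0 -- the idea is the same.)"""
    for idx in range(dataset.size()):
        yield dataset.get(idx)

class AverageMeter:
    """Meter: tracks the loss averaged over all examples seen."""
    def __init__(self):
        self.total, self.count = 0.0, 0
    def add(self, value):
        self.total += value
        self.count += 1
    def value(self):
        return self.total / self.count

def train_engine(dataset, loss_fn):
    """Engine: boilerplate loop tying dataset, iterator, and meter together."""
    meter = AverageMeter()
    for sample in dataset_iterator(dataset):
        meter.add(loss_fn(sample))  # a real engine would also run backprop
    return meter.value()

# Usage: average a toy squared "loss" over three samples.
avg = train_engine(ListDataset([1.0, 2.0, 3.0]), loss_fn=lambda x: x * x)
```

The design point is separation of concerns: the engine owns the loop, the meter owns measurement, and the dataset owns storage, so each can be swapped out independently.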

The most important subpackages provide implementations of boilerplate code relevant to particular machine-learning problem domains, including computer vision, natural language processing, and speech processing.


