Torch (machine learning)

Torch
Original author(s): Ronan Collobert, Koray Kavukcuoglu, Clement Farabet
Initial release: October 2002[1]
Stable release: 7.0 / September 1, 2015[2]
Written in: Lua, LuaJIT, C, CUDA and C++
Operating system: Linux, Android, Mac OS X, iOS
Type: Library for machine learning and deep learning
License: BSD License
Website: torch.ch

Torch is an open-source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language.[3] It provides a wide range of algorithms for deep learning, and is built on the fast scripting language LuaJIT with an underlying C implementation.

torch

The core package of Torch is torch. It provides a flexible N-dimensional array, or Tensor, which supports basic routines for indexing, slicing, transposing, type-casting, resizing, sharing storage and cloning. This object is used by most other packages and thus forms the core object of the library. The Tensor also supports mathematical operations like max, min and sum, statistical distributions like uniform, normal and multinomial, and BLAS operations like dot product, matrix-vector multiplication and matrix-matrix multiplication.

The following exemplifies using torch via its REPL interpreter:

> a = torch.randn(3,4)

> =a
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
-1.0434  2.2291  1.0525  0.8465
[torch.DoubleTensor of dimension 3x4]

> a[1][2]
-0.34010116549482

> a:narrow(1,1,2)
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
[torch.DoubleTensor of dimension 2x4]

> a:index(1, torch.LongTensor{1,2})
-0.2381 -0.3401 -1.7844 -0.2615
 0.1411  1.6249  0.1708  0.8299
[torch.DoubleTensor of dimension 2x4]

> a:min()
-1.7844365427828
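
The mathematical and BLAS routines follow the same conventions. As a sketch continuing the session above (b is a fresh random tensor, so only the shape of the result is deterministic):

> b = torch.randn(4,2)
> c = a * b -- matrix-matrix multiplication, equivalent to torch.mm(a, b)
> =c:size()
 3
 2
[torch.LongStorage of size 2]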

The torch package also simplifies object-oriented programming and serialization by providing various convenience functions which are used throughout its packages. The torch.class(classname, parentclass) function can be used to create object factories (classes). When the constructor is called, torch initializes and sets a Lua table with the user-defined metatable, which makes the table an object.
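
A minimal sketch of this mechanism, using the do-block pattern from the Torch documentation (the class names Animal and Dog and their methods are hypothetical):

do
  local Animal = torch.class('Animal')

  function Animal:__init(name)   -- constructor, run when Animal(...) is called
    self.name = name
  end

  function Animal:speak()
    print(self.name .. ' makes a sound')
  end
end

do
  -- the optional second argument names a parent class to inherit from
  local Dog = torch.class('Dog', 'Animal')

  function Dog:speak()           -- overrides the parent's method
    print(self.name .. ' barks')
  end
end

d = Dog('Rex')                   -- a plain Lua table with the Dog metatable set
d:speak()                        -- prints: Rex barks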

Objects created with the torch factory can also be serialized, as long as they do not contain references to objects that cannot be serialized, such as Lua coroutines and Lua userdata. However, userdata can be serialized if it is wrapped by a table (or metatable) that provides read() and write() methods.
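
A minimal sketch of the serialization round trip (the filename tensor.t7 is arbitrary):

local t = torch.randn(3, 4)
torch.save('tensor.t7', t)          -- binary serialization to a file
local u = torch.load('tensor.t7')   -- deserialize into a fresh object
print(t:eq(u):min())                -- 1: every element survived the round trip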

nn

The nn package is used for building neural networks. It is divided into modular objects that share a common Module interface. Modules have a forward() and a backward() method that allow them to feed data forward and backpropagate gradients, respectively. Modules can be combined using module composites, like Sequential, Parallel and Concat, to create complex task-tailored graphs. Simpler modules like Linear, Tanh and Max make up the basic component modules. This modular interface provides first-order automatic differentiation. What follows is an example use-case for building a multilayer perceptron using Modules:

> mlp = nn.Sequential()
> mlp:add( nn.Linear(10, 25) ) -- 10 input, 25 hidden units
> mlp:add( nn.Tanh() ) -- some hyperbolic tangent transfer function
> mlp:add( nn.Linear(25, 1) ) -- 1 output
> =mlp:forward(torch.randn(10))
-0.1815
[torch.Tensor of dimension 1]
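
The composite modules mentioned above compose the same way. For instance, a sketch using Concat (the layer sizes here are arbitrary), which applies several modules to one input and concatenates their outputs:

> c = nn.Concat(1) -- concatenate outputs along dimension 1
> c:add( nn.Linear(10, 3) )
> c:add( nn.Linear(10, 5) )
> =c:forward(torch.randn(10)):size()
 8
[torch.LongStorage of size 1]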

Loss functions are implemented as sub-classes of Criterion, which has an interface similar to that of Module: it also has forward() and backward() methods, here computing the loss and backpropagating gradients, respectively. Criteria are helpful for training a neural network on classical tasks. Common criteria are the mean squared error criterion, implemented in MSECriterion, and the cross-entropy criterion, implemented in ClassNLLCriterion. What follows is an example of a Lua function that can be iteratively called to train an mlp Module on input Tensor x and target Tensor y with a scalar learningRate:

function gradUpdate(mlp, x, y, learningRate)
  local criterion = nn.ClassNLLCriterion()
  local pred = mlp:forward(x)                        -- forward pass through the network
  local err = criterion:forward(pred, y)             -- compute the loss
  mlp:zeroGradParameters()                           -- clear gradients from previous calls
  local gradCriterion = criterion:backward(pred, y)  -- gradient of the loss w.r.t. pred
  mlp:backward(x, gradCriterion)                     -- backpropagate through the network
  mlp:updateParameters(learningRate)                 -- plain gradient descent step
  return err
end

The nn package also has a StochasticGradient class for training a neural network using stochastic gradient descent, although the optim package provides many more options in this respect, such as momentum and weight-decay regularization.
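
A minimal sketch of an optim-based training loop with momentum and weight decay, reusing the mlp defined above (optim is installed separately, e.g. via luarocks; x, y, the criterion and the hyperparameters here are placeholders):

require 'optim'

local criterion = nn.MSECriterion()
local x, y = torch.randn(10), torch.randn(1)

-- flatten all learnable weights into a single parameter vector
local params, gradParams = mlp:getParameters()
local optimState = { learningRate = 0.01, momentum = 0.9, weightDecay = 1e-4 }

-- optim expects a closure returning the loss and the gradient w.r.t. params
local function feval(p)
  if p ~= params then params:copy(p) end
  gradParams:zero()
  local pred = mlp:forward(x)
  local loss = criterion:forward(pred, y)
  mlp:backward(x, criterion:backward(pred, y))
  return loss, gradParams
end

for i = 1, 100 do
  optim.sgd(feval, params, optimState)   -- one SGD step with momentum and weight decay
end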

Other packages

Many packages other than the official ones above are used with Torch; they are listed in the Torch cheatsheet. These extra packages provide a wide range of utilities such as parallelism, asynchronous input/output, image processing, and so on.

Applications

Torch is used by Google DeepMind,[4] the Facebook AI Research Group,[5] IBM,[6] Yandex[7] and the Idiap Research Institute.[8] Torch has been extended for use on Android[9] and iOS.[10] It has been used to build hardware implementations for data flows like those found in neural networks.[11]

Facebook has released a set of extension modules as open source software.[12]

References

  1. Collobert, Ronan; Bengio, Samy; Mariéthoz, Johnny (2002). "Torch: a modular machine learning software library". IDIAP Research Report 02-46.
  2. torch/torch7 release history, GitHub.
  3. Collobert, Ronan; Kavukcuoglu, Koray; Farabet, Clément (2011). "Torch7: A Matlab-like Environment for Machine Learning". BigLearn Workshop, NIPS.
  4. What is going on with DeepMind and Google?
  5. KDnuggets Interview with Yann LeCun, Deep Learning Expert, Director of Facebook AI Lab
  6. Hacker News
  7. Yann Lecun's Facebook Page
  8. IDIAP Research Institute : Torch
  9. Torch-android GitHub repository
  10. Torch-ios GitHub repository
  11. NeuFlow: A Runtime Reconfigurable Dataflow Processor for Vision
  12. "FAIR open sources deep-learning modules for Torch". Facebook Research blog, January 2015.

External links

  • Official website