Reposted: ICCV 2013 Accepted Tutorials
Organizers, please contact Ryan Farrell with needed updates.
2013-12-05 11:40:38
cuda-convnet (Python)
High-performance C++/CUDA implementation of convolutional neural networks
This is a fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
2014-07-16
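The back-propagation training mentioned above can be sketched in plain Python for a tiny two-layer feed-forward network. This is an illustration of the algorithm, not cuda-convnet's code; the data, layer sizes and learning rate are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                      # 64 samples, 3 features
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(3, 8))           # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))           # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # forward pass through the layer graph
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    # backward pass: propagate the error gradient layer by layer
    dp = (p - y) / len(X)                         # cross-entropy gradient at the output
    dW2 = h.T @ dp
    dh = dp @ W2.T * h * (1 - h)                  # chain rule through the sigmoid
    dW1 = X.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2

accuracy = ((p > 0.5) == y).mean()
```

The same forward/backward scheme extends to any directed acyclic graph of layers: the backward pass simply visits the layers in reverse topological order.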
Shogun 3.0
The machine learning toolbox's focus is on large-scale kernel methods and
especially on Support Vector Machines (SVMs) [1]. It provides a generic SVM
object interfacing to several different SVM implementations, among them the
state-of-the-art LibSVM [2] and SVMlight [3]. Each of the SVMs can be
combined with a variety of kernels. The toolbox not only provides efficient
implementations of the most common kernels, such as the Linear, Polynomial,
Gaussian and Sigmoid kernels, but also comes with a number of recent string
kernels, e.g. the Locality Improved [4], Fisher [5], TOP [6], Spectrum [7],
and Weighted Degree Kernel (with shifts) [8, 9, 10]. For the latter, the
efficient LINADD [10] optimizations are implemented. SHOGUN also offers the
freedom of working with custom pre-computed kernels. One of its key features
is the *combined kernel*, which is constructed as a weighted linear
combination of a number of sub-kernels, each of which need not operate on the
same domain. An optimal sub-kernel weighting can be learned using Multiple
Kernel Learning [11, 12, 16]. Currently, SVM two-class classification and
regression problems can be handled. In addition, SHOGUN implements a number
of linear methods such as Linear Discriminant Analysis (LDA), the Linear
Programming Machine (LPM), and (Kernel) Perceptrons, and features algorithms
to train hidden Markov models. The input feature objects can be dense, sparse
or strings, of type int/short/double/char, and they can be converted into
different feature types. Chains of *preprocessors* (e.g. subtracting the
mean) can be attached to each feature object, allowing for on-the-fly
pre-processing.
2013-11-21
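The *combined kernel* described above can be sketched in NumPy as a weighted sum of sub-kernel matrices, here a linear and a Gaussian kernel. The weights are fixed by hand in this sketch; in SHOGUN, Multiple Kernel Learning would learn them. Function names and the width parameter are our own illustrative choices.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def gaussian_kernel(X, Y, width=1.0):
    # squared Euclidean distances, then the RBF map
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * width**2))

def combined_kernel(X, Y, weights=(0.3, 0.7)):
    # weighted linear combination of sub-kernels; with non-negative
    # weights the result is again a valid (positive semi-definite) kernel
    return weights[0] * linear_kernel(X, Y) + weights[1] * gaussian_kernel(X, Y)

X = np.random.default_rng(1).normal(size=(5, 2))
K = combined_kernel(X, X)
eigvals = np.linalg.eigvalsh(K)   # all eigenvalues should be >= 0
```

Because each sub-kernel may be computed on a different feature representation, this construction is what lets SHOGUN mix kernels over heterogeneous domains (e.g. a string kernel plus a Gaussian kernel on numeric features).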
Robust PCA
Robust PCA builds on Principal Component Analysis by adding a robust function that measures outliers within each dimension and attempts to correct for them.
2013-11-07
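One simple way to realize the idea above is iteratively reweighted PCA: fit principal directions, score each sample's reconstruction residual with a robust (Huber-style) function, down-weight outliers, and refit. This is a generic sketch under our own assumptions, not any specific paper's algorithm; the function name, the Huber threshold `delta`, and the iteration count are all illustrative.

```python
import numpy as np

def robust_pca(X, k=1, iters=10, delta=1.0):
    """Weighted PCA with Huber-style down-weighting of outlying samples."""
    w = np.ones(len(X))
    for _ in range(iters):
        mu = np.average(X, axis=0, weights=w)
        Xc = (X - mu) * np.sqrt(w)[:, None]
        # leading k principal directions of the weighted, centered data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:k].T
        # robust reweighting from per-sample reconstruction residuals
        resid = np.linalg.norm((X - mu) - (X - mu) @ V @ V.T, axis=1)
        w = np.where(resid < delta, 1.0, delta / np.maximum(resid, 1e-12))
    return mu, V, w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
X[:5] += 20.0                     # inject a few gross outliers
mu, V, w = robust_pca(X, k=1)     # outliers end up with small weights
```

Note that modern usage of the term "Robust PCA" often refers instead to the low-rank-plus-sparse decomposition (principal component pursuit), which is a different formulation from the reweighting sketch shown here.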
Pattern Recognition and Machine Learning
Pattern Recognition and Machine Learning.pdf
The textbook for Christopher M. Bishop's course on pattern recognition and machine learning.
2009-12-29