A New Mathematics-Based Framework for Analyzing Neural Networks / The Complete Guide to Deep Learning



All code can be found here.


This is a master's thesis from the University of Waterloo, Canada (author: Anthony L. Caterini), 102 pages in total.

This post is inspired by the tutorial referenced in the original article.

Deep learning, a branch of machine learning, is one of the hottest and fastest-growing AI technologies of recent years. Learning resources, including free public tutorials and tools, are now abundant, which also makes it hard for IT practitioners learning deep learning to choose among them. This complete guide to deep learning, compiled by Yerevann, collects the best self-study resources currently available online and is updated from time to time; it is well worth bookmarking. Below is the guide as compiled and translated by IT经理网 (CTOCIO):

Over the past decade, Deep Neural Networks (DNNs) have become very popular models for processing large amounts of data thanks to their successful application in many fields. These models are layered, usually containing parametrized linear and non-linear transformations at each layer of the network. However, we do not fully understand why DNNs are so effective. In this thesis, we explore one way to approach this problem: we develop a generic mathematical framework for representing neural networks and demonstrate how this framework can be used to represent specific neural network architectures.

In this post, we will implement a multi-layer neural network from scratch. You can treat the number of layers and the dimension of each layer as parameters. For example, [2, 3, 2] represents a network with a 2-dimensional input, one hidden layer of dimension 3, and a 2-dimensional output (binary classification, using softmax at the output).

Prerequisites for self-study (mathematics and programming)

In Chapter 1, we first explore the mathematical contributions to neural networks. We can rigorously explain some properties of DNNs, but these results fail to fully describe the mechanics of a generic neural network. We also note that most approaches to describing neural networks rely on breaking the parameters and inputs down into scalars rather than referencing their underlying vector spaces, which adds some awkwardness to the analysis. Our framework operates strictly over these spaces, providing a more natural description of DNNs once the mathematical objects we use are well defined and understood.

We won’t derive all the math that’s required, but I will try to give an intuitive explanation of what we are doing. I will also point to resources for you to read up on the details.

Mathematics: learners need standard undergraduate-level mathematics, for example the mathematical concepts covered in several chapters of the book Deep Learning:

We then develop the generic framework in Chapter 3. We are able to describe an algorithm for directly computing each step of gradient descent over the inner product space in which the parameters are defined. In addition, we can express the error backpropagation step in a concise, compact form. Besides the standard squared loss or cross-entropy loss, we also show that our framework extends to more complex loss functions involving the first derivative of the network.

Let’s start by generating a dataset we can play with. Fortunately, scikit-learn has some useful dataset generators, so we don’t need to write the code ourselves. We will go with the make_moons function.

Deep Learning, Chapter 2: Linear Algebra

After developing the generic framework, we apply it to three specific network examples in Chapter 4. We start with the multilayer perceptron, the simplest type of DNN, and show how to generate a gradient descent step for it. We then characterize the convolutional neural network, which has more complicated input spaces, parameter spaces, and per-layer transformations; the CNN nevertheless still fits into the generic framework. The last structure we consider is the deep auto-encoder, whose per-layer parameters are not completely independent. We are able to extend the generic framework to handle this case as well.

# Generate a dataset and plot it
import numpy as np
import sklearn.datasets
import matplotlib.pyplot as plt

np.random.seed(0)  # seed value assumed; the original omits it
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.Spectral)
plt.show()

Deep Learning, Chapter 3: Probability and Information Theory

In Chapter 5, we use some of the results from the previous chapters to develop a framework for recurrent neural networks (RNNs), the sequence-parsing DNN architecture. The parameters are shared across all layers of the network, so we need some additional machinery to describe RNNs. We first describe a generic RNN and then the specific case of the vanilla RNN. We again compute gradients directly over inner product spaces.


Deep Learning, Chapter 4: Numerical Computation

Over the past decade, Deep Neural Networks have become very popular models for processing large amounts of data because of their successful application in a wide variety of fields. These models are layered, often containing parametrized linear and non-linear transformations at each layer in the network. At this point, however, we do not rigorously understand why DNNs are so effective. In this thesis, we explore one way to approach this problem: we develop a generic mathematical framework for representing neural networks, and demonstrate how this framework can be used to represent specific neural network architectures. In chapter 1, we start by exploring mathematical contributions to neural networks. We can rigorously explain some properties of DNNs, but these results fail to fully describe the mechanics of a generic neural network. We also note that most approaches to describing neural networks rely upon breaking down the parameters and inputs into scalars, as opposed to referencing their underlying vector spaces, which adds some awkwardness into their analysis. Our framework strictly operates over these spaces, affording a more natural description of DNNs once the mathematical objects that we use are well-defined and understood. We then develop the generic framework in chapter 3. We are able to describe an algorithm for calculating one step of gradient descent directly over the inner product space in which the parameters are defined. Also, we can represent the error backpropagation step in a concise and compact form. Besides a standard squared loss or cross-entropy loss, we also demonstrate that our framework, including gradient calculation, extends to a more complex loss function involving the first derivative of the network. After developing the generic framework, we apply it to three specific network examples in chapter 4. We start with the Multilayer Perceptron (MLP), the simplest type of DNN, and show how to generate a gradient descent step for it. We then represent the Convolutional Neural Network (CNN), which contains more complicated input spaces, parameter spaces, and transformations at each layer. The CNN, however, still fits into the generic framework. The last structure that we consider is the Deep Auto-Encoder (DAE), which has parameters that are not completely independent at each layer. We are able to extend the generic framework to handle this case as well. In chapter 5, we use some of the results from the previous chapters to develop a framework for Recurrent Neural Networks (RNNs), the sequence-parsing DNN architecture. The parameters are shared across all layers of the network, and thus we require some additional machinery to describe RNNs. We describe a generic RNN first, and then the specific case of the vanilla RNN. We again compute gradients directly over inner product spaces.

The dataset we generated has two classes, plotted as red and blue points. Our goal is to train a machine learning classifier that predicts the correct class given the x- and y-coordinates. Note that the data is not linearly separable: we can’t draw a straight line that separates the two classes. This means that linear classifiers, such as logistic regression, won’t be able to fit the data unless you hand-engineer non-linear features (such as polynomials) that work well for the given dataset.
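As a quick sanity check (a minimal sketch, not part of the original tutorial), you could fit a plain linear classifier from scikit-learn on this data and see how a straight-line decision boundary fares; the seed value is an assumption and the exact accuracy will vary with it.

# Minimal sketch: a linear classifier on the moons data
import numpy as np
import sklearn.datasets
import sklearn.linear_model

np.random.seed(0)                      # seed value assumed
X, y = sklearn.datasets.make_moons(200, noise=0.20)
clf = sklearn.linear_model.LogisticRegression()
clf.fit(X, y)
print("Logistic regression training accuracy:", clf.score(X, y))
# Expect a mediocre score, since no straight line separates the two moons.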

Programming: you need to be able to program in order to build and test deep learning models; we recommend Python as the first choice for machine learning. You will also use the NumPy/SciPy libraries for scientific computing. Resource links follow (the stars in this article indicate difficulty level):

1. Introduction and motivation

In fact, that’s one of the major advantages of Neural Networks. You don’t need to worry about feature engineering. The hidden layer of a neural network will learn features for you.

Justin Johnson’s Python / NumPy / SciPy / Matplotlib tutorial for Stanford’s CS231n

2. Mathematical background

Neural Network Architecture

You can read this tutorial () to learn the basic concepts of neural networks, such as activation functions, feed-forward computation, and so on.

Because we want our network to output probabilities, the activation function for the output layer will be the softmax, which is simply a way to convert raw scores to probabilities. If you’re familiar with the logistic function, you can think of softmax as its generalization to multiple classes.
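As a quick illustration (a minimal sketch, not taken from the original post), softmax exponentiates the raw scores and normalizes them so they sum to 1; the scores below are made up.

# Minimal softmax illustration with made-up scores
import numpy as np

scores = np.array([[2.0, 1.0, 0.1]])   # hypothetical raw scores for 3 classes
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
print(probs)                           # roughly [[0.66, 0.24, 0.10]], sums to 1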

When you choose softmax as the output, you can use the cross-entropy loss (also known as negative log likelihood) as the loss function. More about loss functions can be found in the linked reference.
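Concretely, the cross-entropy loss averages the negative log-probability assigned to the correct class over the training set. The tiny sketch below (not from the original post, numbers made up) mirrors the Softmax.loss method implemented later.

# Cross-entropy (negative log likelihood) on made-up probabilities
import numpy as np

probs = np.array([[0.7, 0.3],      # predicted class probabilities per example
                  [0.2, 0.8]])
y = np.array([0, 1])               # correct class indices
correct_logprobs = -np.log(probs[range(len(y)), y])
loss = np.mean(correct_logprobs)   # average of -log(0.7) and -log(0.8)
print(loss)                        # about 0.29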

Scipy lecture notes – covers the most commonly used libraries in fair detail and also touches on some deeper technical topics

3. A generic representation of neural networks

Learning the Parameters

Learning the parameters for our network means finding parameters (such as W_1, b_1, W_2, b_2) that minimize the error on our training data (the loss function).

We can use gradient descent to find the minimum, and I will implement the most vanilla version of gradient descent, also called batch gradient descent, with a fixed learning rate. Variations such as SGD (stochastic gradient descent) or minibatch gradient descent typically perform better in practice. So if you are serious you’ll want to use one of these, and ideally you would also decay the learning rate over time.
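For reference, here is a minimal sketch (not part of the original implementation) of what a mini-batch update loop with a decaying learning rate might look like; compute_gradients, the batch size, and the decay schedule are all hypothetical placeholders.

# Hypothetical mini-batch gradient descent loop with learning-rate decay
import numpy as np

def sgd(params, X, y, compute_gradients, epochs=100, batch_size=32, lr0=0.01, decay=1e-3):
    n = X.shape[0]
    for epoch in range(epochs):
        lr = lr0 / (1.0 + decay * epoch)          # simple annealing schedule (assumed)
        idx = np.random.permutation(n)            # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            grads = compute_gradients(params, X[batch], y[batch])  # placeholder callback
            for key in params:
                params[key] -= lr * grads[key]    # vanilla SGD step on each parameter
    return params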

The key to the gradient descent method is how to calculate the gradient of the loss function with respect to the parameters. One approach is called backpropagation. You can learn more about it from the linked references.

The four main introductory tutorials

4. Specific network descriptions

Implementation

We start with the computation graph of the neural network.

[Figure: computation graph of the network]

In the computation graph, you can see that it contains three kinds of components (gates, layers and the output). There are two kinds of gates (multiply and add), and here we use a tanh layer and a softmax output.

Gates, layers and the output can all be seen as operation units of the computation graph, so each of them implements the local derivatives with respect to its inputs (we call this backward), and the chain rule is applied according to the computation graph. See the following figure for a nice explanation.

[Figure: backpropagation through an operation unit via the chain rule]

gate.py

import numpy as np

class MultiplyGate:
    def forward(self, W, X):
        return np.dot(X, W)

    def backward(self, W, X, dZ):
        dW = np.dot(np.transpose(X), dZ)
        dX = np.dot(dZ, np.transpose(W))
        return dW, dX

class AddGate:
    def forward(self, X, b):
        return X + b

    def backward(self, X, b, dZ):
        dX = dZ * np.ones_like(X)
        db = np.dot(np.ones((1, dZ.shape[0]), dtype=np.float64), dZ)
        return db, dX

layer.py

import numpy as np

class Sigmoid:
    def forward(self, X):
        return 1.0 / (1.0 + np.exp(-X))

    def backward(self, X, top_diff):
        output = self.forward(X)
        return (1.0 - output) * output * top_diff

class Tanh:
    def forward(self, X):
        return np.tanh(X)

    def backward(self, X, top_diff):
        output = self.forward(X)
        return (1.0 - np.square(output)) * top_diff

output.py

import numpy as np

class Softmax:
    def predict(self, X):
        exp_scores = np.exp(X)
        return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    def loss(self, X, y):
        num_examples = X.shape[0]
        probs = self.predict(X)
        correct_logprobs = -np.log(probs[range(num_examples), y])
        data_loss = np.sum(correct_logprobs)
        return 1. / num_examples * data_loss

    def diff(self, X, y):
        num_examples = X.shape[0]
        probs = self.predict(X)
        probs[range(num_examples), y] -= 1
        return probs

We can implement our neural network as a class Model and initialize the parameters in the __init__ function. You can pass the parameter layers_dim = [2, 3, 2], which represents a 2-dimensional input, one hidden layer of dimension 3, and a 2-dimensional output.

import numpy as np
from gate import MultiplyGate, AddGate
from layer import Tanh
from output import Softmax

class Model:
    def __init__(self, layers_dim):
        self.b = []
        self.W = []
        for i in range(len(layers_dim) - 1):
            # Scaled random initialization for the weights, random biases
            self.W.append(np.random.randn(layers_dim[i], layers_dim[i + 1]) / np.sqrt(layers_dim[i]))
            self.b.append(np.random.randn(layers_dim[i + 1]).reshape(1, layers_dim[i + 1]))

First let’s implement the loss function we defined above. It is just a forward propagation computation of our neural network. We use this to evaluate how well our model is doing:

    def calculate_loss(self, X, y):
        mulGate = MultiplyGate()
        addGate = AddGate()
        layer = Tanh()
        softmaxOutput = Softmax()

        input = X
        for i in range(len(self.W)):
            mul = mulGate.forward(self.W[i], input)
            add = addGate.forward(mul, self.b[i])
            input = layer.forward(add)
        return softmaxOutput.loss(input, y)

We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.

    def predict(self, X):
        mulGate = MultiplyGate()
        addGate = AddGate()
        layer = Tanh()
        softmaxOutput = Softmax()

        input = X
        for i in range(len(self.W)):
            mul = mulGate.forward(self.W[i], input)
            add = addGate.forward(mul, self.b[i])
            input = layer.forward(add)
        probs = softmaxOutput.predict(input)
        return np.argmax(probs, axis=1)

Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation algorithms we have learned above.

    def train(self, X, y, num_passes=20000, epsilon=0.01, reg_lambda=0.01, print_loss=False):
        mulGate = MultiplyGate()
        addGate = AddGate()
        layer = Tanh()
        softmaxOutput = Softmax()

        for epoch in range(num_passes):
            # Forward propagation
            input = X
            forward = [(None, None, input)]
            for i in range(len(self.W)):
                mul = mulGate.forward(self.W[i], input)
                add = addGate.forward(mul, self.b[i])
                input = layer.forward(add)
                forward.append((mul, add, input))

            # Back propagation
            dtanh = softmaxOutput.diff(forward[len(forward) - 1][2], y)
            for i in range(len(forward) - 1, 0, -1):
                dadd = layer.backward(forward[i][1], dtanh)
                db, dmul = addGate.backward(forward[i][0], self.b[i - 1], dadd)
                dW, dtanh = mulGate.backward(self.W[i - 1], forward[i - 1][2], dmul)
                # Add regularization terms (b1 and b2 don't have regularization terms)
                dW += reg_lambda * self.W[i - 1]
                # Gradient descent parameter update
                self.b[i - 1] += -epsilon * db
                self.W[i - 1] += -epsilon * dW

            if print_loss and epoch % 1000 == 0:
                print("Loss after iteration %i: %f" % (epoch, self.calculate_loss(X, y)))

If you have the basic prerequisites above, we suggest picking one or more of the following four main introductory online courses (stars indicate difficulty):

5. Recurrent neural networks (RNNs)

A network with a hidden layer of size 3

Let’s see what happens if we train a network with a hidden layer size of 3.

import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import mlnn
from utils import plot_decision_boundary

# Generate a dataset and plot it
np.random.seed(0)  # seed value assumed; the original omits it
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.Spectral)
plt.show()

layers_dim = [2, 3, 2]
model = mlnn.Model(layers_dim)
model.train(X, y, num_passes=20000, epsilon=0.01, reg_lambda=0.01, print_loss=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: model.predict(x), X, y)
plt.title("Decision Boundary for hidden layer size 3")
plt.show()

[Figure: decision boundary learned with a hidden layer of size 3]

This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.

The plot_decision_boundary function is taken from the referenced tutorial.

import matplotlib.pyplot as plt
import numpy as np

# Helper function to plot a decision boundary.
def plot_decision_boundary(pred_func, X, y):
    # Set min and max values and give it some padding
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
  1. Instead of batch gradient descent, use minibatch gradient descent to train the network. Minibatch gradient descent typically performs better in practice (more info).
  2. We used a fixed learning rate epsilon for gradient descent. Implement an annealing schedule for the gradient descent learning rate (more info).
  3. We used a tanh activation function for our hidden layer. Experiment with other activation functions (more info).
  4. Extend the network from two to three classes. You will need to generate an appropriate dataset for this.
  5. Try some other parameter update methods, like momentum update, Nesterov momentum, Adagrad, RMSprop and Adam (more info); a small sketch of a momentum update follows this list.
  6. Some other tricks for training neural networks can be found in the linked references, like dropout regularization, batch normalization, gradient checks and model ensembles.
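As a starting point for exercise 5, here is a minimal sketch (not from the original post) of a momentum update; the function name and the momentum coefficient are assumptions.

# Hypothetical momentum update for one parameter matrix W
import numpy as np

def momentum_step(W, dW, velocity, epsilon=0.01, mu=0.9):
    # mu is the momentum coefficient (value assumed); velocity has the same shape as W
    velocity = mu * velocity - epsilon * dW   # accumulate a decaying running gradient
    W = W + velocity                          # move along the accumulated direction
    return W, velocity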

Hugo Larochelle’s video course – a very popular deep learning video course on YouTube. It was recorded in 2013, but the content is not outdated; it explains the mathematics behind neural networks in great detail. Slides and related materials are linked.

6. Conclusions and future work


Stanford’s CS231n (Convolutional Neural Networks for Visual Recognition) – taught by Prof. Fei-Fei Li (who has since joined Google) together with Andrej Karpathy and Justin Johnson. It focuses on image processing but also covers most of the important concepts in deep learning. Videos (2016) and lecture notes are linked.

Michael Nielsen’s online book Neural Networks and Deep Learning is currently the most accessible textbook for learning neural networks. It does not cover all the important topics, but it contains many clear, concise explanations and provides implementation code for some of the basic concepts.

Deep Learning, co-authored by Ian Goodfellow, Yoshua Bengio and Aaron Courville, is currently the most comprehensive resource in the field, covering a broader range of material than any of the other courses.

Machine learning basics

Machine learning is the science, and also the art, of teaching computers to do things using data. It is a relatively mature field at the intersection of computer science and mathematics, and deep learning is only a small, emerging part of it, so understanding the concepts and tools of machine learning matters a great deal for learning deep learning well. Below are some important machine learning resources (the course descriptions that follow are kept in English):

Visual introduction to machine learning – decision trees

Andrew Ng’s course on machine learning, the most popular course on Coursera

Larochelle’s course doesn’t have separate introductory lectures for general machine learning, but all required concepts are defined and explained whenever needed.

1、Training and testing the models (kNN)

2、Linear classification (SVM)

3、Optimization (stochastic gradient descent)

4、Machine learning basics

5、Principal Component Analysis explained visually

6、How to Use t-SNE Effectively

Programming resources for machine learning: most of the popular machine learning algorithms are implemented in the Python library Scikit-learn, and implementing the algorithms from scratch helps us better understand how machine learning works. Related resources are below (a short Scikit-learn sketch follows these two items):

1、Practical Machine Learning Tutorial with Python covers linear regression, k-nearest-neighbors and support vector machines. First it shows how to use them from scikit-learn, then implements the algorithms from scratch.

2、Andrew Ng’s course on Coursera has many assignments in Octave language. The same algorithms can be implemented in Python.
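For flavor, here is a minimal sketch (not taken from either of the tutorials above) of training two of the classifiers mentioned earlier with Scikit-learn; the dataset and hyperparameters are assumptions.

# Minimal Scikit-learn sketch: kNN and an SVM on the moons dataset
import sklearn.datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = sklearn.datasets.make_moons(200, noise=0.20)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # k value assumed
svm = SVC(kernel='rbf').fit(X, y)                     # RBF kernel handles the non-linearity
print("kNN training accuracy:", knn.score(X, y))
print("SVM training accuracy:", svm.score(X, y))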

Neural network basics

Neural networks are powerful machine learning algorithms, and they are also the foundation of deep learning:

A Visual and Interactive Guide to the Basics of Neural Networks – shows how simple neural networks can do linear regression

1、Feedforward neural network

2、Training neural networks (up to 2.7)

3、Backpropagation

4、Architecture of neural networks

5、Using neural nets to recognize handwritten digits

6、How the backpropagation algorithm works

7、A visual proof that neural nets can compute any function

8、Deep feedforward networks

Yes you should understand backprop – explains why it is important to implement backpropagation once from scratch

Calculus on computational graphs: backpropagation

Play with neural networks!

Hands-on neural network tutorials

1、Implementing a softmax classifier and a simple neural network in pure Python/NumPy – Jupyter notebook available

2、Andrej Karpathy implements backpropagation in Javascript in his Hacker’s guide to Neural Networks.

3、Implementing a neural network from scratch in Python

Improving the way neural networks learn

Training a neural network is not easy. Often the model does not learn at all (underfitting); at other times it memorizes what it was fed and cannot generalize to new data (overfitting). There are many ways to address these problems; some are covered in the resources below.

Recommended tutorials:

2.8-2.11. Regularization, parameter initialization etc.

7.5. Dropout

6 (first half). Setting up the data and loss

  1. Improving the way neural networks learn

  2. Why are deep neural networks hard to train?

  3. Regularization for deep learning

  4. Optimization for training deep models

  5. Practical methodology

ConvNetJS Trainer demo on MNIST – visualizes the performance of different optimization algorithms

An overview of gradient descent optimization algorithms

Neural Networks, Manifolds, and Topology

Popular mainstream frameworks

Many deep learning algorithms are optimized for the latest computer hardware, and most frameworks provide Python interfaces (except Torch, which requires Lua). Once you understand how the basic deep learning algorithms are implemented, it is time to choose a framework and get to work (see also the CTOCIO article on the six most popular open-source deep learning tools of 2016):

Theano provides low-level primitives for constructing all kinds of neural networks. It is maintained by a machine learning group at University of Montreal. See also: Speeding up your neural network with Theano and the GPU – Jupyter notebook available

TensorFlow is another low-level framework. Its architecture is similar to Theano. It is maintained by the Google Brain team.

Torch is a popular framework that uses Lua language. The main disadvantage is that Lua’s community is not as large as Python’s. Torch is mostly maintained by Facebook and Twitter.

There are also higher-level frameworks that run on top of these:

Lasagne is a higher level framework built on top of Theano. It provides simple functions to create large networks with few lines of code.

Keras is a higher level framework that works on top of either Theano or TensorFlow.
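As a taste of how little code a higher-level framework needs, here is a minimal sketch of the same [2, 3, 2] network in Keras (assumptions: the Sequential API of the Theano/TensorFlow-backend era, a tanh hidden layer and softmax output mirroring the network built above).

# Minimal Keras sketch of a [2, 3, 2] network (API and hyperparameters assumed)
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(3, activation='tanh', input_dim=2))    # hidden layer of size 3
model.add(Dense(2, activation='softmax'))              # 2-class softmax output
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(X, y, epochs=100)                          # X, y as generated earlier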

If you have trouble choosing a framework, see Lecture 12 of Stanford’s CS231n.

Convolutional neural networks

Convolutional networks (CNNs) are a particular kind of neural network that uses some clever ideas to greatly improve learning speed and quality. Convolutional networks sparked a revolution in computer vision and are also widely used in areas such as speech recognition and text classification.

Recommended tutorials:

  1. Computer vision (up to 9.9)

6 (second half). Intro to ConvNets

  1. Convolutional neural networks

  2. Localization and detection

  3. Visualization, Deep dream, Neural style, Adversarial examples

  4. Image segmentation (up to 38:00) includes upconvolutions

  5. Deep learning

  6. Convolutional networks

Image Kernels explained visually – shows how convolutional filters (also known as image kernels) transform the image

ConvNetJS MNIST demo – live visualization of a convolutional network right in the browser

Conv Nets: A Modular Perspective

Understanding Convolutions

Understanding Convolutional neural networks for NLP

Implementing and applying convolutional networks in the frameworks

All of the major frameworks support convolutional networks; code written with the higher-level libraries is usually more readable.

Theano: Convolutional Neural Networks (LeNet)

Using Lasagne for training Deep Neural Networks

Detecting diabetic retinopathy in eye images – a blog post by one of the best performers of Diabetic retinopathy detection contest in Kaggle. Includes a good example of data augmentation.

Face recognition for right whales using deep learning – the authors used different ConvNets for localization and classification. Code and models are available.

Tensorflow: Convolutional neural networks for image classification on CIFAR-10 dataset

Implementing a CNN for text classification in Tensorflow

DeepDream implementation in TensorFlow

92.45% on CIFAR-10 in Torch – implements famous VGGNet network with batch normalization layers in Torch

Training and investigating Residual Nets – Residual networks perform very well on image classification tasks. Two researchers from Facebook and CornellTech implemented these networks in Torch

ConvNets in practice – lots of practical tips on using convolutional networks including data augmentation, transfer learning, fast implementations of convolution operation

Recurrent neural networks

Recurrent networks (RNNs) are designed for problems involving sequential data (such as text, stock prices, genomes, or sensor readings). They are typically applied to sentence classification (for example sentiment analysis) and speech recognition, and are also suited to text generation and even image generation.

Tutorials:

The Unreasonable Effectiveness of Recurrent Neural Networks – describes how RNNs can generate text, math papers and C code

Hugo Larochelle’s course doesn’t cover recurrent neural networks (although it covers many topics that RNNs are used for). We suggest watching Recurrent Neural Nets and LSTMs by Nando de Freitas to fill the gap

  1. Recurrent Neural Networks, Image Captioning, LSTM

  2. Soft attention (starting at 38:00)

Michael Nielsen’s book stops at convolutional networks. In the Other approaches to deep neural nets section there is just a brief review of simple recurrent networks and LSTMs.

  1. Sequence Modeling: Recurrent and Recursive Nets

Recurrent neural networks from Stanford’s CS224d (2016) by Richard Socher

Understanding LSTM Networks

Implementing and applying recurrent networks in the frameworks

Theano: Recurrent Neural Networks with Word Embeddings

Theano: LSTM Networks for Sentiment Analysis

Implementing a RNN with Python, Numpy and Theano

Lasagne implementation of Karpathy’s char-rnn

Combining CNN and RNN for spoken language identification in Lasagne

Automatic transliteration with LSTM using Lasagne

Tensorflow: Recurrent Neural Networks for language modeling

Recurrent Neural Networks in Tensorflow

Understanding and Implementing Deepmind’s DRAW Model

LSTM implementation explained

Torch implementation of Karpathy’s char-rnn

Autoencoders

Autoencoders are neural networks designed for unsupervised learning, that is, for cases where the data is unlabeled. They can be used for dimensionality reduction, for pre-training other neural networks, and for data generation. The resources below also include a variant of autoencoders that integrates with probabilistic graphical models; the mathematics behind it is introduced in the next section, on probabilistic graphical models.

Recommended tutorials:

  1. Autoencoder

7.6. Deep autoencoder

  1. Videos and unsupervised learning (from 32:29) – this video also touches on the exciting topic of generative adversarial networks.

  2. Autoencoders

ConvNetJS Denoising Autoencoder demo

Karol Gregor on Variational Autoencoders and Image Generation

Implementing autoencoders

Most autoencoders are quite easy to implement, but we still suggest starting with a simple one. Resources:

Theano: Denoising autoencoders

Diving Into TensorFlow With Stacked Autoencoders

Variational Autoencoder in TensorFlow

Training Autoencoders on ImageNet Using Torch 7

Building autoencoders in Keras

Probabilistic graphical models

Probabilistic graphical models (PGMs) form a branch at the intersection of statistics and machine learning. There are many books and courses on PGMs; the resources collected below focus on their use in deep learning. Hugo Larochelle's course introduces a few well-known models, while the Deep Learning book devotes four full chapters to the topic and introduces more than a dozen models in its final chapter. This area requires a good deal of mathematics:

  1. Conditional Random Fields

  2. Training CRFs

  3. Restricted Boltzmann machine

7.7-7.9. Deep Belief Networks

9.10. Convolutional RBM

  1. Linear Factor Models – first steps towards probabilistic models

  2. Structured Probabilistic Models for Deep Learning

  3. Monte Carlo Methods

  4. Confronting the Partition Function

  5. Approximate Inference

  6. Deep Generative Models – includes Boltzmann machines (RBM, DBN, …), variational autoencoders, generative adversarial networks, autoregressive models etc.

Generative models – a blog post on variational autoencoders, generative adversarial networks and their improvements by OpenAI.

The Neural Network Zoo attempts to organize lots of architectures using a single scheme.

Implementing probabilistic graphical models

The higher-level frameworks (Lasagne, Keras) do not support PGMs, but there is plenty of code available for Theano, TensorFlow and Torch.

Restricted Boltzmann Machines in Theano

Deep Belief Networks in Theano

Generating Large Images from Latent Vectors – uses a combination of variational autoencoders and generative adversarial networks.

Image Completion with Deep Learning in TensorFlow – another application of generative adversarial networks.

Generating Faces with Torch – Torch implementation of Generative Adversarial Networks

Notable papers, videos and forums

Deep learning papers reading roadmap – a big list of the important papers in deep learning.

Arxiv Sanity Preserver – provides a nice interface for browsing papers on arXiv.

Videolectures.net – contains many videos on advanced deep learning topics.

/r/MachineLearning – a very active subreddit where almost every important new paper gets discussed.

Link: www.ctocio.com/ccnews/23027.html


