
            Course Catalog: Understanding Deep Neural Networks Training
            Course Outline:

            Understanding Deep Neural Networks Training

            Part 1 – Deep Learning and DNN Concepts

            Introduction AI, Machine Learning & Deep Learning

            History, basic concepts, and common applications of artificial intelligence, far from the fantasies surrounding this field

            Collective Intelligence: aggregating knowledge shared by many virtual agents

            Genetic algorithms: to evolve a population of virtual agents by selection

            Classical Machine Learning: definition.

            Types of tasks: supervised learning, unsupervised learning, reinforcement learning

            Types of actions: classification, regression, clustering, density estimation, reduction of dimensionality

            Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree

            Machine Learning vs Deep Learning: problems on which Machine Learning remains the state of the art today (Random Forests & XGBoost)

            Basic Concepts of a Neural Network (Application: multi-layer perceptron)

            Review of mathematical fundamentals.

            Definition of a neural network: classical architecture, activation functions, weighting of previous activations, depth of a network

            Definition of neural network training: cost functions, back-propagation, stochastic gradient descent, maximum likelihood (a minimal worked sketch follows at the end of this subsection).

            Modeling of a neural network: modeling input and output data according to the type of problem (regression, classification, ...). Curse of dimensionality.

            Distinction between multi-feature data and signals. Choice of a cost function according to the data.

            Approximation of a function by a neural network: presentation and examples

            Approximation of a distribution by a neural network: presentation and examples

            Data Augmentation: how to balance a dataset

            Generalization of the results of a neural network.

            Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization

            Optimization and convergence algorithms
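
            The sketch below is a minimal, hypothetical illustration of the training concepts above (cost function, back-propagation, stochastic gradient descent) on synthetic data; the network size, learning rate and data are arbitrary assumptions, not course material.

                import numpy as np

                # Minimal illustrative sketch: one-hidden-layer network trained with
                # stochastic gradient descent and back-propagation on toy data.
                rng = np.random.default_rng(0)
                X = rng.normal(size=(256, 2))                              # toy inputs
                y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy binary labels

                W1, b1 = rng.normal(scale=0.1, size=(2, 8)), np.zeros(8)
                W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)
                lr = 0.1

                def sigmoid(z):
                    return 1.0 / (1.0 + np.exp(-z))

                for step in range(2000):
                    idx = rng.integers(0, len(X), size=32)                 # mini-batch (stochastic)
                    xb, yb = X[idx], y[idx]
                    h = np.tanh(xb @ W1 + b1)                              # forward pass
                    p = sigmoid(h @ W2 + b2)
                    # cost: binary cross-entropy (negative log-likelihood)
                    cost = -np.mean(yb * np.log(p + 1e-9) + (1 - yb) * np.log(1 - p + 1e-9))
                    # back-propagation of the cost gradient
                    dz2 = (p - yb) / len(xb)
                    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
                    dh = (dz2 @ W2.T) * (1.0 - h ** 2)                     # tanh derivative
                    dW1, db1 = xb.T @ dh, dh.sum(axis=0)
                    # stochastic gradient descent update
                    W1 -= lr * dW1; b1 -= lr * db1
                    W2 -= lr * dW2; b2 -= lr * db2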

            Standard ML / DL Tools

            For each tool, a brief presentation of its advantages, disadvantages, position in the ecosystem, and typical use.

            Data management tools: Apache Spark, Apache Hadoop

            Machine Learning tools: NumPy, SciPy, scikit-learn

            High-level DL frameworks: PyTorch, Keras, Lasagne

            Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow
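
            As a hedged illustration of how a high-level framework expresses a model, a minimal Keras sketch; the layer sizes, optimizer and random placeholder data are assumptions for the example only.

                import numpy as np
                from tensorflow import keras

                # Illustrative Keras usage: define, compile and fit a small classifier.
                model = keras.Sequential([
                    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
                    keras.layers.Dense(3, activation="softmax"),
                ])
                model.compile(optimizer="adam",
                              loss="sparse_categorical_crossentropy",
                              metrics=["accuracy"])

                X = np.random.rand(100, 20)             # placeholder data
                y = np.random.randint(0, 3, size=100)   # placeholder labels
                model.fit(X, y, epochs=5, batch_size=16, verbose=0)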

            Convolutional Neural Networks (CNN).

            Presentation of the CNNs: fundamental principles and applications

            Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions.

            Presentation of the CNN architectures that brought the state of the art in image classification: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their broader applications (1x1 convolutions, residual connections)

            Use of an attention model.

            Application to a common classification case (text or image)

            CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upsampling feature maps for image generation.
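
            A minimal Keras sketch of the convolution vocabulary covered above (kernel, padding, stride, pooling, feature maps); the input shape, filter counts and class count are placeholders.

                from tensorflow import keras

                # Illustrative CNN: convolution (kernel, padding, stride), pooling, feature maps.
                model = keras.Sequential([
                    keras.layers.Conv2D(16, kernel_size=3, strides=1, padding="same",
                                        activation="relu", input_shape=(32, 32, 3)),
                    keras.layers.MaxPooling2D(pool_size=2),          # halves spatial resolution
                    keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
                    keras.layers.GlobalAveragePooling2D(),
                    keras.layers.Dense(10, activation="softmax"),    # e.g. 10 image classes
                ])
                model.summary()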

            Recurrent Neural Networks (RNN).

            Presentation of RNNs: fundamental principles and applications.

            Basic operation of an RNN: hidden activation, back-propagation through time, unfolded version.

            Evolution towards Gated Recurrent Units (GRUs) and LSTMs (Long Short-Term Memory).

            Presentation of the different states and the evolutions brought by these architectures

            Convergence and vanishing gradient problems

            Classical architectures: prediction of a time series, classification, ...

            RNN Encoder-Decoder architectures. Use of an attention model.

            NLP applications: word / character encoding, translation.

            Video Applications: prediction of the next generated image of a video sequence.
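
            A minimal Keras sketch of a recurrent model for time-series prediction, as discussed above; the sequence length, feature count and loss are illustrative assumptions.

                from tensorflow import keras

                # Illustrative LSTM: a hidden state carried across the time steps of a sequence.
                model = keras.Sequential([
                    keras.layers.LSTM(64, input_shape=(50, 8)),   # 50 time steps, 8 features each
                    keras.layers.Dense(1),                        # e.g. next-value prediction
                ])
                model.compile(optimizer="adam", loss="mse")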

            Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN).

            Presentation of generative models and their link with CNNs

            Auto-encoder: reduction of dimensionality and limited generation

            Variational Auto-encoder: generative model and approximation of a data distribution. Definition and use of the latent space. Reparameterization trick (see the sketch at the end of this subsection). Applications and observed limits.

            Generative Adversarial Networks: Fundamentals.

            Dual-network architecture (generator and discriminator) with alternating training, available cost functions.

            Convergence of a GAN and difficulties encountered.

            Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

            Applications for the generation of images or photographs, text generation, super-resolution.
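
            A small NumPy sketch of the reparameterization trick and the KL term used by a VAE, as referenced above; the mean and log-variance values are placeholders standing in for an encoder's outputs.

                import numpy as np

                # Illustrative reparameterization trick (placeholder values, not a full VAE):
                # sampling z = mu + sigma * eps keeps the sampling step differentiable with
                # respect to the encoder outputs mu and log_var.
                rng = np.random.default_rng(0)
                mu = np.array([0.2, -0.5])                 # assumed encoder mean
                log_var = np.array([-1.0, 0.3])            # assumed encoder log-variance
                eps = rng.standard_normal(mu.shape)        # noise drawn outside the network
                z = mu + np.exp(0.5 * log_var) * eps       # latent sample fed to the decoder

                # KL divergence between N(mu, sigma^2) and N(0, 1): the VAE regularization term
                kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))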

            Deep Reinforcement Learning.

            Presentation of reinforcement learning: control of an agent in an environment defined by a state and possible actions

            Use of a neural network to approximate the state function

            Deep Q Learning: experience replay, and application to the control of a video game.

            Optimization of the learning policy. On-policy vs. off-policy. Actor-critic architecture. A3C.

            Applications: control of a single video game or a digital system.
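
            A minimal, hypothetical sketch of an experience-replay buffer for Deep Q-Learning; q_network is assumed to be any callable returning per-action Q-values, and the buffer size, batch size and discount factor are arbitrary.

                import random
                from collections import deque
                import numpy as np

                # Illustrative experience replay for Deep Q-Learning; q_network is an
                # assumed callable (e.g. a neural network) returning per-action Q-values.
                buffer = deque(maxlen=10000)
                gamma = 0.99                                       # discount factor (arbitrary)

                def store(state, action, reward, next_state, done):
                    buffer.append((state, action, reward, next_state, done))

                def sample_targets(q_network, batch_size=32):
                    batch = random.sample(buffer, batch_size)      # decorrelates transitions
                    targets = []
                    for state, action, reward, next_state, done in batch:
                        target = reward if done else reward + gamma * np.max(q_network(next_state))
                        targets.append((state, action, target))    # regression targets for training
                    return targets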

            Part 2 – Theano for Deep Learning

            Theano Basics
            Introduction

            Installation and Configuration

            Theano Functions

            inputs, outputs, updates, givens
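
            A minimal Theano sketch showing the four arguments above; the shared variables and values are placeholders for illustration.

                import numpy as np
                import theano
                import theano.tensor as T

                # Illustrative theano.function showing inputs, outputs, updates and givens.
                data = theano.shared(np.arange(10.0), name="data")   # placeholder dataset
                i = T.iscalar("i")
                x = T.dscalar("x")
                acc = theano.shared(0.0, name="acc")

                f = theano.function(
                    inputs=[i],
                    outputs=x * 2,                 # value returned by the compiled function
                    updates=[(acc, acc + x)],      # shared variable modified at each call
                    givens={x: data[i]},           # x is substituted by data[i], not fed directly
                )
                print(f(3), acc.get_value())       # -> 6.0 3.0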

            Training and Optimization of a neural network using Theano
            Neural Network Modeling

            Logistic Regression

            Hidden Layers

            Training a network

            Computing and Classification

            Optimization

            Log Loss

            Testing the model
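
            A minimal logistic-regression training sketch in Theano covering log loss, gradient updates and testing; the data, learning rate and iteration count are placeholders.

                import numpy as np
                import theano
                import theano.tensor as T

                # Illustrative logistic regression in Theano: log loss and gradient-descent updates.
                X = T.dmatrix("X")
                y = T.dvector("y")
                w = theano.shared(np.zeros(2), name="w")
                b = theano.shared(0.0, name="b")

                p = T.nnet.sigmoid(T.dot(X, w) + b)
                loss = -T.mean(y * T.log(p) + (1 - y) * T.log(1 - p))   # log loss
                gw, gb = T.grad(loss, [w, b])

                train = theano.function([X, y], loss,
                                        updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])
                predict = theano.function([X], p > 0.5)

                Xd = np.random.rand(20, 2)                 # placeholder data
                yd = (Xd.sum(axis=1) > 1.0).astype(float)  # placeholder labels
                for _ in range(100):
                    train(Xd, yd)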

            Part 3 – DNN using Tensorflow

            TensorFlow Basics
            Creation, Initializing, Saving, and Restoring TensorFlow variables

            Feeding, Reading and Preloading TensorFlow Data

            How to use TensorFlow infrastructure to train models at scale

            Visualizing and Evaluating models with TensorBoard
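
            A minimal TF1-style sketch of creating, initializing, saving and restoring variables; the variable shapes and checkpoint path are placeholders (in TensorFlow 2 this API lives under tf.compat.v1).

                import tensorflow as tf

                # Illustrative TF1-style variable creation, initialization, saving and restoring.
                w = tf.Variable(tf.zeros([2, 3]), name="weights")
                b = tf.Variable(tf.zeros([3]), name="bias")
                init = tf.global_variables_initializer()
                saver = tf.train.Saver()

                with tf.Session() as sess:
                    sess.run(init)
                    save_path = saver.save(sess, "/tmp/model.ckpt")   # placeholder path

                with tf.Session() as sess:
                    saver.restore(sess, "/tmp/model.ckpt")            # restores variable values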

            TensorFlow Mechanics
            Prepare the Data

            Download

            Inputs and Placeholders

            Build the Graph

            Inference

            Loss

            Training

            Train the Model

            The Graph

            The Session

            Train Loop

            Evaluate the Model

            Build the Eval Graph

            Eval Output
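
            A minimal TF1-style sketch tying these steps together (placeholders, inference, loss, training op, session loop); the linear model and synthetic data are assumptions for the example.

                import numpy as np
                import tensorflow as tf

                # Illustrative TF1 mechanics: placeholders, inference, loss, train op, session loop.
                x = tf.placeholder(tf.float32, shape=[None, 1])
                y = tf.placeholder(tf.float32, shape=[None, 1])

                w = tf.Variable(tf.zeros([1, 1]))
                b = tf.Variable(tf.zeros([1]))
                pred = tf.matmul(x, w) + b                    # inference (graph building)
                loss = tf.reduce_mean(tf.square(pred - y))    # loss
                train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

                data_x = np.random.rand(100, 1)               # placeholder data
                data_y = 3.0 * data_x + 1.0

                with tf.Session() as sess:
                    sess.run(tf.global_variables_initializer())
                    for step in range(200):                   # train loop
                        _, l = sess.run([train_op, loss], feed_dict={x: data_x, y: data_y})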

            The Perceptron
            Activation functions

            The perceptron learning algorithm

            Binary classification with the perceptron

            Document classification with the perceptron

            Limitations of the perceptron
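
            A minimal NumPy sketch of the perceptron learning rule on linearly separable toy data; labels are encoded as -1/+1 and the epoch count is arbitrary.

                import numpy as np

                # Illustrative perceptron learning rule for binary classification (labels -1/+1).
                rng = np.random.default_rng(0)
                X = rng.normal(size=(50, 2))
                y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # linearly separable toy labels

                w = np.zeros(2)
                b = 0.0
                for epoch in range(10):
                    for xi, yi in zip(X, y):
                        if yi * (np.dot(w, xi) + b) <= 0:     # misclassified: update weights
                            w += yi * xi
                            b += yi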

            From the Perceptron to Support Vector Machines
            Kernels and the kernel trick

            Maximum margin classification and support vectors
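
            A minimal scikit-learn sketch of the kernel trick: an RBF-kernel SVM separating concentric circles that no linear boundary can; the dataset and hyperparameters are placeholders.

                from sklearn.datasets import make_circles
                from sklearn.svm import SVC

                # Illustrative kernel trick: the RBF kernel handles a non-linearly-separable dataset.
                X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
                clf = SVC(kernel="rbf", C=1.0)
                clf.fit(X, y)
                print(clf.n_support_)   # number of support vectors per class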

            Artificial Neural Networks
            Nonlinear decision boundaries

            Feedforward and feedback artificial neural networks

            Multilayer perceptrons

            Minimizing the cost function

            Forward propagation

            Back propagation

            Improving the way neural networks learn
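
            A minimal scikit-learn sketch of a multilayer perceptron learning a nonlinear (XOR-like) decision boundary; the hidden layer size and solver are illustrative choices.

                import numpy as np
                from sklearn.neural_network import MLPClassifier

                # Illustrative multilayer perceptron learning a nonlinear (XOR) boundary.
                X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
                y = np.array([0, 1, 1, 0])
                clf = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                                    solver="lbfgs", max_iter=5000, random_state=0)
                clf.fit(X, y)
                print(clf.predict(X))   # typically [0 1 1 0] once training has converged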

            Convolutional Neural Networks
            Goals

            Model Architecture

            Principles

            Code Organization

            Launching and Training the Model

            Evaluating a Model
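
            A minimal low-level TensorFlow 1.x sketch of a convolutional layer like those used in this part; the kernel size, filter count and input shape are placeholders.

                import tensorflow as tf

                # Illustrative low-level TF1 convolution: a 5x5 kernel producing 32 feature maps.
                images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
                kernel = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
                bias = tf.Variable(tf.zeros([32]))

                conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
                relu = tf.nn.relu(conv + bias)
                pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1],
                                      strides=[1, 2, 2, 1], padding="SAME")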

            Basic introduction to the modules below (a brief overview to be provided based on time availability):

            TensorFlow – Advanced Usage

            Threading and Queues

            Distributed TensorFlow

            Writing Documentation and Sharing your Model

            Customizing Data Readers

            Manipulating TensorFlow Model Files

            TensorFlow Serving

            Introduction

            Basic Serving Tutorial

            Advanced Serving Tutorial

            Serving Inception Model Tutorial
