In this tutorial, we tackle three practical aspects of training deep learning models: using GPUs effectively, handling moderately large datasets, and classifying videos.
We start by building an understanding of deep convolutional neural networks (ConvNets) and stochastic gradient descent (SGD) optimization.
Then comes a short introduction to Torch and Lua, with a walkthrough of the neural network and optimization packages. After that, the session becomes completely hands-on (with instructor guidance). The data comes from the UCF Action Recognition dataset, which contains videos of humans performing actions such as brushing their teeth and playing the flute.
By the end of the hands-on session, you will be able to build a deep convolutional neural network that can recognize human actions with reasonable accuracy.
The session concludes by showing how to use the same pipeline for audio classification, image classification and text classification with only a few minor changes.
The session is driven using Torch-7, Amazon EC2 GPU instances, and IPython notebooks.
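As a warm-up for the optimization part, here is a minimal NumPy sketch of minibatch SGD on a toy linear model. The tutorial itself uses Torch's training utilities, so the data, model, and hyper-parameters below are purely illustrative.

```python
import numpy as np

# Toy data: 1,000 examples, 20 features, linear targets plus noise.
rng = np.random.RandomState(0)
X = rng.randn(1000, 20)
true_w = rng.randn(20)
y = X @ true_w + 0.1 * rng.randn(1000)

w = np.zeros(20)           # model parameters
lr, batch_size = 0.05, 32  # learning rate and minibatch size

for epoch in range(20):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        grad = 2.0 * xb.T @ (xb @ w - yb) / len(idx)  # gradient of mean squared error
        w -= lr * grad                                # SGD update
    mse = np.mean((X @ w - y) ** 2)
print("final training MSE: %.4f" % mse)
```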
In this session, we will introduce the Julia language by describing several ways to implement bandit algorithms in Julia. No prior experience with either Julia or bandit algorithms will be assumed, but participants will need to have a solid foundation in basic programming techniques. Over the course of the session, we'll see how Julia's unique type system and function call rules allow us to implement bandit algorithms that are both highly generic and highly efficient. Using Julia will make it easy to develop a Monte Carlo platform for testing bandit algorithms before we deploy them in production.
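The session's code is in Julia; as a language-agnostic preview of the kind of simulation involved, the sketch below runs an epsilon-greedy bandit over Bernoulli arms and estimates its average reward by Monte Carlo. The arm probabilities and hyper-parameters are made up for illustration.

```python
import random

def simulate_epsilon_greedy(arm_probs, epsilon=0.1, horizon=1000, n_sims=500):
    """Monte Carlo estimate of the average reward earned by epsilon-greedy
    on Bernoulli arms with the given success probabilities."""
    total = 0.0
    for _ in range(n_sims):
        counts = [0] * len(arm_probs)    # pulls per arm
        values = [0.0] * len(arm_probs)  # running mean reward per arm
        for _ in range(horizon):
            if random.random() < epsilon:
                arm = random.randrange(len(arm_probs))                     # explore
            else:
                arm = max(range(len(arm_probs)), key=lambda a: values[a])  # exploit
            reward = 1.0 if random.random() < arm_probs[arm] else 0.0
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
            total += reward
    return total / (n_sims * horizon)

print(simulate_epsilon_greedy([0.1, 0.2, 0.5]))
```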
Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It enables transparent use of the GPU, efficient symbolic differentiation, expression optimization for speed and numerical stability, and dynamic C code generation. Theano is a mature project that is well tested and contains self-verification mechanisms.
This tutorial will demonstrate how to use Theano, and how to implement some machine learning models: MLP and LSTM (for sentiment analysis).
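To give a flavor of the workflow before the tutorial, here is a minimal sketch of defining a symbolic expression in Theano, taking its gradient, and compiling a training step. The tiny logistic-regression model and random data are illustrative only, not the tutorial's MLP or LSTM.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs for a tiny logistic-regression-style model.
x = T.dmatrix('x')
t = T.dvector('t')
w = theano.shared(np.zeros(3), name='w')

p = T.nnet.sigmoid(T.dot(x, w))                         # predicted probabilities
loss = -T.mean(t * T.log(p) + (1 - t) * T.log(1 - p))   # cross-entropy
grad_w = T.grad(loss, w)                                 # symbolic differentiation

# Compile one training step; Theano optimizes the graph and generates C code.
train = theano.function([x, t], loss, updates=[(w, w - 0.5 * grad_w)])

data = np.random.randn(100, 3)
labels = (data[:, 0] > 0).astype('float64')
for _ in range(50):
    train(data, labels)
```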
In this session, we give a beginner-friendly explanation of deep neural networks: what they are, what they do, and how they work. The task of designing, training, and fine-tuning deep networks can involve a lot of expert knowledge. To make it easier to get started, we demonstrate building deep learning classifiers on top of features extracted from pre-trained models, which we call deep features. These features can be learned on one dataset for one task and then used to obtain good predictions on a different task, on a different dataset. We demonstrate these tasks using GraphLab Create, a machine learning platform that lets you build predictive applications fast. Familiarity with Python is recommended for this session.
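To make the deep-features idea concrete, here is a small sketch of the workflow: features come from a model trained elsewhere, and only a simple classifier is fit for the new task. The session itself uses GraphLab Create; the fixed random projection below is just a stand-in for a real pre-trained network, and the names in it are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Stand-in for a network pre-trained on a different task: a fixed, random
# nonlinear projection. In practice you would take activations from a late
# layer of a real pre-trained ConvNet.
W_pretrained = rng.randn(256, 64)

def extract_deep_features(raw_inputs):
    """Hypothetical helper: map raw inputs to 'deep feature' vectors."""
    return np.maximum(raw_inputs @ W_pretrained.T, 0)   # ReLU of a fixed layer

# New task, new data: train only a simple classifier on top of the features.
X_new_task = rng.randn(500, 64)
y_new_task = (X_new_task.sum(axis=1) > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(extract_deep_features(X_new_task), y_new_task)
print("accuracy on the new task: %.3f" % clf.score(extract_deep_features(X_new_task), y_new_task))
```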
This workshop will cover ways of parallelizing your machine learning work by taking advantage of multi-core machines. We will start with a conceptual overview of parallel programming to develop an intuition about when and how it can be applied. We'll cover general parallel programming capabilities in Julia, then demonstrate how to apply them to a variety of common machine learning techniques. We'll conclude with a brief overview of analogous capabilities in R and Python.
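The workshop's demonstrations are in Julia; as a taste of the analogous capability in Python mentioned at the end, here is a sketch that farms out independent model-evaluation tasks to a process pool. The model and parameter grid are arbitrary examples.

```python
from multiprocessing import Pool

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def score_forest(n_trees):
    """Train and evaluate one candidate model; independent tasks parallelize cleanly."""
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    return n_trees, cross_val_score(model, X, y, cv=3).mean()

if __name__ == "__main__":
    with Pool(processes=4) as pool:                 # one worker per core
        results = pool.map(score_forest, [10, 50, 100, 200])
    for n_trees, acc in results:
        print(n_trees, round(acc, 3))
```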
In this session, we introduce the H2O data science platform. We will explain its scalable in-memory architecture and design principles and focus on the implementation of distributed deep learning in H2O. Advanced features such as adaptive learning rates, various forms of regularization, automatic data transformations, checkpointing, grid-search, cross-validation and auto-tuning turn multi-layer neural networks of the past into powerful, easy-to-use predictive analytics tools accessible to everyone. We will present a broad range of use cases and live demos that include world-record deep learning models, anomaly detection tools and approaches for Kaggle data science competitions. We also demonstrate the applicability of H2O in enterprise environments for real-world customer production use cases.
By the end of the hands-on session, attendees will have learned to perform end-to-end data science workflows with H2O using both the easy-to-use web interface and the flexible R interface. We will cover data ingest, basic feature engineering, feature selection, hyper-parameter optimization with N-fold cross-validation, multi-model scoring, and taking models into production. We will train supervised and unsupervised methods on realistic datasets. With best-of-breed machine learning algorithms such as elastic net, random forest, gradient boosting, and deep learning, you will be able to create your own smart applications.
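For readers who prefer a programmatic interface, a rough sketch of the same kind of workflow via H2O's Python module is shown below (the session itself uses the web and R interfaces). The file name, column names, and model settings are assumptions for illustration.

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()  # starts (or connects to) a local H2O cluster

# Hypothetical CSV with a binary "label" column; replace with your own data.
frame = h2o.import_file("train.csv")
frame["label"] = frame["label"].asfactor()           # treat the target as categorical
train, valid = frame.split_frame(ratios=[0.8], seed=42)

model = H2ODeepLearningEstimator(
    hidden=[200, 200],        # two hidden layers
    epochs=10,
    nfolds=5,                 # N-fold cross-validation
)
model.train(x=[c for c in frame.columns if c != "label"],
            y="label", training_frame=train, validation_frame=valid)
print(model.model_performance(valid).auc())
```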
Recurrent Neural Networks hold great promise as general sequence learning algorithms. As such, they are a very promising tool for text analysis. However, outside of very specific use cases such as handwriting recognition and, recently, machine translation, they have not seen widespread use. Why has this been the case?
In this workshop, we will first introduce RNNs as a concept. Then we will sketch how to implement them and cover the tricks necessary to make them work well. With the basics covered, we will investigate using RNNs as general text classification and regression models, examining where they succeed and where they fail compared to more traditional text analysis models. Finally, a simple Python and Theano library for training RNNs with a scikit-learn style interface will be introduced, and we'll see how to use it through several hands-on tutorials on real-world text datasets.
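As background for the conceptual part, the core recurrence an RNN computes can be written in a few lines of NumPy. The sketch below runs an Elman-style RNN over a sequence of one-hot tokens and produces class scores from the final hidden state; the weights are random and untrained, so it only illustrates the computation, not the workshop's library.

```python
import numpy as np

def rnn_classify(sequence, Wxh, Whh, Why, bh, by):
    """Run a simple (Elman) RNN over a sequence of one-hot vectors and
    return class scores computed from the final hidden state."""
    h = np.zeros(Whh.shape[0])
    for x in sequence:
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # hidden state carries context forward
    return Why @ h + by                       # class scores (logits)

rng = np.random.RandomState(0)
vocab, hidden, classes = 20, 16, 2
params = (rng.randn(hidden, vocab) * 0.1, rng.randn(hidden, hidden) * 0.1,
          rng.randn(classes, hidden) * 0.1, np.zeros(hidden), np.zeros(classes))

# A "document" of 10 token ids, encoded one-hot.
tokens = rng.randint(0, vocab, size=10)
seq = [np.eye(vocab)[t] for t in tokens]
print(rnn_classify(seq, *params))
```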
Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods. The architecture involves three layers: batch processing, speed (or real-time) processing, and a serving layer that responds to queries; each comes with its own set of requirements.
The batch layer aims at perfect accuracy by processing the entire available dataset, an immutable, append-only set of raw data, using a distributed processing system. Output is typically stored in a read-only database, with results completely replacing the existing precomputed views. Apache Hadoop, Pig, and Hive are the de facto batch-processing systems.
The speed layer processes data in a streaming fashion and serves real-time views built from the most recent data. As a result, the speed layer is responsible for filling the "gap" caused by the batch layer's lag in providing views of the most recent data. This layer's views may not be as accurate as the batch layer's views, which are computed over the full dataset, so they are eventually replaced by them. Traditionally, Apache Storm is used in this layer.
The serving layer stores the results from the batch and speed layers and responds to queries in a low-latency, ad-hoc way.
One example of the lambda architecture in a machine learning context is a fraud detection system. In the speed layer, incoming streaming data can be used for online learning to update the model learned in the batch layer so that it incorporates recent events. Periodically, the model can be rebuilt using the full dataset.
Why Spark for the lambda architecture? Traditionally, different technologies are used in the batch layer and the speed layer. If your batch system is implemented with Apache Pig and your speed layer is implemented with Apache Storm, you have to write and maintain the same logic twice, in two different languages and frameworks. This very quickly becomes a maintenance nightmare. With Spark, we have a unified development framework for the batch and speed layers at scale. In this talk, an end-to-end example implemented in Spark will be shown, and we will discuss the development, testing, maintenance, and deployment of a lambda architecture system with Apache Spark.
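A rough PySpark sketch of the unification argument is shown below: one function holds the business logic, the batch layer applies it to the full event log, and the speed layer applies the same function to each micro-batch. The paths, port, and CSV layout are placeholders, not the talk's actual example.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def count_by_user(rdd):
    """Shared business logic: number of events per user id (first CSV field)."""
    return rdd.map(lambda line: (line.split(",")[0], 1)).reduceByKey(lambda a, b: a + b)

sc = SparkContext(appName="lambda-demo")

# Batch layer: recompute views over the full, immutable event log (path illustrative).
batch_view = count_by_user(sc.textFile("hdfs:///events/*.csv"))
batch_view.saveAsTextFile("hdfs:///views/batch_counts")

# Speed layer: apply the *same* function to each micro-batch of new events.
ssc = StreamingContext(sc, batchDuration=10)
events = ssc.socketTextStream("localhost", 9999)   # stand-in streaming source
realtime_view = events.transform(count_by_user)
realtime_view.pprint()

ssc.start()
ssc.awaitTermination()
```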
In this tutorial, we will look at the basics of deep learning on Hadoop using the Deeplearning4j framework. We will start with an overview of feed-forward neural networks and basic deep learning nets in a distributed context. We will cover the challenges of working with distributed neural networks and the tradeoffs versus working with a single GPU. Subsequently, we will go into some of the basics of Hadoop (relevant to the course), with an overview of the API and how to create and load neural nets into a Hadoop cluster. Finally, there will be instructor-guided problem solving: setting up an RBM for feature detection (debugging via renders) and a Deep Belief Network for classification, focusing on the Labeled Faces in the Wild dataset.
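As a small, single-machine illustration of the RBM-for-feature-detection idea (not the Deeplearning4j or Hadoop APIs used in the tutorial), the sketch below stacks scikit-learn's BernoulliRBM under a logistic regression on the bundled digits data.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# Binarize pixel intensities so they suit a Bernoulli RBM.
X, y = load_digits(return_X_y=True)
X = (X / 16.0 > 0.5).astype(np.float64)

# Unsupervised RBM learns features; logistic regression classifies on top of them.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy: %.3f" % model.score(X, y))
```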