Document Type

Dissertation

Date of Degree

Spring 2016

Degree Name

PhD (Doctor of Philosophy)

Degree In

Industrial Engineering

First Advisor

Amaury Lendasse

Abstract

Extreme Learning Machine (ELM) is a recently proposed way of training Single Layer Feed-forward Neural Networks with an explicitly given solution, which exists because the input weights and biases are generated randomly and never change. In general, the method achieves performance comparable to Error Back-Propagation, while the training time is up to five orders of magnitude smaller. Despite the random initialization, the regularization procedures explained in the thesis ensure consistently good results.
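
The core training procedure can be summarized in a few lines of linear algebra: project the inputs through a fixed random hidden layer and solve a regularized least-squares problem for the output weights. The sketch below, in Python with NumPy, only illustrates this basic idea; the hyperbolic tangent activation, the ridge parameter alpha, and the function names are illustrative assumptions, not the exact configuration used in the thesis.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, alpha=1e-3, rng=None):
    """Train a basic ELM: fixed random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Random input weights and biases are generated once and never updated.
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                              # hidden layer outputs
    # Regularized (ridge) least-squares solution for the output weights.
    beta = np.linalg.solve(H.T @ H + alpha * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict with a trained ELM."""
    return np.tanh(X @ W + b) @ beta
```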

While the general methodology of ELMs is well developed, the sheer speed of the method enables its atypical use in state-of-the-art techniques based on repeated model re-training and re-evaluation. Three such techniques are explained in the third chapter: a way of visualizing high-dimensional data onto a provided fixed set of visualization points, an approach for detecting samples in a dataset with incorrect labels (mistakenly assigned, mistyped, or assigned with low confidence), and a way of computing confidence intervals for ELM predictions. All three methods prove useful and allow for even more applications in the future.
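
The common ingredient of these techniques is that re-training an ELM is cheap enough to be repeated thousands or millions of times. The sketch below (reusing the hypothetical elm_train and elm_predict functions from the previous example) only illustrates that repeated-retraining idea by measuring the spread of predictions over many random initializations; it is not the actual visualization, mislabel-detection, or confidence-interval procedure of Chapter 3.

```python
import numpy as np

def prediction_spread(X_train, T_train, X_test, n_models=100, **elm_kwargs):
    """Retrain an ELM many times and report the per-sample prediction spread.

    Assumes elm_train / elm_predict as defined in the previous sketch.
    """
    preds = []
    for seed in range(n_models):
        # Each retraining uses a different random hidden layer.
        W, b, beta = elm_train(X_train, T_train, rng=seed, **elm_kwargs)
        preds.append(elm_predict(X_test, W, b, beta))
    preds = np.stack(preds)            # shape: (n_models, n_test_samples, ...)
    return preds.mean(axis=0), preds.std(axis=0)
```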

The ELM method is a promising basis for dealing with Big Data because it naturally handles the problem of large data size. An adaptation of ELM to Big Data problems and a corresponding toolbox (published and freely available) are described in Chapter 4. The adaptation includes an iterative solution of ELM that satisfies limited computer memory constraints and allows for convenient parallelization. Other tools are GPU-accelerated computations and support for a convenient storage format for huge datasets. The chapter also provides two real-world examples of dealing with Big Data using ELMs, which present other problems of Big Data, such as veracity and velocity, together with solutions to them in the particular problem context.
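
One standard way to obtain such an iterative, memory-bounded solution is to accumulate the hidden-layer matrices batch by batch and solve for the output weights once at the end; partial sums from different workers can simply be added together, which gives the parallelization. The sketch below illustrates this idea under the same illustrative setup as above; it is not the published toolbox's actual code.

```python
import numpy as np

def elm_train_batched(batches, n_hidden, n_outputs, alpha=1e-3, rng=0):
    """Accumulate H'H and H'T over data batches, then solve once.

    `batches` yields (X_chunk, T_chunk) pairs, e.g. read piece by piece from
    disk.  Only n_hidden x n_hidden matrices are kept in memory, and partial
    sums from several machines can be added before the final solve.
    """
    rng = np.random.default_rng(rng)
    W = b = None
    HtH = np.zeros((n_hidden, n_hidden))
    HtT = np.zeros((n_hidden, n_outputs))
    for X, T in batches:
        if W is None:                   # fix the random projection on the first batch
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
        T = np.asarray(T).reshape(len(X), n_outputs)   # ensure 2-D targets
        H = np.tanh(X @ W + b)
        HtH += H.T @ H
        HtT += H.T @ T
    beta = np.linalg.solve(HtH + alpha * np.eye(n_hidden), HtT)
    return W, b, beta
```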

Public Abstract

Real-world tasks can often be written mathematically as maximizing a number (like income) or minimizing another number (like expenses). Some of these tasks have an exact mathematical solution. For other tasks, the exact solution is unknown, but there are multiple correct examples. Machine Learning can learn an approximate solution from examples, and this thesis is about state-of-the-art methods in Machine Learning.

Consider the speech recognition problem: every one of us can hear a word in our native language and easily write it down with letters, but nobody can write down a mathematical algorithm for how he or she does that. Machine Learning has a method called the Artificial Neural Network, which mimics the way a brain works and can learn from a set of prepared data (like sounds with the corresponding letters in the previous example) how to solve a problem without an exact algorithm: not perfectly, but well enough.

This thesis is about Extreme Learning Machines, a kind of Artificial Neural Network that runs very fast on a computer. It presents two directions of research. First, the Extreme Learning Machine is trained not once (as Machine Learning methods normally are) but millions of times, each time a bit differently. This approach helps learn new things about the data, such as whether there are any mistakes in the labelling, or what is the best way of showing complex data on a piece of paper.

Another direction of research is how to make Extreme Learning Machines even faster, or able to run with even more data, which is the same thing. The method is improved by a better algorithm and good program code. This allows running ELM on computers with a small amount of memory, using a graphics card to do the computations faster, reading very large data piece by piece from a hard drive, and computing a large ELM in parts on many different computers at the same time to get results faster. An ELM method with all these improvements is published as a freely downloadable computer program, so anyone can use a fast ELM without being an expert in Artificial Neural Networks. This chapter also presents two examples of solving large real-world problems with ELMs, where the main difficulty is not the ELM itself but the peculiar provided data and special requirements for a good solution.

Keywords

Artificial Neural Network, Big Data, Extreme Learning Machine

Pages

xix, 277

Bibliography

259-277

Copyright

Copyright 2016 Anton Akusok
