There are three main classes of artificial neural network: Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs).

The Multilayer Perceptron is the classical type of neural network. It consists of one or more layers of neurons: data is fed to the input layer (also known as the visible layer), one or more hidden layers provide increasing levels of abstraction, and predictions are made at the output layer. MLPs are widely used for classification problems, where each input is assigned a class, and are also suitable for regression problems, where a real-valued quantity is predicted from a set of inputs. They work well with data provided in tabular form, and they are flexible in general: an MLP learns a mapping from inputs to outputs. Other data types can be adapted to this format; for example, the pixels of an image or document can be flattened into one long row of values and fed into an MLP, so MLPs can be applied to image data, text data, time series data, and more.

Convolutional Neural Networks were designed to map image data to an output variable. The strength of a CNN is its ability to develop an internal representation of an image through spatial (convolution) operations, which allows the model to learn structures in the data that are invariant to position and scale. CNNs are generally used for image data and for classification and regression prediction problems on it. Although the input is traditionally a two-dimensional matrix, it can also be adapted so that a CNN develops an internal representation of a particular sequence.

Recurrent Neural Networks were designed and developed to work with sequence prediction problems, which come in many forms: one-to-many, many-to-one, and many-to-many. Plain RNNs are difficult to train; the Long Short-Term Memory (LSTM) network is the most successful RNN variant. RNNs are generally used for text data, speech data, and classification and regression prediction problems over sequences.
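As a concrete, toy illustration of the layered mapping an MLP computes, the following NumPy sketch runs a forward pass from a tabular input batch to class probabilities. The layer sizes, random weights, and helper names here are illustrative assumptions, not a trained model or any particular library's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Hidden-layer nonlinearity: keeps positive values, zeroes out the rest.
    return np.maximum(0.0, x)

def softmax(z):
    # Output-layer nonlinearity: turns scores into class probabilities.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 4 input features -> 8 hidden units -> 3 output classes.
# Randomly initialised weights stand in for trained parameters.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def mlp_forward(x):
    h = relu(x @ W1 + b1)        # hidden layer: one level of abstraction
    return softmax(h @ W2 + b2)  # output layer: predictions

x = rng.normal(size=(2, 4))      # a batch of two tabular rows
probs = mlp_forward(x)           # shape (2, 3): one probability row per input
```

Note that "flattening an image into one long row" for an MLP just means reshaping the pixel matrix into a vector like `x` above; the network itself sees only a flat list of numbers.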
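The spatial operation at the heart of a CNN can likewise be sketched directly: a small kernel slides over the image, and the same kernel fires on a pattern wherever it appears, which is the source of the position invariance described above. The edge kernel and image below are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation: slide the kernel over every position.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy vertical-edge detector: responds to left-to-right intensity changes.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

image = np.zeros((4, 4))
image[:, 2:] = 1.0               # left half dark, right half bright
fmap = conv2d(image, edge_kernel)
```

Every row of `fmap` has the same response, peaking at the dark-to-bright boundary: the kernel detects the edge wherever it sits, which is exactly the position-invariant structure a CNN learns.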
RNNs are generally not suitable for tabular datasets, and their benefit on classical time series forecasting problems remains unclear.
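What makes an RNN suited to sequences, rather than tabular rows, is its recurrence: the hidden state at each timestep depends on both the current input and the previous state, so context flows along the sequence. A minimal Elman-style step in NumPy, with hypothetical sizes and untrained random weights, looks like this (an LSTM adds gating on top of the same idea):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 3-dimensional inputs, 5-dimensional hidden state.
Wx = rng.normal(scale=0.1, size=(3, 5))  # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(5, 5))  # hidden-to-hidden (recurrent) weights
b = np.zeros(5)

def rnn_step(h, x):
    # One recurrence: the new state mixes the previous state with the input.
    return np.tanh(x @ Wx + h @ Wh + b)

sequence = rng.normal(size=(7, 3))  # a toy sequence of 7 timesteps
h = np.zeros(5)                     # initial hidden state
for x in sequence:
    h = rnn_step(h, x)              # state carries context across steps
```

A many-to-one prediction (e.g. classifying a whole sentence) would read off the final `h`; a many-to-many prediction would read off `h` at every step.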