Multi Layer Perceptrons in Python

You can see more about MLP in R here:
https://charleshsliao.wordpress.com/2017/04/10/tune-multi-layer-perceptron-mlp-in-r-with-mnist/

Generally speaking, a deep learning model is a neural network with more than one hidden layer.

Whether a deep learning model is successful depends largely on how its parameters are tuned.

We use data from the sklearn library, and the IDE is Sublime Text 3. Most of the code comes from the book: https://www.goodreads.com/book/show/32439431-introduction-to-machine-learning-with-python?from_search=true

###1. MLP
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_breast_cancer
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

cancer=load_breast_cancer()
X_train,X_test,y_train,y_test=train_test_split(cancer.data,cancer.target,random_state=0)
mlp=MLPClassifier()
mlp.fit(X_train,y_train)

print("accuracy on training set: %f" % mlp.score(X_train, y_train))
print("\naccuracy on test set: %f" % mlp.score(X_test, y_test))
###accuracy on training set: 0.906103
###accuracy on test set: 0.923077
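
A quick optional check (not from the original post) of what the default model looks like; by default MLPClassifier uses a single hidden layer of 100 units and runs at most 200 training iterations:

###inspect the fitted default network: hidden_layer_sizes defaults to (100,),
###n_layers_ counts input + hidden + output = 3, and n_iter_ shows how many
###iterations actually ran (a ConvergenceWarning may appear if max_iter is hit)
print(mlp.hidden_layer_sizes)
print(mlp.n_layers_)
print(mlp.n_iter_)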

###2. Rescale the data by hand and rebuild the model with parameters tuned
mean_on_train=X_train.mean(axis=0)
std_on_train=X_train.std(axis=0)
X_train_scaled=(X_train-mean_on_train)/std_on_train
X_test_scaled=(X_test-mean_on_train)/std_on_train
mlp01=MLPClassifier(max_iter=1000,solver='lbfgs',activation='tanh',
	random_state=1,hidden_layer_sizes=[10,10])
mlp01.fit(X_train_scaled,y_train)
print("\naccuracy on training set: %f" % mlp01.score(X_train_scaled, y_train))
print("\naccuracy on test set: %f" % mlp01.score(X_test_scaled, y_test))
###accuracy on training set: 1.000000
###accuracy on test set: 0.958042

###we might try to decrease the model complexity to get better generalization
###performance. Here, we choose to increase the alpha parameter from the default 0.0001 to 0.1 to regularize the model.
mlp02=MLPClassifier(max_iter=1000,alpha=0.1,solver='lbfgs',activation='tanh',
	random_state=1,hidden_layer_sizes=[10,10])
mlp02.fit(X_train_scaled,y_train)
print("\naccuracy on training set: %f" % mlp02.score(X_train_scaled, y_train))
print("\naccuracy on test set: %f" % mlp02.score(X_test_scaled, y_test))
###accuracy on training set: 1.000000
###accuracy on test set: 0.972028
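
The manual rescaling above can also be done with scikit-learn's StandardScaler; a minimal equivalent sketch (the scaler is fit on the training split only, and X_train_scaled2/X_test_scaled2 are names introduced here):

from sklearn.preprocessing import StandardScaler
scaler=StandardScaler().fit(X_train)	###learns the per-feature mean and std from the training data
X_train_scaled2=scaler.transform(X_train)
X_test_scaled2=scaler.transform(X_test)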

###plot the heatmaps. The plots below show the weights learned connecting the input features to the first hidden layer.
for model in (mlp,mlp01,mlp02):
	plt.figure(figsize=(20,5))
	plt.imshow(model.coefs_[0],interpolation='none',cmap='viridis')
	plt.yticks(range(30),cancer.feature_names)
	plt.colorbar()
	plt.show()
###Light green represents large positive values, while dark blue represents negative values

Features that have very small weights for all of the hidden units are “less important” to the model.

[figure_1.png, figure_2.png, figure_3.png: first-layer weight heatmaps for mlp, mlp01, and mlp02]
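
One way to read the heatmaps numerically (an illustrative sketch, not part of the original post) is to average the absolute first-layer weights per input feature; features with small averages are candidates for the “less important” ones:

import numpy as np
###mean absolute weight from each input feature into the first hidden layer of mlp02
feature_weight=np.abs(mlp02.coefs_[0]).mean(axis=1)
for name,w in sorted(zip(cancer.feature_names,feature_weight),key=lambda t:t[1]):
	print("%s: %.4f" % (name,w))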

###The most important parameters are the number of layers and the number of hidden units per layer.
###You should start with one or two hidden layers, and possibly expand from there.
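
###as an illustration (hypothetical sizes, not results from this post), hidden_layer_sizes
###controls both the depth and the width of the network:
mlp_one=MLPClassifier(solver='lbfgs',random_state=1,max_iter=1000,
	hidden_layer_sizes=[100])	###one hidden layer with 100 units
mlp_two=MLPClassifier(solver='lbfgs',random_state=1,max_iter=1000,
	hidden_layer_sizes=[50,50])	###two hidden layers with 50 units each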

###A common way to adjust parameters in a neural network is to first create a network that is
###large enough to overfit, making sure that the task can actually be learned by the network.
###Once you know the training data can be learned, either shrink the network or increase alpha to
###add regularization, which will improve generalization performance.
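
###a minimal sketch of that workflow (the alpha values here are illustrative, not from this post):
###keep the over-fitting architecture and sweep alpha, watching the gap between the two scores
for a in [0.0001,0.01,0.1,1,10]:
	m=MLPClassifier(max_iter=1000,alpha=a,solver='lbfgs',activation='tanh',
		random_state=1,hidden_layer_sizes=[10,10])
	m.fit(X_train_scaled,y_train)
	print("alpha=%s train=%.3f test=%.3f" % (a,m.score(X_train_scaled,y_train),m.score(X_test_scaled,y_test)))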

###There is also the question of how to learn the model, i.e. the algorithm used to learn the
###parameters, which is set with the solver parameter. 'adam' works well in most situations,
###'lbfgs' is robust but can be slow on larger models or datasets, and 'sgd' exposes many
###additional parameters that more advanced users may want to tune.
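
###a quick illustrative comparison of the three solvers on the scaled data (the settings are
###assumptions for this sketch, not tuned values from this post):
for s in ['adam','lbfgs','sgd']:
	m=MLPClassifier(solver=s,max_iter=1000,random_state=1,hidden_layer_sizes=[10,10])
	m.fit(X_train_scaled,y_train)
	print("%s: test accuracy %.3f" % (s,m.score(X_test_scaled,y_test)))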
