This is an example of an MNIST neural network model (DNN) built in R with the TensorFlow API.

Sys.setenv(TENSORFLOW_PYTHON="/usr/local/bin/python") # point to a self-installed Python 2.7 (not the OS X default)
library(tensorflow)

### 1. Load the default MNIST data
datasets <- tf$contrib$learn$datasets
mnist <- datasets$mnist$read_data_sets("MNIST-data", one_hot = TRUE)

# Instead of running a single expensive operation independently from R,
# TensorFlow lets us describe a graph of interacting operations that run
# entirely outside R. (Approaches like this can be seen in a few machine
# learning libraries.)

### 2. Create a placeholder, a container for the computation in TensorFlow,
### specifying the dtype and the tensor's shape; NULL means any length
x <- tf$placeholder(tf$float32, shape(NULL, 784L))

### 3. Define the weights and biases as variables, initialized to zero
W <- tf$Variable(tf$zeros(shape(784L, 10L)))
b <- tf$Variable(tf$zeros(shape(10L)))

### 4. Set up the softmax model, multiplying x and W with the matmul function
model <- tf$nn$softmax(tf$matmul(x, W) + b)

### 5. x is a placeholder, and so is y, the correct labels
y <- tf$placeholder(tf$float32, shape(NULL, 10L))

### 6. Build the loss function (cross-entropy, which measures the
### similarity/difference between prediction and truth)
cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y * tf$log(model), reduction_indices = 1L))

### 7. Ask TensorFlow to minimize cross_entropy with the gradient descent
### algorithm at a learning rate of 0.5. This is the tuning part
optimizer <- tf$train$GradientDescentOptimizer(0.5)
train_step <- optimizer$minimize(cross_entropy)

### 8. Create an operation to initialize the variables we created. Note that
### this defines the operation but does not run it yet
init <- tf$global_variables_initializer()

### 9. Start a session and run the initialization
sess <- tf$Session()
sess$run(init)

### 10. Run the training step 3,000 times with batches of 100 to fill the placeholders
for (i in 1:3000) {
  batches <- mnist$train$next_batch(100L)
  batch_xs <- batches[[1]]
  batch_ys <- batches[[2]]
  sess$run(train_step, feed_dict = dict(x = batch_xs, y = batch_ys))
}
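The softmax and cross-entropy in steps 4 and 6 are plain arithmetic, so they can be checked by hand. Here is a minimal sketch of that math in NumPy (Python, the language TensorFlow runs in underneath the R API), using made-up logits for one 3-class example rather than the real 10-class MNIST model:

```python
import numpy as np

# Hypothetical logits for one example with 3 classes
# (this is the x %*% W + b part, before softmax)
logits = np.array([2.0, 1.0, 0.1])
y_true = np.array([1.0, 0.0, 0.0])  # one-hot true label

# Softmax: exponentiate, then normalize so the outputs sum to 1
probs = np.exp(logits) / np.sum(np.exp(logits))

# Cross-entropy for one example: -sum(y * log(model)),
# matching tf$reduce_sum(y * tf$log(model)) with a leading minus
loss = -np.sum(y_true * np.log(probs))

print(probs)       # a valid probability distribution
print(probs.sum()) # 1.0
print(loss)        # small when probs[0] is close to 1, large otherwise
```

Training with gradient descent pushes the logit of the true class up, which drives this loss toward zero.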
### 11. tf$argmax is an extremely useful function that gives the index of the
### highest entry in a tensor along some axis. Use tf$equal to check whether
### our prediction matches the truth, stored as a vector of booleans. We are
### not running the evaluation yet
correction_p <- tf$equal(tf$argmax(model, 1L), tf$argmax(y, 1L))

### 12. Cast correction_p to floating-point numbers and take the mean
accuracy <- tf$reduce_mean(tf$cast(correction_p, tf$float32))

### 13. Ask for the accuracy on the test data, i.e. run the evaluation
sess$run(accuracy, feed_dict = dict(x = mnist$test$images, y = mnist$test$labels))
## [1] 0.9201
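The evaluation in steps 11 to 13 reduces to comparing argmaxes and averaging the matches. A small NumPy (Python) sketch of the same computation, with hypothetical predictions for four images and three classes:

```python
import numpy as np

# Hypothetical predicted probabilities (rows = images, cols = classes)
model_out = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.6, 0.3, 0.1]])
# One-hot true labels
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [1, 0, 0]])

# tf$equal(tf$argmax(model, 1L), tf$argmax(y, 1L)): a vector of booleans
correct = np.argmax(model_out, axis=1) == np.argmax(labels, axis=1)

# tf$reduce_mean(tf$cast(..., tf$float32)): the fraction of correct predictions
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75 -- image 3 is predicted as class 2 but labeled class 0
```

The 0.9201 reported above is this same fraction computed over the 10,000 MNIST test images.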

Most of the code comes from

https://rstudio.github.io/tensorflow/index.html
