



How to save Tensorflow predictions to data frame?



I am new to TensorFlow. I have trained a TensorFlow model, but now I need to take the model's predictions and add them to my original test set as a new column. How can I do that?



def model(self, layers_dims, X_train, Y_train, X_test, Y_test, learning_rate=0.00001,
          num_epochs=1000, print_cost=True):
    """
    Implements a three-layer TensorFlow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size, number of training examples)
    Y_train -- training labels, of shape (output size, number of training examples)
    X_test -- test set, of shape (input size, number of test examples)
    Y_test -- test labels, of shape (output size, number of test examples)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()   # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)       # to keep consistent results
    seed = 3                    # to keep consistent results
    (n_x, m) = X_train.shape    # n_x: input size, m: number of examples in the train set
    n_y = Y_train.shape[0]      # n_y: output size
    costs = []                  # to keep track of the cost

    # Create placeholders of shape (n_x, n_y)
    X, Y = self.create_placeholders(n_x, n_y)

    # Initialize parameters
    parameters = NN_predict_trading_decisions.initialize_parameters(layers_dims)

    # Forward propagation: build the forward propagation in the TensorFlow graph
    ZL = NN_predict_trading_decisions.forward_propagation(X, parameters)

    # Cost function: add the cost function to the TensorFlow graph
    cost = NN_predict_trading_decisions.compute_cost(ZL, Y)

    # Backpropagation: define the TensorFlow optimizer
    # optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the TensorFlow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.  # defines a cost related to an epoch
            current_cost = sess.run([optimizer, cost], feed_dict={X: X_train, Y: Y_train})
            epoch_cost += current_cost[1]

            # Print the cost every 100 epochs
            if print_cost and epoch % 100 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost and epoch % 5 == 0:
                costs.append(epoch_cost)

        # Plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per tens)')
        plt.title("Learning rate = " + str(learning_rate))
        plt.show()

        # Save the trained parameters in a variable
        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(ZL), tf.argmax(Y))

        # Calculate accuracy on the train and test sets
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

    return parameters









python tensorflow pandas






asked Sep 5 '18 at 10:25 by Myron Leskiv, edited Sep 5 '18 at 11:00











  • Save output in a list or numpy array. – Ankit Seth, Sep 5 '18 at 10:29

  • How can I do that? Sorry, I am very new) – Myron Leskiv, Sep 5 '18 at 10:34

  • Can you post some part of your code where you are getting predictions after training? – Ankit Seth, Sep 5 '18 at 10:38

  • So, I did a little research and found this post, it would be useful for you. stackoverflow.com/questions/40002084/… – Ankit Seth, Sep 5 '18 at 14:35

  • Are your predictions the variable ZL? Or are you talking about the correct_prediction variable? What do you want to put in a DataFrame and what type is it? You can run type(my_variable) on it. – n1k31t4, Sep 5 '18 at 14:51
















2 Answers
Thanks to Ankit Seth I found the answer.

# This returns the output layer of forward prop with the trained parameters of the model.
pred = forward_propagation(X, parameters)
predictions_sigm = pred.eval(feed_dict={X: X_test})
predictions_sigm = sigmoid(predictions_sigm)

# pred.eval returns the linear output for each class separately, so we need to pick the
# index of the maximum of them. I don't use a softmax layer, since the result is the same.

# init list of predicted classes
y_list = []

for i in range(predictions_sigm.shape[1]):
    class_list = []
    class_list.append(predictions_sigm[0][i])
    class_list.append(predictions_sigm[1][i])
    class_list.append(predictions_sigm[2][i])
    class_list.append(predictions_sigm[3][i])

    # get the index of the maximum sigmoid value; it corresponds to the predicted class
    y = np.argmax(class_list)
    y_list.append(y)

# then append y_list to the dataframe as a new column
df['predicted_y'] = y_list
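
As a side note, a minimal vectorized sketch, assuming predictions_sigm is a NumPy array of shape (n_classes, number_of_examples) as above: the per-example loop can be replaced with a single argmax over the class axis.

import numpy as np

# one predicted class index per example (argmax over the class axis, i.e. axis 0)
df['predicted_y'] = np.argmax(predictions_sigm, axis=0)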






answered Sep 6 '18 at 7:07 by Myron Leskiv

Create a list of the inputs, run each input through your model, and save the predictions into a list. Then you can run the following code.

preds = YOUR_LIST_OF_PREDICTION_FROM_NN
result = pd.DataFrame(data={'Id': YOUR_TEST_DATAFRAME['Id'], 'PREDICTION_COLUMN_NAME': preds})
result.to_csv(path_or_buf='submission.csv', index=False, header=True)

Then extract the predictions from a tensor in TensorFlow. This will extract the data from the tensor:

pred = forward_propagation(X, parameters)
predictions = pred.eval(feed_dict={X: X_test})
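
To connect the two snippets, here is a minimal sketch. It assumes predictions is the (n_classes, number_of_test_examples) array returned by pred.eval above, and that the rows of YOUR_TEST_DATAFRAME are in the same order as the columns of X_test.

import numpy as np
import pandas as pd

# collapse the class scores to one predicted label per example
preds = np.argmax(predictions, axis=0)

# build the result table and write it out
result = pd.DataFrame(data={'Id': YOUR_TEST_DATAFRAME['Id'], 'predicted_y': preds})
result.to_csv('submission.csv', index=False)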






answered Sep 6 '18 at 7:48 by Prasan Karunarathna, edited Sep 6 '18 at 8:58 by ebrahimi







  • you copied my post from Kaggle) – Myron Leskiv, Sep 6 '18 at 10:20











