
Seq2seq model that gets as input a sentence and outputs the same sentence





I tried to implement a seq2seq model that takes sentences (hate tweets) as input and outputs exactly the same sentences. For that reason I fed exactly the same sentences to both the encoder and the decoder. I trained the model on 10,000 samples (sentences) for 100 epochs. A small sample of the results is shown below. The model does not seem to be doing well: the only time it manages to output the same sentence is when the sentence is empty or consists of a single word.



What could be causing this? Do you think more epochs are needed?



Ignore the predicted class; it is irrelevant to the problem.



-
Input sentence: !!! RT @mayasolovely As a woman you shouldn't complain about cleaning up your house. & as a man you should always take the trash out...
Predicted class: neutral
Decoded sentence: funnt texted that bitch and show up the careland for the face that bitch ass niggas to me in the stay and still less niggas gonna smack
Predicted class: hate
-
Input sentence: !!!!! RT @mleew17 boy dats cold...tyga dwn bad for cuffin dat hoe in the 1st place!!
Predicted class: neutral
Decoded sentence: for a bitch that bitch ass nigga all niggas and call httpt.co8ynkynFWIF #heersh #fan
Predicted class: hate
-
Input sentence: !!!!!!! RT @UrKindOfBrand Dawg!!!! RT @80sbaby4life You ever fuck a bitch and she start to cry You be confused as shit
Predicted class: hate
Decoded sentence: for a bitch when a colored man i got a mine of the lost to the whole of the way to fuck where the mestions
Predicted class: hate
-
Input sentence: !!!!!!!!! RT @C_G_Anderson @viva_based she look like a tranny
Predicted class: neutral
Decoded sentence: & These hoes ain't loyal 😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂&#12851
Predicted class: neutral
-
Input sentence: !!!!!!!!!!!!! RT @ShenikaRoberts The shit you hear about me might be true or it might be faker than the bitch who told it to ya 
Predicted class: hate
Decoded sentence: & These hoes ain't loyal 😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂&#12851
Predicted class: neutral
-
Input sentence: !!!!!!!!!!!!!!!!!!@T_Madison_x The shit just blows me..claim you so faithful and down for somebody but still fucking with hoes! 😂😂😂
Predicted class: hate
Decoded sentence: & These hoes ain't loyal 😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂&#12851
Predicted class: neutral
-
Input sentence: !!!!!!@__BrighterDays I can not just sit up and HATE on another bitch .. I got too much shit going on!
Predicted class: hate
Decoded sentence: & These hoes ain't loyal for the most his start will can't have a good and the bitches they are ho
Predicted class: hate
-
Input sentence: !!!!“@selfiequeenbri cause I'm tired of you big bitches coming for us skinny girls!!”
Predicted class: hate
Decoded sentence: for a bitch that bitch ass nigga all niggas and call httpt.co8ynkynFWIF #heersh #fanges
Predicted class: hate
-
Input sentence: & you might not get ya bitch back & thats that
Predicted class: hate
Decoded sentence: it's trashet her and send this still like a pussy.
Predicted class: hate
-
Input sentence: @rhythmixx_ hobbies include fighting Mariam
Predicted class: neutral
Decoded sentence: ' I wonder it this bitch he still fucked.”
Predicted class: hate
-
Input sentence:
Predicted class: neutral
Decoded sentence:
Predicted class: neutral
-
Input sentence: bitch
Predicted class: hate
Decoded sentence: bitch
Predicted class: hate
-
Input sentence: Keeks is a bitch she curves everyone lol I walked into a conversation like this. Smh
Predicted class: hate
Decoded sentence: bitch who do you love me should vige of the words and say it out of them all u cant
Predicted class: hate
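
To put a number on how badly the reconstruction fails, a rough check would be the exact-match rate and an average per-character accuracy over the decoded training sentences. This is only a sketch; it assumes the decode_sequence function, input_texts and encoder_input_data that are defined in the code below:

exact_matches = 0
char_accuracies = []
for seq_index in range(100):
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded = decode_sequence(input_seq).rstrip('\n')  # strip the end-of-sequence marker
    target = input_texts[seq_index]
    exact_matches += int(decoded == target)
    # Per-character accuracy over the aligned positions of the two strings.
    matches = sum(a == b for a, b in zip(decoded, target))
    char_accuracies.append(matches / max(len(target), 1))

print('Exact-match rate:', exact_matches / 100)
print('Mean per-character accuracy:', sum(char_accuracies) / len(char_accuracies))

Numbers close to zero would confirm that the model is not just a few characters off but is producing unrelated text.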


The following is the code I use:



from __future__ import print_function

import pickle

import numpy as np
from keras.layers import Input, LSTM, Dense
from keras.models import Model, load_model

batch_size = 64      # Batch size for training.
epochs = 100         # Number of epochs to train for.
latent_dim = 256     # Latent dimensionality of the encoding space.
num_samples = 10000  # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'seq2seq_hate_tweets.txt'

# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text = line
    target_text = line
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as the "end sequence" character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)

input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep.
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`.
#model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
#model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
#model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
#          batch_size=batch_size,
#          epochs=epochs,
#          validation_split=0.2)

#model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
#del model  # deletes the existing model

# Returns a compiled model identical to the previous one.
model = load_model('my_model.h5')

# Next: inference mode (sampling).

# Define sampling models
#encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
#decoder_model = Model(
#    [decoder_inputs] + decoder_states_inputs,
#    [decoder_outputs] + decoder_states)

#encoder_model.save('my_encoder_model.h5')
#decoder_model.save('my_decoder_model.h5')
#del encoder_model
#del decoder_model

# Returns compiled models identical to the previously saved ones.
encoder_model = load_model('my_encoder_model.h5')
decoder_model = load_model('my_decoder_model.h5')

# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())


def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    #print("states_value is : ")
    #print(states_value)
    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length
        # or find the stop character.
        if (sampled_char == '\n' or
                len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [h, c]

    return decoded_sentence


for seq_index in range(100):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)
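
As a side note on the data preparation, here is a tiny worked example of the one-timestep offset between decoder_input_data and decoder_target_data that the vectorisation loop above produces. The two-character "alphabet" and index mapping are made up purely for illustration; this is not part of my actual script:

import numpy as np

# Toy index for the target text '\t' + 'hi' + '\n' (made-up alphabet).
toy_index = {'\t': 0, '\n': 1, 'h': 2, 'i': 3}
toy_target = '\t' + 'hi' + '\n'

dec_in = np.zeros((1, len(toy_target), len(toy_index)), dtype='float32')
dec_out = np.zeros((1, len(toy_target), len(toy_index)), dtype='float32')
for t, char in enumerate(toy_target):
    dec_in[0, t, toy_index[char]] = 1.
    if t > 0:
        # The target at step t-1 is the character that follows the
        # decoder input character at step t-1 (teacher forcing).
        dec_out[0, t - 1, toy_index[char]] = 1.

print(dec_in[0].argmax(axis=-1))   # [0 2 3 1] -> '\t', 'h', 'i', '\n'
print(dec_out[0].argmax(axis=-1))  # [2 3 1 0] -> 'h', 'i', '\n', all-zero padding row

So the decoder sees '\t', 'h', 'i' and is trained to predict 'h', 'i', '\n' one step ahead.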









      machine-learning nlp lstm rnn sequence-to-sequence






asked Apr 2 at 18:33 by thelaw



















