How to feed LSTM with different input array sizes?


If I want to build an LSTM network and feed it input arrays of different sizes, how can I do that?

For example, I want to receive voice or text messages in a different language and translate them. So the first input might be "hello" but the second is "how are you doing". How can I design an LSTM that can handle inputs of different sizes?

I am using the Keras implementation of LSTM.










      keras lstm






      asked Apr 7 at 8:04









      asked by user145959





















          2 Answers





































          The easiest way is to use Padding and Masking.



          There are three general ways to handle variable-length sequences:



          1. Padding and masking (which can be used for (3)),

          2. Batch size = 1, and

          3. Batch size > 1, with equi-length samples in each batch.

          Padding and masking



          In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then



          X = [

          [[1, 1.1],
          [0.9, 0.95]], # sequence 1 (2 timestamps)

          [[2, 2.2],
          [1.9, 1.95],
          [1.8, 1.85]], # sequence 2 (3 timestamps)

          ]


          will be converted to



          X2 = [

          [[1, 1.1],
          [0.9, 0.95],
          [-10, -10]], # padded sequence 1 (3 timestamps)

          [[2, 2.2],
          [1.9, 1.95],
          [1.8, 1.85]], # sequence 2 (3 timestamps)
          ]


          This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they don't exist. A complete example is given at the end.
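          The conversion above can be sketched in plain NumPy (the `pad_ragged` helper is just an illustration, not part of Keras; only the padding itself matters):

          ```python
          import numpy as np

          special_value = -10.0  # the value the Masking layer will later skip

          def pad_ragged(sequences, pad_value):
              """Right-pad a list of (timesteps, features) arrays to the longest length."""
              max_len = max(len(s) for s in sequences)
              dim = sequences[0].shape[1]
              out = np.full((len(sequences), max_len, dim), pad_value)
              for i, s in enumerate(sequences):
                  out[i, :len(s), :] = s
              return out

          X = [np.array([[1, 1.1], [0.9, 0.95]]),               # sequence 1 (2 timestamps)
               np.array([[2, 2.2], [1.9, 1.95], [1.8, 1.85]])]  # sequence 2 (3 timestamps)
          X2 = pad_ragged(X, special_value)
          print(X2.shape)  # (2, 3, 2): both sequences now span 3 timestamps
          ```

          Sequence 1's missing third timestamp is now a row of `-10`s, exactly as in the hand-written `X2` above.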



          For cases (2) and (3), you need to set the timesteps dimension of the LSTM's input_shape to None, e.g.



          model.add(LSTM(units, input_shape=(None, dimension)))


          this way the LSTM accepts batches with different lengths, although all samples inside each batch must have the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit).



          I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, pad the shorter ones the same as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timestamps, e.g.



          model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
          model.add(LSTM(lstm_units))


          where the first dimension of input_shape in Masking is again None, to allow batches with different lengths.



          Here is the code for cases (1) and (2):



          from keras import Sequential
          from keras.utils import Sequence
          from keras.layers import LSTM, Dense, Masking
          import numpy as np


          class MyBatchGenerator(Sequence):
              'Generates data for Keras'
              def __init__(self, X, y, batch_size=1, shuffle=True):
                  'Initialization'
                  self.X = X
                  self.y = y
                  self.batch_size = batch_size
                  self.shuffle = shuffle
                  self.on_epoch_end()

              def __len__(self):
                  'Denotes the number of batches per epoch'
                  return int(np.floor(len(self.y)/self.batch_size))

              def __getitem__(self, index):
                  return self.__data_generation(index)

              def on_epoch_end(self):
                  'Shuffles indexes after each epoch'
                  self.indexes = np.arange(len(self.y))
                  if self.shuffle:
                      np.random.shuffle(self.indexes)

              def __data_generation(self, index):
                  index = self.indexes[index]  # map to the (possibly shuffled) sample
                  Xb = np.empty((self.batch_size, *self.X[index].shape))
                  yb = np.empty((self.batch_size, *self.y[index].shape))
                  # naively use the same sample over and over again
                  for s in range(0, self.batch_size):
                      Xb[s] = self.X[index]
                      yb[s] = self.y[index]
                  return Xb, yb


          # Parameters
          N = 1000
          halfN = int(N/2)
          dimension = 2
          lstm_units = 3

          # Data
          np.random.seed(123)  # to generate the same numbers
          # create sequence lengths between 1 and 9
          seq_lens = np.random.randint(1, 10, halfN)
          # object dtype because the sequences are ragged (different lengths)
          X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
          y_zero = np.zeros((halfN, 1))
          X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
          y_one = np.ones((halfN, 1))
          p = np.random.permutation(N)  # to shuffle zero and one classes
          X = np.concatenate((X_zero, X_one))[p]
          y = np.concatenate((y_zero, y_one))[p]

          # Batch = 1
          model = Sequential()
          model.add(LSTM(lstm_units, input_shape=(None, dimension)))
          model.add(Dense(1, activation='sigmoid'))
          model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
          print(model.summary())
          model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

          # Padding and Masking
          special_value = -10.0
          max_seq_len = max(seq_lens)
          Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
          for s, x in enumerate(X):
              seq_len = x.shape[0]
              Xpad[s, 0:seq_len, :] = x
          model2 = Sequential()
          model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
          model2.add(LSTM(lstm_units))
          model2.add(Dense(1, activation='sigmoid'))
          model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
          print(model2.summary())
          model2.fit(Xpad, y, epochs=50, batch_size=32)


          Extra notes



          1. Note that if we pad without masking, the padded value will be regarded as an actual value, and thus becomes noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. use a mask.




























          • Thank you very much Esmailian for your complete example. Just one question: What is the difference between using padding+masking and only using padding (like what the other answer suggested)? Will we see a considerable effect on the final result? – user145959, Apr 7 at 21:01










          • @user145959 my pleasure! I added a note at the end. – Esmailian, Apr 7 at 23:13










          • Wow, a great answer! It's called bucketing, right? – Aditya, Apr 8 at 3:39







          • @Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points. – Esmailian, Apr 8 at 11:23































          LSTM layers can handle inputs of multiple sizes, but you need to preprocess the sequences before they are fed to the LSTM.



          Padding the sequences:



          You need to pad the sequences of varying length to a fixed length. For this preprocessing, you need to determine the maximum sequence length in your dataset.



          Sequences are most often padded with the value 0. You can do this in Keras with:



          y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )


          • If a sequence is shorter than the max length, zeros are added until its length equals the max length (by default Keras pads at the start; pass padding='post' to append them at the end instead).


          • If a sequence is longer than the max length, it is trimmed to the max length (by default from the start; pass truncating='post' to trim from the end).
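          As a rough illustration of what pad_sequences does with its defaults (pad and truncate at the start), here is a plain-NumPy sketch; pad_pre is a hypothetical helper written for this example, not part of Keras:

          ```python
          import numpy as np

          def pad_pre(sequences, maxlen, value=0):
              """Mimic pad_sequences defaults: padding='pre', truncating='pre'."""
              out = np.full((len(sequences), maxlen), value)
              for i, seq in enumerate(sequences):
                  seq = seq[-maxlen:]               # truncating='pre': drop items from the front
                  out[i, maxlen - len(seq):] = seq  # padding='pre': fill value goes in front
              return out

          x = [[1, 2], [3, 4, 5, 6, 7]]
          print(pad_pre(x, maxlen=4))
          # [[0 0 1 2]
          #  [4 5 6 7]]
          ```

          The short sequence gets its zeros in front, and the long one loses its first element, matching the two bullet points above.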























          • Padding everything to a fixed length is wastage of space. – Aditya, Apr 8 at 3:39











          y = np.concatenate((y_zero, y_one))[p]

          # Batch = 1
          model = Sequential()
          model.add(LSTM(lstm_units, input_shape=(None, dimension)))
          model.add(Dense(1, activation='sigmoid'))
          model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
          print(model.summary())
          model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

          # Padding and Masking
          special_value = -10.0
          max_seq_len = max(seq_lens)
          Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
          for s, x in enumerate(X):
          seq_len = x.shape[0]
          Xpad[s, 0:seq_len, :] = x
          model2 = Sequential()
          model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
          model2.add(LSTM(lstm_units))
          model2.add(Dense(1, activation='sigmoid'))
          model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
          print(model2.summary())
          model2.fit(Xpad, y, epochs=50, batch_size=32)


          Extra notes



          1. Note that if we pad without masking, padded value will be regarded as actual value, thus, it becomes noise in data. For example, a padded temperature sequence [20, 21, 22, -10, -10] will be the same as a sensor report with two noisy (wrong) measurements at the end. Model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.





          share|improve this answer











          $endgroup$



          The easiest way is to use Padding and Masking.



          There are three general ways to handle variable-length sequences:



          1. Padding and masking (which can be used for (3)),

          2. Batch size = 1, and

          3. Batch size > 1, with equi-length samples in each batch.

          Padding and masking



          In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then



X = [
    [[1, 1.1],
     [0.9, 0.95]],    # sequence 1 (2 timestamps)

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],    # sequence 2 (3 timestamps)
]


          will be converted to



X2 = [
    [[1, 1.1],
     [0.9, 0.95],
     [-10, -10]],     # padded sequence 1 (3 timestamps)

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],    # sequence 2 (3 timestamps)
]


          This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they don't exist. A complete example is given at the end.
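The conversion above can be sketched in plain NumPy (the special value -10 comes from the example; any value that cannot occur in the real data works):

```python
import numpy as np

# Ragged input: two sequences of 2 and 3 timestamps, each of dimension 2
X = [
    np.array([[1, 1.1], [0.9, 0.95]]),               # sequence 1 (2 timestamps)
    np.array([[2, 2.2], [1.9, 1.95], [1.8, 1.85]]),  # sequence 2 (3 timestamps)
]

special_value = -10.0
max_len = max(x.shape[0] for x in X)
dimension = X[0].shape[1]

# Pre-fill with the special value, then copy each sequence in
X2 = np.full((len(X), max_len, dimension), special_value)
for i, x in enumerate(X):
    X2[i, :x.shape[0], :] = x

print(X2)  # the last timestamp of sequence 1 is [-10, -10]
```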



          For cases (2) and (3) you need to set the seq_len of LSTM to None, e.g.



          model.add(LSTM(units, input_shape=(None, dimension)))


          this way, the LSTM accepts batches of different lengths, although the samples within each batch must have the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit).



          I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timestamps, e.g.



          model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
          model.add(LSTM(lstm_units))


          where the first dimension of input_shape in Masking is again None to allow batches with different lengths.



          Here is the code for cases (1) and (2):



from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np


class MyBatchGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y)/self.batch_size))

    def __getitem__(self, index):
        return self.__data_generation(index)

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb


# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9
seq_lens = np.random.randint(1, 10, halfN)
# dtype=object is needed because the sequences have different lengths
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)


          Extra notes



          1. Note that if we pad without masking, the padded value will be treated as an actual value, and thus it becomes noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] is indistinguishable from a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. to use a mask.
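A small NumPy sketch makes this note concrete: any statistic computed over the padded sequence is skewed unless the padded timestamps are masked out, which is what the Masking layer does for the LSTM.

```python
import numpy as np

special_value = -10.0
padded = np.array([20.0, 21.0, 22.0, special_value, special_value])

# Naive mean treats the padding as real measurements
naive_mean = padded.mean()        # (20+21+22-10-10)/5 = 8.6, badly skewed

# Masked mean ignores the padded timestamps, as a Masking layer would
mask = padded != special_value
masked_mean = padded[mask].mean() # (20+21+22)/3 = 21.0, the true average

print(naive_mean, masked_mean)
```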






          edited Apr 7 at 23:08

























          answered Apr 7 at 11:18









          Esmailian











          • Thank you very much Esmailian for your complete example. Just one question: what is the difference between using padding+masking and only using padding (like the other answer suggested)? Will we see a considerable effect on the final result? – user145959, Apr 7 at 21:01










          • @user145959 my pleasure! I added a note at the end. – Esmailian, Apr 7 at 23:13










          • Wow, a great answer! It's called bucketing, right? – Aditya, Apr 8 at 3:39







          • @Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but the sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points. – Esmailian, Apr 8 at 11:23
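Grouping independent sequences by length, as option (a) for case (3) describes, can be sketched as follows (the dataset and names here are illustrative, not from the answer's code):

```python
from collections import defaultdict
import numpy as np

# Toy ragged dataset: six sequences of varying length, dimension 2
rng = np.random.default_rng(0)
sequences = [rng.normal(size=(length, 2)) for length in [3, 5, 3, 2, 5, 3]]

# Group sequence indices by length so each batch is equi-length
by_length = defaultdict(list)
for i, s in enumerate(sequences):
    by_length[s.shape[0]].append(i)

# Each group stacks into a regular (batch, seq_len, dimension) array,
# which an LSTM with input_shape=(None, dimension) accepts batch by batch
batches = {L: np.stack([sequences[i] for i in idx]) for L, idx in by_length.items()}
for L, b in sorted(batches.items()):
    print(L, b.shape)
```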




























          We use LSTM layers with inputs of different sizes, but you need to process them before they are fed to the LSTM.



          Padding the sequences:



          You need to pad the sequences of varying length to a fixed length. For this preprocessing, you need to determine the max length of sequences in your dataset.



          The values are mostly padded with 0. You can do this in Keras with:



          y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )


          • If the sequence is shorter than the max length, zeros will be added until it has a length equal to the max length (note that pad_sequences pads at the beginning by default; pass padding='post' to append zeros at the end instead).


          • If the sequence is longer than the max length, the sequence will be trimmed to the max length (by default from the beginning; pass truncating='post' to trim the end).
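The pad/trim behaviour can be illustrated with a plain-NumPy equivalent (post-padding and post-truncating shown here for readability; the real pad_sequences defaults to 'pre' for both):

```python
import numpy as np

def pad_to(seqs, maxlen, value=0):
    """Pad (at the end) or trim (at the end) each 1-D sequence to maxlen.
    Mimics keras.preprocessing.sequence.pad_sequences(..., padding='post',
    truncating='post')."""
    out = np.full((len(seqs), maxlen), value)
    for i, s in enumerate(seqs):
        trimmed = s[:maxlen]          # trim sequences longer than maxlen
        out[i, :len(trimmed)] = trimmed  # shorter ones keep trailing zeros
    return out

x = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10, 11]]
y = pad_to(x, maxlen=5)
print(y)
# [[ 1  2  3  0  0]
#  [ 4  5  0  0  0]
#  [ 6  7  8  9 10]]
```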






          • Padding everything to a fixed length is a waste of space. – Aditya, Apr 8 at 3:39


















          answered Apr 7 at 10:57









          Shubham Panchal






