
Architecture for multivariate multi-time-series model where some features are TS specific and some features are global


I'm looking to build a time series model (using a TCN or an LSTM) with $N$ different series, each of which has $P$ series-specific features $\mathbf{X}$. My input array is of dimension $N \times t \times P$, where $t$ is the number of time steps.



I've also got features $G$ that are constant across all time series. For concreteness, imagine I'm predicting city-level ice cream sales with weather data, and I also want to use GDP growth as a predictor; GDP growth is national. A simple approach would be to augment $\mathbf{X}$ with $G$, adding 1 to the dimension of $P$. But then my forecast output for the next period would be $N \times 1 \times (P+1)$, which is no good: it produces a separate GDP forecast for each city, when in reality GDP growth is common across cities (when measured nationally). I suppose that I want two outputs -- one of shape $N \times 1 \times P$, and the other of shape $1 \times 1 \times 1$, a scalar (if $G$ is of dimension $t \times 1$).
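To make the shapes concrete, here is a toy version of the layout I have in mind (the numbers are arbitrary placeholders):

import numpy as np

N, t, P = 10, 100, 3                    # 10 cities, 100 time steps, 3 city-specific features
X = np.random.normal(size=(N, t, P))    # local, series-specific features
G = np.random.normal(size=(t, 1))       # global feature (e.g. GDP growth), shared by all series

# Desired one-step-ahead outputs:
#   local:  shape (N, 1, P)  -- one forecast per city
#   global: shape (1, 1, 1)  -- a single shared forecast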



Here's a dummy example in which time plays the role of the global variable: it is constant across all series. (Let's assume for the moment that time isn't exogenous, but rather something to include in the multivariate forecast.)



import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Conv1D, Dense
from keras.optimizers import Adam

# Two noisy sine waves with a common level shift between t = 41 and t = 59
time = np.array(range(100))
brk = np.array((time > 40) & (time < 60)).reshape(100, 1)
B = np.array([5, -5]).reshape(1, 2)
y = np.c_[np.sin(time), np.sin(time)] + np.random.normal(scale=.2, size=(100, 2)) + np.dot(brk, B)

plt.plot(time, y[:, 0])
plt.plot(time, y[:, 1])

# Temporal convolutional network
n_filters = 2
filter_width = 3
dilation_rates = [2**i for i in range(5)]
inp = Input(shape=(None, 2))
x = inp
for dilation_rate in dilation_rates:
    x = Conv1D(filters=n_filters,
               kernel_size=filter_width,
               padding='causal',
               activation='relu',
               dilation_rate=dilation_rate)(x)
x = Dense(2)(x)

model = Model(inputs=inp, outputs=x)
model.compile(optimizer=Adam(), loss='mean_squared_error')
model.summary()

def shift5(arr, num, fill_value=np.nan):
    """Shift arr by num positions along the first axis, padding with fill_value."""
    result = np.empty_like(arr)
    if num > 0:
        result[:num] = fill_value
        result[num:] = arr[:-num]
    elif num < 0:
        result[num:] = fill_value
        result[:num] = arr[-num:]
    else:
        result = arr
    return result

# Arrange the data as (series, time, feature) and append time as a second (global) feature
X = y.T.reshape(2, 100, 1)
X = np.concatenate([X, np.stack([time, time]).reshape(2, 100, 1)], axis=2)
X_tr = X[:, :95, :]   # inputs: first 95 steps
X_te = X[:, 5:, :]    # targets: the same series shifted 5 steps ahead

history = model.fit(X_tr, X_te,
                    batch_size=2,
                    epochs=10,
                    verbose=1)


How would I modify this architecture to have two inputs and two outputs, with both input and output having local and global components?
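For concreteness, here is roughly the kind of wiring I imagine with the Keras functional API (purely a sketch -- the layer sizes, the pooling step, and the way the global series enters are all guesses on my part):

from keras.models import Model
from keras.layers import Input, Conv1D, Dense, Concatenate, GlobalAveragePooling1D
from keras.optimizers import Adam

P = 3   # number of series-specific features (placeholder)

local_in = Input(shape=(None, P), name='local_in')     # per-series features: (N, t, P)
global_in = Input(shape=(None, 1), name='global_in')   # shared series, repeated for each sample: (N, t, 1)

z = Concatenate(axis=-1)([local_in, global_in])
for dilation_rate in [2**i for i in range(5)]:
    z = Conv1D(filters=16, kernel_size=3, padding='causal',
               activation='relu', dilation_rate=dilation_rate)(z)

local_out = Dense(P, name='local_out')(z)                               # per-series forecasts
global_out = Dense(1, name='global_out')(GlobalAveragePooling1D()(z))   # one value per sample, not yet a single shared forecast

model = Model(inputs=[local_in, global_in], outputs=[local_out, global_out])
model.compile(optimizer=Adam(), loss='mean_squared_error', loss_weights=[1.0, 1.0])

But this still predicts the global quantity once per series, which is exactly the redundancy I'd like to avoid.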










      neural-network deep-learning keras time-series multitask-learning






asked Mar 9 at 18:15

generic_user
1 Answer







A stacked LSTM is one option in this scenario.



[architecture diagram: stacked LSTMs for the two time-series inputs plus a static city-feature branch]



This assumes that the first two LSTMs run on series with different frequencies and that the city branch holds static features (like lat/long, a one-hot-encoded city ID, etc.). If the city features are themselves time series (e.g. population or mean income over time), that branch would be an LSTM as well.



Code example for a stacked LSTM: https://machinelearningmastery.com/stacked-long-short-term-memory-networks/
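A minimal sketch of one way to read the diagram above, in Keras (layer sizes and input shapes are made-up placeholders; the static city branch is a plain dense input here and would become a third LSTM if those features are themselves time series):

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Concatenate

t_city, P = 95, 3        # time steps and per-city features (placeholders)
t_gdp = 95               # time steps of the shared GDP series
n_static = 4             # static city features (lat/long, one-hot city ID, ...)

city_series = Input(shape=(t_city, P), name='city_series')
gdp_series = Input(shape=(t_gdp, 1), name='gdp_series')
city_static = Input(shape=(n_static,), name='city_static')

h_city = LSTM(32)(city_series)    # summary of the city's own history
h_gdp = LSTM(8)(gdp_series)       # summary of the shared GDP history
h = Concatenate()([h_city, h_gdp, city_static])

city_out = Dense(P, name='city_out')(h)   # per-city forecast
gdp_out = Dense(1, name='gdp_out')(h)     # GDP forecast head

model = Model(inputs=[city_series, gdp_series, city_static],
              outputs=[city_out, gdp_out])
model.compile(optimizer='adam', loss='mean_squared_error')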






answered Mar 10 at 5:15

Shamit Verma
