Training an LSTM on a time series containing multiple inputs for each timestep
I am trying to train an LSTM to use it for forecasting: the problem is essentially a multivariate, multi-step time series problem.
It is simply an experiment to see how statistical models (ARIMA, Holt-Winters, ...) and neural networks compare on a given problem.
While my dataset is well suited to a statistical model, I am having trouble formatting it to train the LSTM, because I have multiple entries for a single timestep (corresponding to different entities) and I don't really know how to deal with that, since the sequence is no longer indexed solely by the time of observation. Say my dataset looks like the following example:
time | ent | obs
-----|-----|----
   1 |   1 |   5
   2 |   1 |   6
   2 |   5 |   1
   3 |   2 |   7
   3 |   5 |   4
As you can see, not every entity has an entry at every timestep, and a single timestep can have multiple entries.
I thought of training one LSTM per entity, but I would have too little data for most of them. Some threads gave me the idea of putting each entity into its own batch, but the number of observations per entity is not constant, so that wouldn't work for me.
How do you think I should tackle this problem?
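For concreteness, here is a minimal pandas sketch of the toy example above (the column names time/ent/obs are taken straight from the table); pivoting to one row per timestep and one column per entity makes the difficulty visible as NaNs:

```python
import pandas as pd

# The toy dataset from the table above: one row per (time, entity) observation.
df = pd.DataFrame({
    "time": [1, 2, 2, 3, 3],
    "ent":  [1, 1, 5, 2, 5],
    "obs":  [5, 6, 1, 7, 4],
})

# One row per timestep, one column per entity; missing (time, entity) pairs
# become NaN, which is exactly what makes it hard to feed the data into an
# LSTM that expects a dense (samples, timesteps, features) array.
panel = df.pivot(index="time", columns="ent", values="obs")
print(panel)
# ent     1    2    5
# time
# 1     5.0  NaN  NaN
# 2     6.0  NaN  1.0
# 3     NaN  7.0  4.0
```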
time-series lstm preprocessing forecasting
asked Apr 6 at 20:20 by naifmeh
1 Answer
The answer to this question depends heavily on which relationship between the variables you are interested in.
If you are interested in the relationship between time and observation value, treating the entities as different batches could make sense, under the assumption that the role of individual entities doesn't really matter to you. In that case you would, for example, fill each entity's missing values with that entity's mean (or the overall mean) to get a constant number of observations per entity. But you could also simply average all values at each timestamp and include additional features such as the min and max; this would most probably give better results.
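A minimal sketch of that aggregation idea, using the toy long-format data from the question (the window length and choice of features are arbitrary): collapse each timestep into mean/min/max features across entities, then slice the result into the fixed-length (samples, timesteps, features) windows an LSTM layer expects.

```python
import numpy as np
import pandas as pd

# Same long-format toy data as in the question: one row per (time, entity) pair.
df = pd.DataFrame({"time": [1, 2, 2, 3, 3],
                   "ent":  [1, 1, 5, 2, 5],
                   "obs":  [5, 6, 1, 7, 4]})

# Summary features per timestep across all entities.
agg = df.groupby("time")["obs"].agg(["mean", "min", "max"]).sort_index()

# Fixed-length input windows with a one-step-ahead target on the mean,
# giving the (samples, timesteps, features) shape an LSTM layer expects.
window = 2
X = np.stack([agg.values[i:i + window] for i in range(len(agg) - window)])
y = agg["mean"].values[window:]
print(X.shape, y.shape)  # (1, 2, 3) and (1,) for this tiny example
```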
If you are interested in the relationship between entities and observation value, this is a matter of missing data in time series. There are many techniques that can help with that, from simply imputing the mean to more sophisticated methods such as a Kalman filter. In the end, however, you will have to ask yourself why these observations are missing and choose an appropriate method accordingly. Since you are using time-dependent models in your experiment, I assume this is not what interests you.
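As a small illustration of the simpler end of that spectrum (a Kalman smoother would need a package such as statsmodels or pykalman), here is what mean imputation or linear interpolation could look like on the pivoted time-by-entity matrix from the question:

```python
import pandas as pd

# Pivoted toy data: rows = timesteps, columns = entities, NaN = missing.
panel = pd.DataFrame({"time": [1, 2, 2, 3, 3],
                      "ent":  [1, 1, 5, 2, 5],
                      "obs":  [5, 6, 1, 7, 4]}
                     ).pivot(index="time", columns="ent", values="obs")

# Simplest option: fill each entity's gaps with that entity's own mean.
mean_imputed = panel.fillna(panel.mean())

# A slightly more time-aware option: interpolate along the time axis,
# then fill any remaining leading/trailing gaps.
interpolated = panel.interpolate(method="linear").ffill().bfill()
```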
If you are interested in the interrelationship of all three variables, you are dealing with panel data. In that case I don't see a reasonable way to model this with an LSTM. Maybe another RNN architecture could work; however, the only paper I found was "Tensorial Recurrent Neural Networks for Longitudinal Data Analysis" by Mingyuan et al. In the end it would not matter, since an ARIMA model isn't appropriate for panel data either; you would usually use a difference-in-differences approach for that kind of data. In this case I would suggest changing the dataset for your experiment.
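For reference only: a difference-in-differences estimate usually comes down to an interaction term in a regression. The sketch below uses statsmodels with an entirely hypothetical outcome and treated/post indicators, since the question's data has no natural treatment group.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: outcome y for a treated and a control group,
# observed before (post=0) and after (post=1) some intervention.
panel_df = pd.DataFrame({
    "y":       [5, 6, 9, 10, 4, 5, 5, 6],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
})

# The coefficient on the interaction term is the DiD estimate.
did = smf.ols("y ~ treated * post", data=panel_df).fit()
print(did.params["treated:post"])
```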
answered Apr 6 at 23:14, edited Apr 7 at 0:01 by georg_un
Thanks for your answer! The observations in my data are actually sales, and I also have some exogenous data alongside them, so I'm not sure I could simply impute the missing data for a batch: some of the entities (stores) may have opened at a later date and have little to no history available. So my interest lies in the interrelationship of all the variables I have. The paper you linked to looks interesting; its implementation doesn't seem easy, but it's worth a try :) – naifmeh, Apr 7 at 10:08
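On the variable-length-history point raised in this comment: one common Keras pattern is to pad each entity's sequence to a common length and let a Masking layer skip the padded steps. This is only a sketch with made-up sequences, and the mask value must be one that cannot occur in the real data.

```python
import numpy as np
import tensorflow as tf

# Made-up per-entity histories of different lengths (e.g. stores that opened
# at different dates), each timestep holding a single observation.
sequences = [np.array([[5.0], [6.0]]),
             np.array([[1.0], [4.0]]),
             np.array([[7.0]])]

# Pad the shorter histories at the front with a sentinel value.
padded = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, padding="pre", value=-1.0, dtype="float32")  # shape (3, 2, 1)

# The Masking layer makes the LSTM ignore the padded timesteps.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=-1.0, input_shape=(padded.shape[1], 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```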