Keras + Tensorflow CNN with multiple image inputs
I have a CNN that needs to take in 68 images that are all 59x59 pixels. The CNN should output 136 values on the output layer.

My training data has shape (-1, 68, 59, 59, 1).

My current approach is to use concatenate to join multiple networks, like so:

input_layer = [None] * 68
x = [None] * 68
for i in range(68):
    input_layer[i] = tf.keras.layers.Input(shape=training_data.shape[1:][1:])
    x[i] = Conv2D(64, (5,5))(input_layer[i])
    x[i] = LeakyReLU(alpha=0.3)(x[i])
    x[i] = MaxPooling2D(pool_size=(2,2))(x[i])
    x[i] = Model(inputs=input_layer[i], outputs=x[i])
combined = concatenate(x)

However, this always gives the error:

ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs

Is this a suitable approach, or am I doing this completely wrong?

keras tensorflow cnn
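For reference, the loop above can be made to run by concatenating the branch tensors x[i] rather than the per-branch Model objects. A minimal sketch under stated assumptions: the stand-in random data, the GlobalAveragePooling2D/Dense(136) head, and LeakyReLU with its default slope are additions to keep the example small and runnable; they are not from the original post.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, LeakyReLU, MaxPooling2D,
                                     concatenate, GlobalAveragePooling2D, Dense)
from tensorflow.keras.models import Model

# Stand-in training data with the question's shape: (batch, 68, 59, 59, 1)
training_data = np.random.rand(2, 68, 59, 59, 1).astype("float32")

input_layer = [None] * 68
x = [None] * 68
for i in range(68):
    input_layer[i] = Input(shape=training_data.shape[2:])  # (59, 59, 1)
    x[i] = Conv2D(64, (5, 5))(input_layer[i])
    x[i] = LeakyReLU()(x[i])
    x[i] = MaxPooling2D(pool_size=(2, 2))(x[i])
    # keep x[i] as a tensor here -- do NOT wrap it in Model()

combined = concatenate(x)  # a list of 68 tensors, not 68 Models
out = Dense(136)(GlobalAveragePooling2D()(combined))
model = Model(inputs=input_layer, outputs=out)

# A multi-input model takes a list of 68 arrays, one per input branch.
pred = model.predict([training_data[:, i] for i in range(68)], verbose=0)
print(pred.shape)  # (2, 136)
```

Even when it runs, a 68-branch model is unwieldy; the accepted answer below argues for a different data layout instead.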
Isn't this: shape=training_data.shape[1:][1:] the same for each loop?
– Stephen Rauch♦, Apr 7 at 23:53
edited Apr 7 at 23:52 by Stephen Rauch♦ · asked Apr 7 at 22:11 by Charley Pearce
1 Answer
Yes, it is wrong: each (68, 59, 59) input should go through one model, not an array of them.

You can treat each of the 68 images as a channel. For this, squeeze your data axes from (-1, 68, 59, 59, 1) to (-1, 68, 59, 59) to get a 59x59 image with 68 channels, corresponding to Input((68, 59, 59)), and set data_format='channels_first' in Conv2D so the layer knows the channels are in the first dimension (by default it expects them in the last). This is analogous to an RGB image, which has 3 channels corresponding to Input((59, 59, 3)). The rest stays the same.

If the 68 images are consecutive frames from a movie, you can use Conv3D to also extract motion patterns across neighboring frames; this is done with 3D kernels instead of 2D kernels. It requires the (-1, 68, 59, 59, 1) data shape, corresponding to Input((68, 59, 59, 1)). Here you should use the default data_format='channels_last', since there is now a single channel in the last dimension. Commonly, the temporal axis is placed third, i.e. (-1, 59, 59, 68, 1), which can be accomplished by moving the axes.
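A minimal runnable sketch of both layouts described above. Assumptions not in the original answer: random stand-in data, a GlobalAveragePooling + Dense(136) head, and LeakyReLU with its default slope; in the Conv2D case the channel axis is moved last so the default data_format applies, which is equivalent to the channels_first variant the answer describes.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Stand-in batch with the question's shape: (batch, 68, 59, 59, 1)
X = np.random.rand(4, 68, 59, 59, 1).astype("float32")

# --- Conv2D: treat the 68 images as channels ---
# Squeeze (batch, 68, 59, 59, 1) -> (batch, 68, 59, 59), then move the
# channel axis last -> (batch, 59, 59, 68). Equivalently, keep channels
# first and pass data_format='channels_first' to Conv2D/MaxPooling2D.
X2d = np.moveaxis(np.squeeze(X, axis=-1), 1, -1)

inp = layers.Input(shape=(59, 59, 68))
h = layers.Conv2D(64, (5, 5))(inp)
h = layers.LeakyReLU()(h)
h = layers.MaxPooling2D(pool_size=(2, 2))(h)
h = layers.Dense(136)(layers.GlobalAveragePooling2D()(h))
model2d = Model(inp, h)
pred2d = model2d.predict(X2d, verbose=0)
print(pred2d.shape)  # (4, 136)

# --- Conv3D: treat the 68 images as a temporal axis, placed third ---
X3d = np.moveaxis(X, 1, 3)                 # (batch, 59, 59, 68, 1)
inp3 = layers.Input(shape=(59, 59, 68, 1))
g = layers.Conv3D(8, (3, 3, 3))(inp3)      # 3D kernels across space + time
g = layers.Dense(136)(layers.GlobalAveragePooling3D()(g))
model3d = Model(inp3, g)
pred3d = model3d.predict(X3d, verbose=0)
print(pred3d.shape)  # (4, 136)
```

Both models map a batch of 68-image samples to the 136 output values the question asks for; only the axis layout and kernel dimensionality differ.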
edited Apr 8 at 16:28 · answered Apr 8 at 0:39 by Esmailian