Comparison between addition and multiplication function in deep neural network?
I designed a convolutional neural network for an image-processing task. At several points in the network, two tensors have to be combined into a single tensor before being fed to the next layer, and there are several ways to do this, such as element-wise addition, element-wise multiplication, or concatenation. The results are slightly better when I use addition in the pyramid pooling module (shown in the second image, between two convolutions) and multiplication in the last step of the network. I used tf.math.add and tf.math.multiply, which perform these operations element-wise. The whole network is shown in the first image; the second image shows the pyramid pooling module, which combines several image scales.

I would like to understand the properties of addition and multiplication as merge operations in a deep neural network. The question is:

Why does addition (between conv1 and conv2) give better final performance in accuracy (precision) and mean Intersection over Union (mIoU) than multiplication and concatenation when I merge two tensors into one?
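For concreteness, here is a minimal sketch of the three merge options in TensorFlow; the tensor shapes and the 1x1 projection after concatenation are illustrative assumptions, not the exact layers of my network:

    import tensorflow as tf

    # Two feature maps of the same shape, e.g. the outputs of conv1 and conv2
    # (batch, height, width, channels) -- illustrative shape only.
    a = tf.random.normal((1, 32, 32, 64))
    b = tf.random.normal((1, 32, 32, 64))

    merged_add = tf.math.add(a, b)            # element-wise sum, shape unchanged
    merged_mul = tf.math.multiply(a, b)       # element-wise product, shape unchanged
    merged_cat = tf.concat([a, b], axis=-1)   # channel concatenation, channels doubled

    # Concatenation doubles the channel dimension, so a 1x1 convolution is often
    # used afterwards to project back to the original channel count.
    merged_cat = tf.keras.layers.Conv2D(64, kernel_size=1)(merged_cat)

    print(merged_add.shape, merged_mul.shape, merged_cat.shape)

Addition and multiplication keep the channel count, while concatenation doubles it; multiplication in particular acts like a gating of one feature map by the other.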
neural-network deep-learning convolution math image-preprocessing
asked Feb 16 at 9:24 by amir Maleki, edited Feb 26 at 7:38
Please ask only one question per post. Also, it's unclear what you mean by "the most important features of addition vs multiplication", and it's not clear how you are using addition or multiplication, so I don't think any of these questions are answerable in their current form. If you can edit your question to address this feedback, I encourage you to do so.
– D.W., Feb 16 at 21:46
I don't quite understand this, but typically you're using dense layers for non-linear transformations. If all it does is sum combinations of inputs, it's a linear transformation. That almost surely defeats the purpose of what you're using it for.
– Sean Owen♦, Feb 17 at 0:56
Dear @SeanOwen, I explained that in a specific part of the network there are two tensors which have to be unified before feeding into the next layer; in this case there are several choices. One of these choices is a basic mathematical operation such as addition or multiplication. We performed several experiments with each of these operations. I have changed the question and made it narrower. Could you look at it again?
– amir Maleki, Feb 17 at 8:14
Dear @D.W., I have changed the question. I used the basic addition provided by TensorFlow, tf.math.add, which returns a + b element-wise (a and b are tensors of equal shape), and for multiplication I used tf.math.multiply.
– amir Maleki, Feb 17 at 8:26
What do you mean by 'unify'? There is no general answer to this. Which operation you use depends on what you are trying to do and, practically, which one works better. If you mean to add things, you add them.
– Sean Owen♦, Feb 17 at 15:36
1 Answer
The observation you report is very interesting, since concatenation and addition are practically the same. A nice explanation can be found at https://distill.pub/2018/feature-wise-transformations/.
– Andreas Look, answered Feb 19 at 7:39
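A minimal sketch of why addition and concatenation are so close, assuming the merged tensor is followed by a learned 1x1 convolution (a common pattern in modules like pyramid pooling): a linear layer applied to the concatenation of a and b can express any weighted combination W1*a + W2*b, and plain addition is the special case W1 = W2, so a network that concatenates can always learn to imitate addition. The shapes below are illustrative, not taken from the network in the question:

    import tensorflow as tf

    tf.random.set_seed(0)
    a = tf.random.normal((1, 8, 8, 4))
    b = tf.random.normal((1, 8, 8, 4))

    # A single 1x1 convolution applied to the concatenation of a and b ...
    w = tf.random.normal((1, 1, 8, 4))  # kernel over the 2*4 concatenated channels
    cat_out = tf.nn.conv2d(tf.concat([a, b], axis=-1), w, strides=1, padding="SAME")

    # ... equals the sum of two 1x1 convolutions applied to a and b separately,
    # with the kernel split along its input-channel axis.
    w_a, w_b = w[:, :, :4, :], w[:, :, 4:, :]
    add_out = (tf.nn.conv2d(a, w_a, strides=1, padding="SAME")
               + tf.nn.conv2d(b, w_b, strides=1, padding="SAME"))

    print(float(tf.reduce_max(tf.abs(cat_out - add_out))))  # ~0 up to float error

Element-wise multiplication is not covered by this equivalence: it is nonlinear in the inputs and behaves more like a gating of one feature map by the other, which may explain why it performs differently.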
Dear @andreas-look, I read the link, but I do not understand why you consider concatenation and addition equal. Apart from that, your answer is acceptable, and the link you sent works for me. Could you explain or edit that part of the answer so that I can accept it here?
– amir Maleki, Feb 26 at 7:43