Convolution and Deconvolution of an Image with a Filter
I tried to deconvolve the result of convolving a 100x100x3 image (height, width, number of channels) with a 3x3x3x5 filter (filter height, filter width, number of channels, number of filters) and then adding a 1x1x1x5 bias. The convolution works well, but the deconvolution does not.
Can anyone explain more about how to do deconvolution with NumPy?
Here is my code (the conv_opertation helper it calls is sketched after the first function):
import numpy as np

def conv(img, W, b, s, p):
    """
    Compute the convolution between an image and a filter and store the result in an array (feature_map).
    Arguments:
    img -- input image, array of shape (h, w, number_of_channels)
    W -- filter, array of shape (f, f, number_of_channels, c)
    b -- bias, array of shape (1, 1, 1, c)
    s -- stride
    p -- padding
    Returns:
    feature_map -- the result of convolving the image with the filter, array of shape (n_h, n_w, c)
    """
    # Image dimensions
    (h, w, number_of_channels) = img.shape
    # Filter dimensions
    (f, f, number_of_channels, c) = W.shape
    # Compute the dimensions of the conv output volume (int() floors the division)
    n_h = int((h - f + 2 * p) / s) + 1
    n_w = int((w - f + 2 * p) / s) + 1
    # Create the initial feature map
    feature_map = np.zeros((n_h, n_w, c))
    # Add padding to the input
    img_pad = np.pad(img, ((p, p), (p, p), (0, 0)), 'constant', constant_values=0)
    # Loop over the vertical axis of the output
    for h_i in range(n_h):
        # Loop over the horizontal axis of the output
        for w_i in range(n_w):
            # Loop over the filter layers (output channels)
            for c_i in range(c):
                # Define the image region that will be convolved with the filter
                img_region = img_pad[h_i*s:h_i*s + f, w_i*s:w_i*s + f, :]
                # Convolve the image region with the filter W and bias b to get one feature map value
                feature_map[h_i, w_i, c_i] = conv_opertation(img_region, W[..., c_i], b[..., c_i])
                # ReLU activation: keep the value if it is larger than 0
                feature_map[h_i, w_i, c_i] = np.max([feature_map[h_i, w_i, c_i], 0])
    assert feature_map.shape == (n_h, n_w, c)
    return feature_map
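The conv_opertation helper is not included above; it just multiplies the region element-wise with one filter slice, sums the result, and adds the bias, roughly like this minimal sketch (the signature shown here is only illustrative):

def conv_opertation(region, W_slice, bias):
    # Minimal sketch of the assumed helper: element-wise product of the region
    # with one filter slice, summed up, plus the (single-element) bias.
    return np.sum(region * W_slice) + np.sum(bias)

The deconvolution I tried looks like this: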
def deConv(feature_map, W, b, s, p):
    '''
    Compute the deconvolution between the feature map and the filter.
    Arguments:
    feature_map -- the result of convolving the image with the filter W, array of shape (n_h, n_w, c)
    W -- filter, array of shape (f, f, number_of_channels, c)
    b -- bias, array of shape (1, 1, 1, c)
    s -- stride
    p -- padding
    Returns:
    image -- reconstructed image, array of shape (h, w, number_of_channels)
    '''
    # Filter dimensions
    (f, f, number_of_channels, c) = W.shape
    # Add padding to the feature map
    feature_map_pad = np.pad(feature_map, ((p, p), (p, p), (0, 0)), 'constant', constant_values=0)
    # Dimensions of the padded feature map
    (n_h, n_w, c) = feature_map_pad.shape
    # Dimensions of the new image
    new_h = int((n_h - f + 2 * p) / s) - 1
    new_w = int((n_w - f + 2 * p) / s) - 1
    # Create the initial new image
    image = np.zeros((new_h, new_w, number_of_channels))
    # Loop over the vertical axis of the output
    for h_i in range(new_h):
        # Loop over the horizontal axis of the output
        for w_i in range(new_w):
            # Loop over the filter output channels
            for c_i in range(c):
                # Define the feature map region that will be convolved with the filter
                feature_map_region = feature_map_pad[h_i*s:h_i*s + f, w_i*s:w_i*s + f, c_i]
                # Loop over the image channels
                for i in range(number_of_channels):
                    # Convolve the feature map region with the filter W and bias b to get the image
                    image[h_i, w_i, i] = conv_opertation(feature_map_region, W[:, :, i, c_i], b[:, :, 0, c_i])
                # ReLU activation: keep the value if it is larger than 0
                image[h_i, w_i, c_i] = np.max([image[h_i, w_i, c_i], 0])
    assert image.shape == (new_h, new_w, number_of_channels)
    return image
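For comparison, my understanding is that a transposed convolution (deconvolution) should map a feature map of height n_h back to (n_h - 1) * s + f - 2 * p, i.e. from 98 back to 100 for my example (f = 3, s = 1, p = 0), whereas deConv above computes new_h = int((n_h - f + 2 * p) / s) - 1, which gives a much smaller image. A quick sanity check of the two formulas (just my own check, not part of the functions above):

h, f, s, p = 100, 3, 1, 0
n_h = (h - f + 2 * p) // s + 1        # forward convolution output height: 98
h_back = (n_h - 1) * s + f - 2 * p    # expected deconvolution output height: 100
print(n_h, h_back)                    # prints: 98 100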
Tags: python, neural-network, convolution, autoencoder, numpy