How to improve accuracy of variational autoencoder for coordinate data?

Related: Latent loss in variational autoencoder drowns generative loss · How to understand log-likelihood for generative image model? · What is an intuitive explanation for the Importance Weighted Autoencoder?
My data x_data has shape (300000, 42): each row contains 42 features (21 x, y coordinate pairs), for example:
[297.425 341.30002 280.1 295.625 275.375 240.5 287.975 213.725 294.275 186.95 332.07498 254.675 355.69998 215.3 380.9 201.125 402.94998 188.52501 357.275 268.85 391.925 234.20001 412.4 215.3 432.875 202.7 380.9 287.75 410.82498 259.4 432.875 238.925 450.2 224.75 391.925 306.65 428.15 290.9 448.625 272 469.1 254.675
]
I want to generate new data with a variational autoencoder. I add some noise to the input data and feed it to the network as vectors of shape (1, 42). After training, when I run noisy data through the model, the reconstruction is poor and the network cannot recover the correct x, y coordinates.
What should I do to improve the accuracy? Thanks for your support.
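For context, here is a minimal sketch of the preprocessing I am describing. The min-max scaling to [0, 1] and the noise level are illustrative choices rather than my exact settings, and x_data below is a random stand-in for the real array:

import numpy as np

# Random stand-in for the real (300000, 42) array of x, y coordinates.
x_data = np.random.uniform(180.0, 470.0, size=(300000, 42)).astype('float32')

# Scale each feature to [0, 1] so the MSE reconstruction term is not dominated
# by the raw pixel-scale magnitudes (one common choice of scaler).
x_min = x_data.min(axis=0)
x_max = x_data.max(axis=0)
x_scaled = (x_data - x_min) / (x_max - x_min)

# Additive Gaussian noise for the noisy inputs (the 0.01 standard deviation is illustrative).
rng = np.random.default_rng(0)
x_noisy = x_scaled + rng.normal(0.0, 0.01, size=x_scaled.shape).astype('float32')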
Edit:
I have added my VAE code (Keras) below.
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import metrics

original_dim = 42   # 42 features per row (21 x, y pairs)
latent_dim = 20

# Reparameterisation trick: z = mu + sigma * epsilon, with epsilon ~ N(0, I)
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=1.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

# Encoder
x = Input(shape=(original_dim,))
h = Dense(int(original_dim / 2), activation='relu')(x)
hh = Dense(int(original_dim / 2), activation='relu')(h)
z_mean = Dense(latent_dim)(hh)
z_log_var = Dense(latent_dim)(hh)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# Decoder (layers kept as objects so they can be reused for generation)
decoder_h = Dense(int(original_dim / 2), activation='relu')
decoder_hh = Dense(int(original_dim / 2), activation='relu')
decoder_mean = Dense(original_dim, activation=None)   # linear output for coordinates
h_decoded = decoder_h(z)
hh_decoded = decoder_hh(h_decoded)
x_decoded_mean = decoder_mean(hh_decoded)

autoencoder = Model(x, x_decoded_mean)

# MSE reconstruction term plus a down-weighted KL divergence term
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * metrics.mse(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var),
                           axis=-1)
    return K.mean(xent_loss + 0.005 * kl_loss)
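For completeness, this is roughly how I compile, train, and then sample from the model. It is only a sketch: the optimizer, epoch count, batch size, and the choice of reconstructing the clean scaled coordinates from the noisy inputs are illustrative assumptions, and x_noisy, x_scaled, x_min, x_max come from the preprocessing sketch above. (With older Keras/TF1 versions this closure-style loss can be passed to compile directly; newer versions may require model.add_loss instead.)

import numpy as np

# Train on noisy inputs, reconstructing the clean (scaled) coordinates.
autoencoder.compile(optimizer='adam', loss=vae_loss)
autoencoder.fit(x_noisy, x_scaled,
                epochs=50,            # illustrative
                batch_size=128,       # illustrative
                validation_split=0.1)

# Reuse the decoder layers as a standalone generator so new coordinate
# sets can be drawn from the prior z ~ N(0, I).
decoder_input = Input(shape=(latent_dim,))
_h = decoder_h(decoder_input)
_hh = decoder_hh(_h)
generator = Model(decoder_input, decoder_mean(_hh))

z_sample = np.random.normal(size=(10, latent_dim)).astype('float32')
new_scaled = generator.predict(z_sample)
new_coords = new_scaled * (x_max - x_min) + x_min   # back to pixel-scale x, y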
generative-models
asked Mar 22 at 12:55 by Dennis Thor; edited Mar 25 at 5:49
Comments:
- It would be helpful if you could provide the code which you used to build the AE. – Shubham Panchal, Mar 22 at 13:21
- I added my piece of code. – Dennis Thor, Mar 25 at 5:49
0 Answers