Can I use an ANN to translate image output from one sensor to simulate output from another sensor?

Say, for instance, I had image data from one high-resolution digital camera and wanted to make it look as if it had been taken by another, lower-resolution digital camera. Would training on input/output pairs of overlapping images be a good way to do this? What is this technique called?



For example, say I wanted to count benches in parks in LOW-resolution imagery. Could I go through these sample images and create a dataset of high- and low-resolution pairs to train a network to learn what a low-resolution bench looks like? Would I be able to discern low-resolution benches if my training set were sufficiently diverse (image chips of entire city parks vs. individual objects like fountains, trees, and statues)?



(Image: lower-resolution satellite imagery)

(Image: higher-resolution aerial imagery)



I like this example because the images come from different sensors as well as being of different resolutions. Some of my research has led me to super-resolution, which is roughly the opposite of what I'm trying to do.



As for the amount of data, it would be painstaking but not technically difficult to collect overlapping high- and low-resolution imagery.










machine-learning neural-style-transfer

asked Mar 5 at 20:42 by Karsten Chu, edited Mar 6 at 15:55
Comments:

  • thanatoz (Mar 6 at 6:57): Certainly, this could be achieved, but I wouldn't suggest wasting the computation on what could be achieved with simpler image-processing techniques.









  • jonnor (Apr 6 at 12:00): If your goal is to do classification on low-resolution images, then learning a high-res-to-low-res transformation seems unnecessary. Either use low-resolution images directly (if you don't have them, why are you building a classifier for them?), or use standard image downsampling to create some from the high-resolution images.















2 Answers


















Answer 1 (score 1), by JahKnows, answered Mar 6 at 1:15:

This is very much possible. There is a function that maps the images from the higher-resolution camera to the lower-resolution ones, and a neural network can be trained to learn that function.



However, to train a neural network to do this you will typically need thousands of images from both cameras. You then feed the pictures taken with your higher-resolution camera as input to the network and compute the loss at the output against the corresponding lower-resolution images. A minimal sketch of this setup is below.
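Here is a minimal sketch of that training loop, assuming PyTorch and a data loader that yields aligned high-res/low-res tensor pairs; the tiny SensorNet architecture and the hyperparameters are illustrative placeholders, not anything specified in this answer.

    import torch
    import torch.nn as nn

    class SensorNet(nn.Module):
        """Maps a high-res image to a simulated low-res sensor output."""
        def __init__(self, scale=4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )
            self.pool = nn.AvgPool2d(scale)  # spatial downsampling to the target size

        def forward(self, x):
            return self.pool(self.body(x))

    model = SensorNet(scale=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()  # L1 is a common choice for image regression

    def train_epoch(loader):
        # `loader` is assumed to yield aligned (high_res, low_res) pairs
        for high_res, low_res in loader:
            opt.zero_grad()
            pred = model(high_res)         # simulated low-res output
            loss = loss_fn(pred, low_res)  # compare against the real sensor's image
            loss.backward()
            opt.step()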



If you do not have that many images, there has been work on applying a learned filter to change an image's appearance. These techniques are often called style transfer; you can find a tutorial here, and code which I have tried and can confirm works here. It might be hard to get a representative style image from your old camera. You could try an average of a few pictures, or a picture of a white background; you would have to experiment, as I do not know what would work in this case.
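For reference, a minimal sketch of the Gram-matrix style loss at the heart of neural style transfer; the layer choice and the pixel-optimization snippet are illustrative assumptions, not the tutorial or code referred to above.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    features = vgg16(pretrained=True).features.eval()
    for p in features.parameters():
        p.requires_grad_(False)

    STYLE_LAYERS = {3, 8, 15, 22}  # ReLU outputs at several depths (a common choice)

    def layer_activations(x):
        acts = []
        for i, layer in enumerate(features):
            x = layer(x)
            if i in STYLE_LAYERS:
                acts.append(x)
        return acts

    def gram(act):
        # channel-by-channel correlation of activations: the "style" statistic
        b, c, h, w = act.shape
        f = act.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_loss(generated, style_img):
        return sum(F.mse_loss(gram(g), gram(s))
                   for g, s in zip(layer_activations(generated),
                                   layer_activations(style_img)))

    # Optimize the pixels of a copy of the content image directly:
    # generated = content_img.clone().requires_grad_(True)
    # opt = torch.optim.Adam([generated], lr=0.01)
    # total = style_loss(generated, style_img) + F.mse_loss(generated, content_img)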



If you share examples of your data we can help you more.






Answer 2 (score 0), by Pedro Henrique Monforte, answered Apr 5 at 19:20:

You noted that there is super-resolution, which is a kind of "information adding" for images. The opposite is quite possible, but not very useful, since lowering resolution can be achieved by many non-machine-learning techniques.



You can try the following (see the sketch after this list):

    • Use your high-resolution images and the camera specifications to apply basic image processing that transforms your images into a result similar to that of the other camera.

      • Camera resolution: easy to match with proper image resizing; try different interpolation algorithms.

      • Sensor specifications: how sensitive to light is the sensor? What is the bit depth for color/intensity? These are things to consider.

      • Sensor amplifier and other lighting conditions: basically ISO, white balance, and the like.

    • Try changing these conditions until you achieve the desired result.
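A hedged sketch of that non-ML pipeline, using OpenCV and NumPy; the scale, gain, noise, and bit-depth values are illustrative assumptions, not measured camera specifications.

    import cv2
    import numpy as np

    def simulate_low_res(img, scale=0.25, bit_depth=8, gain=1.0, noise_std=2.0):
        # Resolution: shrink the image; INTER_AREA is a good default for downsizing
        small = cv2.resize(img, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)
        out = small.astype(np.float32) * gain             # crude amplifier/ISO model
        out += np.random.normal(0, noise_std, out.shape)  # crude sensor noise
        # Bit depth: quantize intensities to the target sensor's levels
        levels = 2 ** bit_depth
        out = np.clip(out, 0, 255)
        out = np.round(out / 255 * (levels - 1)) * (255 / (levels - 1))
        return out.astype(np.uint8)

    # low = simulate_low_res(cv2.imread("high_res.png"))  # hypothetical file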

Notes:

    • If there is a difference in sensor construction (for example, one is CMOS and the other is CCD), the learned "underresolution" you want to create might still be useful, since such sensors differ substantially in their response to light, saturation, and so on.

    • When training, check for image alignment, since misalignment can yield absurd differences under a least-squares image similarity (consider a metric such as SSIM instead); a rough alignment sketch follows.
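A rough sketch of correcting translation-only misalignment via phase correlation before computing any pixelwise similarity; it assumes single-channel float32 images of equal size and ignores rotation and scale.

    import cv2
    import numpy as np

    def align_translation(ref, moving):
        # Estimate the (dx, dy) shift of `moving` relative to `ref`
        (dx, dy), _ = cv2.phaseCorrelate(ref.astype(np.float32),
                                         moving.astype(np.float32))
        # Warp `moving` back by the estimated shift
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        return cv2.warpAffine(moving, m, (moving.shape[1], moving.shape[0]))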





