



How to calculate Average Precision for Image Segmentation?


If I've understood things correctly, when calculating AP for object detection (e.g. VOC, COCO, etc.), the procedure is as follows (a code sketch follows the list):



  1. Collect all of the detected objects in your dataset.

  2. Sort the detections by their confidence score.

  3. Categorise each detection as a True Positive or False Positive by comparing its Intersection over Union (IoU) with a ground-truth object against a pre-set threshold.

  4. Plot Precision $\frac{TP}{n}$ against Recall $\frac{TP}{N}$, where $n$ is the number of detections considered so far and $N$ is the total number of ground-truth objects.

  5. Integrate Precision with respect to Recall. (There are various ways to perform the integration.)
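
For concreteness, here is a minimal NumPy sketch of steps 2–5 as I understand them. The function name and argument layout are my own; real VOC/COCO evaluation additionally matches each detection greedily to an unclaimed ground-truth object, and uses interpolated rather than rectangle-rule integration:

    import numpy as np

    def average_precision(scores, is_tp, n_ground_truth):
        """AP for one class, from per-detection confidences and TP/FP flags.

        scores:         (M,) confidence score of each detection
        is_tp:          (M,) bool, True where the detection matched a
                        ground-truth object at the chosen IoU threshold
        n_ground_truth: total number of ground-truth objects (N above)
        """
        # Step 2: sort detections by descending confidence.
        tp = is_tp[np.argsort(-scores)].astype(float)

        # Steps 3-4: running precision (TP/n) and recall (TP/N).
        cum_tp = np.cumsum(tp)
        precision = cum_tp / np.arange(1, len(tp) + 1)
        recall = cum_tp / n_ground_truth

        # Step 5: rectangle-rule integral of precision over recall.
        recall_steps = np.diff(np.concatenate(([0.0], recall)))
        return float(np.sum(precision * recall_steps))

For example, `average_precision(np.array([0.9, 0.8, 0.7]), np.array([True, False, True]), 4)` gives $(1/1 + 2/3)/4 \approx 0.417$.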

When I attempted to replicate these steps for segmentation, I found that my segmentation CNN doesn't provide a confidence score as an output. Even if it did, the score would presumably be per pixel rather than per object. So I am stuck at step 2.
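
One workaround I've been considering (my own assumption, not something the benchmarks prescribe) is to collapse the per-pixel probabilities into a per-instance score, e.g. the mean foreground probability over each predicted mask:

    import numpy as np

    def mask_confidence(prob_map, mask):
        """Heuristic per-instance score: mean foreground probability over
        the predicted mask. A hypothetical helper, not a standard API.

        prob_map: (H, W) per-pixel foreground probabilities from the network
        mask:     (H, W) boolean predicted mask for one instance
        """
        return float(prob_map[mask].mean()) if mask.any() else 0.0

That would at least make the detections sortable, but I don't know whether it is a principled thing to do.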



Calculating AP without sorting by confidence will obviously change the result. But is it still "valid" in some sense? If not, is there a roughly equivalent metric I could use to compare segmentation results? (Or perhaps more generally, a metric for detection where ranking is not possible?)



Edit: looking at VOCdevkit, it seems that they use the "segmentation accuracy" $\frac{TP}{TP+FP+FN}$ (i.e. the per-class intersection over union of predicted and ground-truth pixels) rather than AP as the metric to evaluate segmentation. Is that what I should be doing? AP seems to me the "better" metric, so I would prefer to use something as close to it as possible.
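
For reference, this is my reading of that VOC formula as a pixel-set IoU (the function name is hypothetical):

    import numpy as np

    def voc_segmentation_score(pred, gt, class_id):
        """Per-class VOC-style segmentation score: TP / (TP + FP + FN),
        i.e. the IoU of predicted and ground-truth pixel sets for one class.

        pred, gt: (H, W) integer label maps
        """
        p = (pred == class_id)
        g = (gt == class_id)
        tp = np.logical_and(p, g).sum()
        fp = np.logical_and(p, ~g).sum()
        fn = np.logical_and(~p, g).sum()
        denom = tp + fp + fn
        return float(tp) / denom if denom > 0 else float("nan")

Note that, unlike AP, this involves no ranking at all, which is why it sidesteps my confidence-score problem.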



The Berkeley Simultaneous Detection and Segmentation code, and the accompanying paper, calculate a pixel-wise AP (called $AP^r$), but they appear to have a confidence score for each object.















Tags: neural-network, computer-vision, object-detection






asked Apr 9 at 1:24 by craq
edited Apr 9 at 1:35



















