Telemetry for feature health
Not sure how opinion-based my question is, but when you have a released (client-facing) feature, how do you evaluate with telemetry whether it's healthy? By healthy I mean that users can interact with it (it's accessible) and that when there is an interaction, the expected outcome happens.

Here is an example for the expected-outcome case: there is a delete button; when it's pressed, a request is sent to a server, and if that returns 200, the delete functionality can be assumed to work. Let's say we send a metric DeleteButtonPressed when the user presses the button and DeleteSuccessful upon receiving the 200 response code. If there is a drop in the ratio DeleteSuccessful / DeleteButtonPressed, it can be said that the feature is not healthy.

However, how do we know whether the feature is accessible? The problem is that we cannot really distinguish whether users are unable to use the feature (because, say, on a specific OS version the app behaves differently and the feature is not visible) or simply don't want to interact with it. Monitoring just the rate of DeleteButtonPressed therefore isn't a good indicator.

Maybe I'm missing something, but what are good metrics for monitoring feature health?

metrics logs monitoring
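The ratio described above becomes more useful when it is broken down per cohort (e.g. per OS version), since a cohort that presses the button but never succeeds stands out immediately. A minimal sketch of that computation — the event names mirror the question, but the sample data and the 0.9 threshold are illustrative assumptions:

```python
from collections import Counter

# Hypothetical raw events as (metric_name, os_version) pairs, as they
# might arrive from client telemetry. Data is made up for illustration.
events = [
    ("DeleteButtonPressed", "os-12"), ("DeleteSuccessful", "os-12"),
    ("DeleteButtonPressed", "os-12"), ("DeleteSuccessful", "os-12"),
    ("DeleteButtonPressed", "os-13"), ("DeleteButtonPressed", "os-13"),
]

def success_ratio_by_os(events):
    """Compute DeleteSuccessful / DeleteButtonPressed per OS version."""
    pressed = Counter(os for name, os in events if name == "DeleteButtonPressed")
    ok = Counter(os for name, os in events if name == "DeleteSuccessful")
    return {os: ok[os] / n for os, n in pressed.items()}

ratios = success_ratio_by_os(events)
# Flag cohorts below an assumed health threshold of 0.9.
unhealthy = {os: r for os, r in ratios.items() if r < 0.9}
print(ratios)     # {'os-12': 1.0, 'os-13': 0.0}
print(unhealthy)  # {'os-13': 0.0}
```

In this sketch, os-13 presses succeed 0% of the time, which would surface a cohort-specific breakage that the aggregate ratio could mask.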
asked Mar 20 at 15:27 by Dániel Nagy
3 Answers
Summary: it's a reasonable concern, but not one that I encounter in practice.

Create smoke tests that are high level and assert that basic functionality, such as visiting the first page, works. These should not rely on OS-specific peculiarities or edge cases.

Run these tests as part of the process of deploying to an environment, and only consider the application deployed if they all pass.

Also, perform exploratory testing to ensure that the elements are truly visible.

For a specific feature like the one you mention, part of the puzzle is making sure you have quality unit tests, good integration tests and good acceptance tests. Acceptance tests, including UI tests, can become part of your regression suite, but this should happen selectively and rarely; otherwise you start building a massive and slow regression suite. Most businesses struggle with this because it seems attractive, except that the suite gets slower and slower, and businesses today want speed.

I've been writing Selenium tests for years, and I have not seen what you describe actually occur as a common issue. I can recall exactly one time, in IE, where you had to scroll down before the Selenium finder would work; but even in that case the feature worked for actual users, just not for the automation without an additional scroll_to action.
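The "deploy only if all smoke tests pass" gate described above can be sketched as follows. The individual checks here are hypothetical stand-ins (a real suite would issue HTTP requests or drive a browser), so treat this as an outline of the gating logic only:

```python
# Minimal deployment gate: run high-level smoke checks and report
# whether the build may be considered deployed. Checks are illustrative.
def first_page_loads():
    return True  # stand-in for e.g. an HTTP GET returning 200

def delete_button_visible():
    return True  # stand-in for a UI-level visibility assertion

SMOKE_CHECKS = [first_page_loads, delete_button_visible]

def run_smoke_suite(checks):
    """Return (all_passed, names of failed checks)."""
    failed = [check.__name__ for check in checks if not check()]
    return (not failed, failed)

deployed, failures = run_smoke_suite(SMOKE_CHECKS)
print(deployed, failures)  # True []
```

The point of the gate is that a deployment is not "done" when the artifact lands on the server, but only when the smoke suite against that environment is green.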
answered Mar 20 at 16:27 (edited Mar 20 at 21:28) by Michael Durrant
Telemetry can't always pinpoint problems, but many times it can indicate the existence of a problem.

If you expect certain problems to occur, you can sometimes add smarter telemetry and better analysis of other pieces of information, for example users skipping the Delete button and closing the application altogether.

A complementary approach is A/B testing: give some users a Delete button of type A and some of type B, and compare the results. You can use A/B testing to assess designs, but also retrospectively to locate or fix problems.
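The A/B comparison can be sketched with the same success-ratio metric from the question; the counts below are made up for illustration, and a real analysis would add a statistical significance test before declaring a winner:

```python
# Compare Delete-button variants A and B by success ratio.
# Counts are hypothetical illustration data.
variants = {
    "A": {"pressed": 200, "successful": 190},
    "B": {"pressed": 200, "successful": 120},
}

def success_rate(variant):
    """DeleteSuccessful / DeleteButtonPressed for one variant."""
    return variant["successful"] / variant["pressed"]

rates = {name: success_rate(v) for name, v in variants.items()}
better = max(rates, key=rates.get)
print(rates)   # {'A': 0.95, 'B': 0.6}
print(better)  # A
```

Used retrospectively, the same comparison (old button vs. changed button) can help confirm whether a suspected regression is real.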
answered Mar 20 at 15:44 by Rsf
To collect telemetry for a feature effectively, this capability should be included in the application architecture design from the start (and intentionally). This is not straightforward, since it depends on many things, such as whether you need real-time monitoring or, say, overnight analysis.

Generally speaking, saying that a feature is "alive" usually means it passes through the whole sequence of steps needed to deliver a result to the end user. That result might be a wrong one (though in my understanding the feature is then still alive, just defective). To achieve this, each high-level logical step composing the feature should log a step record to an audit store. With such entries associated with a user session and a feature identifier, you can analyze whether each feature produced its end result.

You would also need to define a termination mark, since sometimes users simply change their mind and do not complete the steps of the use case.

So the metric could be the number of sequences that never reach an end point.
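The "sequences that never reach an end point" metric can be sketched over per-session step logs. The step names and the sample sessions below are hypothetical; a terminal mark covers both success ("deleted") and the deliberate abandonment case ("cancelled") mentioned above:

```python
# Per-session step logs for a hypothetical "delete" feature funnel.
# A session counts as stuck if it never logs a terminal mark.
TERMINAL = {"deleted", "cancelled"}  # explicit end/termination marks

sessions = {
    "s1": ["opened", "confirmed", "deleted"],   # completed
    "s2": ["opened", "cancelled"],              # user changed their mind
    "s3": ["opened", "confirmed"],              # stuck: no end point reached
}

def unfinished(sessions):
    """IDs of sessions whose logged steps never reach a terminal mark."""
    return [sid for sid, steps in sessions.items()
            if not (set(steps) & TERMINAL)]

stuck = unfinished(sessions)
print(len(stuck), stuck)  # 1 ['s3']
```

A rising count of stuck sessions, especially when grouped by OS version or app build, points at the accessibility problem from the question: users who start the flow but can never finish it.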
answered Mar 20 at 15:57 by Alexey R.