Detect sensitive data from unstructured text documents
I know this question is broad, but I need advice on whether it is possible to achieve what I want to do.
The problem is that I have around 2,500 documents in which the sensitive data has been replaced by four dots. I do not have the original documents, so I wonder whether there is a way, using these redacted documents, to build a model that can detect sensitive data in new documents (where the sensitive data has not been removed). I want to apply machine learning or deep learning approaches. As far as I know, training would normally require the original data set with the sensitive data annotated, which I cannot obtain.
I am new to this field, so any advice would be greatly appreciated.
machine-learning deep-learning nlp information-retrieval automatic-summarization
asked Mar 18 at 17:10 by user971961
1 Answer
Welcome to the site! Assuming that I understand your problem correctly, I think you can achieve a working model.
If I were in your position, I would:
- Obtain the cleanest data possible from the documents. For example, you don't state whether the docs are already plain text or whether you first need to do something like OCR. Having the cleanest set possible will be key for this.
- Make sure you have a consistent marker for the sensitive data. You mention four dots - is that the case for ALL instances? If not, clean that data now.
- You're going to need to do standard NLP cleaning, such as removing punctuation, though you may or may not want to keep stop words (that will be part of your model testing). Also - and this is key - be 100% certain that the four dots are treated as a single token in your tokenization process; you should verify this before committing to a tokenizer.
- I would take all my documents and create three-word n-grams (trigrams). I would then separate the n-grams that contain sensitive data from those that don't. That, essentially, becomes your labeled dataset, and you should label it accordingly (a sketch of this step follows the list).
- My base model would use all entries that have the sensitive data in the second position of the n-gram (the middle of the three words). I would train a neural network on that and see what kind of results I get. NOTE that the four dots will not be an input; only the words before and after will be your inputs. You could treat this as a binary classification model - the middle word is either sensitive or it's not.
- Future iterations of my model might use a multi-class approach with labels like (1) no sensitive data, (2) sensitive data in the first position, (3) sensitive data in the second position, (4) sensitive data in the third position, and so on.
- From there, you can experiment with the size of the n-gram, since the immediately neighbouring words may or may not actually affect the predictions. There's no limit to how far you can take this - you won't know until you start modeling.
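For concreteness, here is a minimal sketch of the dataset-construction step above. It assumes plain-text documents and that the redaction marker is exactly four dots; the names (`MARKER`, `build_trigram_dataset`) and the placeholder token are illustrative, not from any library:

```python
import re

MARKER = "xredactedx"  # illustrative stand-in token for the four-dot marker

def tokenize(text):
    # Lowercase, swap the four-dot marker for one safe token, then split on words,
    # so the redaction always survives tokenization as a single token.
    text = text.lower().replace("....", f" {MARKER} ")
    return re.findall(r"[a-z0-9']+", text)

def build_trigram_dataset(documents):
    """Return (contexts, labels): each context is (prev_word, next_word);
    the label is 1 if the middle token was redacted, else 0."""
    contexts, labels = [], []
    for doc in documents:
        tokens = tokenize(doc)
        for i in range(1, len(tokens) - 1):
            prev_tok, mid, nxt = tokens[i - 1], tokens[i], tokens[i + 1]
            if MARKER in (prev_tok, nxt):
                continue  # skip windows whose context itself was redacted
            contexts.append((prev_tok, nxt))
            labels.append(1 if mid == MARKER else 0)
    return contexts, labels

docs = ["Please wire the funds to .... before Friday.",
        "The meeting is scheduled for Monday morning."]
X, y = build_trigram_dataset(docs)
print(list(zip(X, y)))
```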
Finally, your entire project becomes even more interesting when you get to the prediction phase with new data. You will do the same thing: break each new document down into n-grams, create a prediction for each one, and output the result. In other words, you will need to break down your document only to turn around and build it up again - that should be a fun script to write! Good luck with this, and let us know how it turns out.
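And a hedged sketch of that prediction phase, reusing `tokenize` from the sketch above; `model` and `vectorizer` stand in for any fitted scikit-learn-style classifier and its matching feature encoder:

```python
def flag_sensitive_words(model, vectorizer, document):
    """Slide the same trigram window over an unredacted document and flag
    middle words the classifier scores as sensitive."""
    tokens = tokenize(document)  # tokenize() from the earlier sketch
    flagged = []
    for i in range(1, len(tokens) - 1):
        context = vectorizer.transform([f"{tokens[i - 1]} {tokens[i + 1]}"])
        if model.predict(context)[0] == 1:
            flagged.append((i, tokens[i]))  # (position, suspect word)
    return flagged
```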
answered Mar 18 at 17:32 by I_Play_With_Data
Thank you very much for your informative answer. Forgive me if I am asking basic questions: what you suggested is that I approach the problem as a classification task where, given the first and third words, the model should predict the probability that the middle word is sensitive? Do you have any suggestions on which neural network architecture to use? Also, do you think using models like LSTM would be possible?
– user971961, Mar 18 at 18:29
@user971961 In effect, yes. Train the model on two-word combos that do and don't have a sensitive word in between. That would be your binary classifier, and you can use it as a baseline model (it may very well be all you need for your problem). From there, I would change my training set to a multi-class problem where, for a string containing X grams, the model tells you whether a word there precedes a sensitive word. That should be more than enough to get you started and get a working model up and running.
– I_Play_With_Data, Mar 18 at 18:33
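A minimal sketch of such a baseline, assuming the (prev_word, next_word) dataset from the earlier sketch; bag-of-words over the two context words plus logistic regression is one reasonable starting point, not the only one:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X, y come from build_trigram_dataset in the earlier sketch.
pairs = [f"{prev} {nxt}" for prev, nxt in X]
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(pairs)

X_tr, X_te, y_tr, y_te = train_test_split(features, y,
                                          test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```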
@user971961 Yes, an LSTM would absolutely be a possible model to use. They usually work well for NLP problems, and I could see a path forward that has you using an LSTM network to solve this. But that shouldn't be your concern right now. Focus on getting the datasets you're going to need, and try a variety of models, not just LSTM.
– I_Play_With_Data, Mar 18 at 18:35
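If you do later try the LSTM route, a minimal Keras sketch might look like the following; `VOCAB_SIZE`, `SEQ_LEN`, and the layer sizes are placeholders, and the inputs would be integer-encoded context windows rather than the two-word pairs above:

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20000  # placeholder: size of your token vocabulary
SEQ_LEN = 8         # placeholder: context-window length in tokens

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),       # integer-encoded tokens -> vectors
    layers.LSTM(32),                        # summarize the context window
    layers.Dense(1, activation="sigmoid"),  # P(window hides a sensitive word)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, SEQ_LEN))
model.summary()
```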