Loading Data into DW: Direct Insert from PreProcessing vs PreProcess and then loading from CSV files
I have a preprocessing script (a Google Cloud Function) that generates files, stored in Google Drive. I want to load those files into my DW (BigQuery).
What are the pros and cons of:
1) Running the preprocessing script, generating the files, and then loading those files,
vs.
2) Loading the data directly from the preprocessing script (skipping the file generation and doing a direct insert into the DW from the preprocessing script)?
I am interested in framing the question not only in terms of technical details and costs, but also in terms of data processing methodology.
I think the question comes down to the dilemma of loading online (streaming) versus loading in a batch process.
I have added some conclusions of my own as answers.
Still, it would be great to have more comments from a technical perspective on when to use a direct transfer and when to use a file for staging preprocessing results.
Thanks!
bigdata preprocessing
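For context, here is a minimal, standard-library-only sketch of option 1: preprocessing into a CSV staging file. All record fields and helper names are illustrative, not from the real pipeline; in the actual setup the staged file would land in Drive/GCS and then be loaded with a BigQuery load job (e.g. the client library's `load_table_from_uri` or `bq load`).

```python
import csv
import io

def preprocess(raw_records):
    """Toy preprocessing step (illustrative): normalize keys, drop empty rows."""
    cleaned = []
    for rec in raw_records:
        if not rec.get("value"):
            continue
        cleaned.append({"id": rec["id"], "value": float(rec["value"])})
    return cleaned

def stage_to_csv(rows, fileobj):
    """Option 1: persist preprocessing output to a CSV staging file.
    The real pipeline would then load this file into BigQuery as a batch job,
    rather than inserting row by row from the Cloud Function."""
    writer = csv.DictWriter(fileobj, fieldnames=["id", "value"])
    writer.writeheader()
    writer.writerows(rows)

raw = [{"id": 1, "value": "3.5"}, {"id": 2, "value": ""}, {"id": 3, "value": "7"}]
buf = io.StringIO()  # stands in for the staged file on Drive/GCS
stage_to_csv(preprocess(raw), buf)
print(buf.getvalue().splitlines())
```

The point of the sketch is that the staged CSV is itself an artifact: it can be reloaded, inspected, or reused later without rerunning the preprocessing.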
edited Mar 22 at 22:29 by Gabriel
asked Jan 11 at 22:16 by Gabriel
1 Answer
Loading the data directly from the preprocessing script into the DW means the results of your processing are never persisted to storage.
Not persisting those results may imply:
- Having to redo the processing if the data is needed again, for example for a new DW or for new research.
- Not saving the data as a data sink, and therefore not contributing to a data lake architecture, where data sinks are kept for future reuse in "unpredicted" situations.
From a technical perspective, some tradeoffs are mentioned in the Google BigQuery documentation on streaming data, from which the following can be highlighted:
- Consistency management, e.g. handling errors or duplicates.
- A waiting time before streamed data is available for copy and export operations.
Still, it would be great to have more comments from a technical perspective on when to use a direct transfer and when to use a file for staging preprocessing results.
Thanks
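The duplicate-handling tradeoff of the streaming path can be illustrated with a small sketch. BigQuery's streaming inserts let the client attach an `insertId` that the service uses for best-effort de-duplication of retried rows; the behaviour is roughly analogous to the following toy model (purely illustrative, not the actual service logic):

```python
class StreamingSink:
    """Toy model of best-effort de-duplication keyed on an insertId,
    similar in spirit to BigQuery streaming inserts. Illustrative only."""

    def __init__(self):
        self._seen = set()
        self.rows = []

    def insert(self, insert_id, row):
        if insert_id in self._seen:
            return False          # duplicate retry: row is dropped
        self._seen.add(insert_id)
        self.rows.append(row)
        return True

sink = StreamingSink()
sink.insert("evt-1", {"id": 1})
sink.insert("evt-1", {"id": 1})   # a network retry resends the same row
sink.insert("evt-2", {"id": 2})
print(len(sink.rows))             # 2: the retried row was de-duplicated
```

With a batch load from a staged file, this whole class of problem largely disappears: a load job either succeeds atomically or fails and can be rerun from the same file, which is one reason staging is often preferred when near-real-time availability is not required.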
answered Mar 22 at 22:29 by Gabriel
Thanks for contributing an answer to Data Science Stack Exchange!