What is a good way to store processed CSV data to train model in Python?
I have about 100 MB of CSV data that has been cleaned and is used for training in Keras, stored as a pandas DataFrame. What is a good (simple) way of saving it for fast reads? I don't need to query it or load only part of it.

Some options appear to be:

- HDFS
- HDF5
- HDFS3
- PyArrow

Tags: python, keras, dataset, csv, serialisation

asked 17 hours ago by B Seven, edited 17 hours ago by Media
When I want to go 5 metres, I would rather walk than take a car.
– Kiritee Gak, 17 hours ago
I think HDF5 is a good fit for you: your data size is small. I work with h5 files and they are fast.
– honar.cs, 17 hours ago
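For reference, a minimal sketch of the HDF5 round trip that comment suggests, via pandas (the file name, path, and key are illustrative, and the PyTables package is assumed to be installed):

```python
import pandas as pd

df = pd.read_csv("train.csv")  # the cleaned data; hypothetical path

# Write once; requires PyTables (pip install tables).
# key="df" is an arbitrary name for the stored object.
df.to_hdf("train.h5", key="df", mode="w")

# Fast read back before training, with dtypes preserved.
df = pd.read_hdf("train.h5", key="df")
```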
Just leave it as CSV; you don't need to do anything.
– arhwerhwe, 16 hours ago
Why not dump the DataFrame with to_pickle? It's easy, low on memory, supports compression, and loads fast without specifying columns or other parameters...
– n1tk, 9 hours ago
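A minimal sketch of that to_pickle round trip (file names hypothetical; compression is optional and inferred from the file extension):

```python
import pandas as pd

df = pd.read_csv("train.csv")  # the cleaned data; hypothetical path

# Pickle the frame; the ".gz" suffix triggers gzip compression,
# or pass compression=None to skip it.
df.to_pickle("train.pkl.gz")

# Reads back with dtypes and index intact, no CSV parsing step.
df = pd.read_pickle("train.pkl.gz")
```

The usual caveat applies: pickle files are Python-specific and should not be loaded from untrusted sources.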
3 Answers
With 100 MB of data, you can store it in any filesystem as CSV, since the read will take less than a second. Most of the time will be spent by the DataFrame runtime parsing the data and building the in-memory data structures.
– Shamit Verma, answered 17 hours ago
+1 Always profile first. Unless the OP has evidence that reading the data is the main bottleneck, they shouldn't invest resources in optimising it.
– Bilkokuya, 13 hours ago

That's a good point. I should find out how long it takes. Also, I can see that converting from CSV to a DataFrame could take time as well...
– B Seven, 10 hours ago
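In that spirit, a quick timing loop is all the profiling this needs (path hypothetical):

```python
import time
import pandas as pd

# Time the full read-and-parse step a few times before optimising anything.
for _ in range(3):
    start = time.perf_counter()
    df = pd.read_csv("train.csv")  # hypothetical path to the cleaned data
    print(f"read_csv took {time.perf_counter() - start:.3f}s for {len(df)} rows")
```

If those numbers are already negligible next to one epoch of training, there is nothing to optimise.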
You can find a nice benchmark for every approach here.
– Francesco Pegoraro, answered 16 hours ago
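A rough do-it-yourself version of such a benchmark is easy to sketch (file names are illustrative; Parquet assumes the pyarrow or fastparquet package, HDF5 assumes PyTables):

```python
import time
import pandas as pd

df = pd.read_csv("train.csv")  # the cleaned data; hypothetical path

# (label, writer, reader) triples for the formats under comparison.
formats = [
    ("csv", lambda: df.to_csv("t.csv", index=False), lambda: pd.read_csv("t.csv")),
    ("pickle", lambda: df.to_pickle("t.pkl"), lambda: pd.read_pickle("t.pkl")),
    ("parquet", lambda: df.to_parquet("t.parquet"), lambda: pd.read_parquet("t.parquet")),
    ("hdf5", lambda: df.to_hdf("t.h5", key="df", mode="w"),
             lambda: pd.read_hdf("t.h5", key="df")),
]

for name, write, read in formats:
    t0 = time.perf_counter()
    write()
    t1 = time.perf_counter()
    read()
    t2 = time.perf_counter()
    print(f"{name:8s} write {t1 - t0:.3f}s  read {t2 - t1:.3f}s")
```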
Your data is not that large, but there is some debate about storage whenever you deal with big data; see What is the best way to store data in Python and Optimized I/O operations in Python. The trade-offs depend on how the serialisation occurs and on the policies applied at different layers, for instance security and transactional validity. I think the latter link can help you when dealing with large data.
– Media, answered 17 hours ago
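Since PyArrow is on the question's list and none of the answers shows it, here is a minimal sketch of the Parquet route through pandas, using pyarrow as the engine (file names hypothetical):

```python
import pandas as pd

df = pd.read_csv("train.csv")  # the cleaned data; hypothetical path

# Columnar, compressed, and dtype-preserving; needs the pyarrow package.
df.to_parquet("train.parquet", engine="pyarrow")

df = pd.read_parquet("train.parquet", engine="pyarrow")
```

Unlike pickle, Parquet files are also readable from other languages and tools.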