What is a good way to store processed CSV data to train a model in Python?


I have about 100 MB of CSV data that has been cleaned and is used for training a Keras model, stored as a pandas DataFrame. What is a good (simple) way of saving it for fast reads? I don't need to query it or load only part of it.



Some options appear to be:




  • HDFS

  • HDF5

  • HDFS3

  • PyArrow










python keras dataset csv serialisation

asked 17 hours ago by B Seven, edited 17 hours ago by Media
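A minimal sketch of what two of the listed options look like from pandas; the file names, the DataFrame df, and the optional tables (HDF5) and pyarrow (Parquet) dependencies are illustrative assumptions:

    import pandas as pd

    # df is assumed to be the already-cleaned training data
    df = pd.read_csv("cleaned.csv")

    # HDF5 (needs the `tables` package): one binary file, no CSV parsing on reload
    df.to_hdf("train.h5", key="train", mode="w")
    df_h5 = pd.read_hdf("train.h5", key="train")

    # Parquet via PyArrow (needs the `pyarrow` package): compressed, columnar
    df.to_parquet("train.parquet", engine="pyarrow")
    df_pq = pd.read_parquet("train.parquet", engine="pyarrow")

Both formats store the schema, so the training script can reload the data without re-parsing or re-cleaning it.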












  • When I want to go 5 metres, I would rather walk than take a car.
    – Kiritee Gak, 17 hours ago










  • I think HDF5 is very good for you; your data size is small. I work with h5 files and they are fast.
    – honar.cs, 17 hours ago






  • Just leave it as CSV; you don't need to do anything.
    – arhwerhwe, 16 hours ago






  • Why not dump the DataFrame with to_pickle? Easy, low memory, compression supported, and fast loading without specifying columns or other parameters ...
    – n1tk, 9 hours ago
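A minimal sketch of the to_pickle route from the last comment; the file name is illustrative and the gzip compression (inferred from the ".gz" suffix) is optional:

    import pandas as pd

    df = pd.read_csv("cleaned.csv")        # the processed training data

    # Round-trip through pickle; compression is inferred from the extension
    df.to_pickle("train.pkl.gz")
    df_back = pd.read_pickle("train.pkl.gz")
    assert df.equals(df_back)              # values, dtypes and index come back intact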
















3 Answers



















With 100 MB of data, you can store it in any filesystem as CSV, since reading it will take less than a second.



Most of the time will be spent by the DataFrame runtime parsing the data and building the in-memory data structures.






– Shamit Verma, answered 17 hours ago









  • +1 Always profile first. Unless the OP has evidence that reading the data is the major bottleneck, they shouldn't be investing resources in optimising it.
    – Bilkokuya, 13 hours ago












  • That's a good point. I should find out how long it takes. Also, I can see that converting from CSV to a DataFrame could take time as well...
    – B Seven, 10 hours ago
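A minimal sketch for measuring how long the CSV load actually takes, as the comments suggest; the path is illustrative:

    import time
    import pandas as pd

    start = time.perf_counter()
    df = pd.read_csv("cleaned.csv")   # the ~100 MB processed CSV
    elapsed = time.perf_counter() - start
    print(f"read_csv: {elapsed:.2f}s for {len(df):,} rows")

If this already finishes in well under a second, switching formats is unlikely to speed up training noticeably.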




















You can find a nice benchmark for every approach here.



[benchmark image]






– Francesco Pegoraro, answered 16 hours ago
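Since the linked chart is not reproduced here, a rough sketch of a do-it-yourself comparison along the same lines; the file names are illustrative, and the Parquet and Feather lines assume pyarrow is installed:

    import time
    import pandas as pd

    df = pd.read_csv("cleaned.csv")   # the processed training data

    def roundtrip(label, write, read):
        # Time one write and one read for a given format
        t0 = time.perf_counter(); write()
        t1 = time.perf_counter(); read()
        t2 = time.perf_counter()
        print(f"{label:8s} write {t1 - t0:6.3f}s   read {t2 - t1:6.3f}s")

    roundtrip("csv",     lambda: df.to_csv("x.csv", index=False), lambda: pd.read_csv("x.csv"))
    roundtrip("pickle",  lambda: df.to_pickle("x.pkl"),           lambda: pd.read_pickle("x.pkl"))
    roundtrip("parquet", lambda: df.to_parquet("x.parquet"),      lambda: pd.read_parquet("x.parquet"))
    roundtrip("feather", lambda: df.to_feather("x.feather"),      lambda: pd.read_feather("x.feather"))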






















Your data size is not that huge, but there are some debates about this whenever you deal with big data: What is the best way to store data in Python and Optimized I/O operations in Python. It all depends on how the serialisation is done and which policies are applied at the different layers, for instance security and valid transactions. I think the latter link can help you when dealing with large data.






– Media, answered 17 hours ago












