Performance of NVMe vs SCSI for Local SSDs in GCP using Container OS










In Google Cloud, I did a simple performance test comparing two "local SSD" drives attached to the same VM: the first attached as NVMe, the second as SCSI. I was expecting NVMe to be somewhat faster, but instead saw roughly a 5% performance hit:




        NVMe (s)   SCSI (s)
real     157.3      150.1
user     107.2      107.1
sys       21.6       22.2


The Google Compute Engine VM was running COS (Container-Optimized OS), and the Docker container itself was busybox running md5sum on the same 45 GB file on each disk. The results (averaged over 3 runs) are a bit puzzling: sys time is lower for NVMe, user time is about the same, but real time is about 5% slower. The container was started with



docker run -v /mnt/disks/nvme:/tmp1 -v /mnt/disks/scsi:/tmp2 -it busybox



The test was executed with



time md5sum largefile
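
For reference, the drives were formatted and mounted on the COS host before the test, roughly along these lines (a sketch following Google's local SSD setup steps; the device names /dev/nvme0n1 and /dev/sdb are assumptions and may differ on your VM):

# Format and mount the NVMe local SSD (device name is an assumption)
sudo mkfs.ext4 -F /dev/nvme0n1
sudo mkdir -p /mnt/disks/nvme
sudo mount /dev/nvme0n1 /mnt/disks/nvme

# Format and mount the SCSI local SSD (device name is an assumption)
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/disks/scsi
sudo mount /dev/sdb /mnt/disks/scsi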










Tags: performance, google-cloud-platform, google-compute-engine






asked Nov 13 '18 at 0:39 by Yurik, edited Nov 13 '18 at 1:09

          1 Answer














          I believe there was a recent improvement to the guest NVMe driver which might help with this. I heard that it's shipped by default on the latest Ubuntu images, but may not be included in the COS distribution yet. The patch is available here.



FWIW, md5sum is also not meant as a storage benchmarking tool, so your results may not be very reproducible: it has CPU overhead (to calculate the checksum), it runs on top of your local filesystem (which may or may not be fragmented), and you don't control what I/O size it uses to read the data, all of which adds variability to the test. If you want to do true I/O benchmarking, Google's docs have a pretty good guide explaining how to use fio directly on top of local SSDs.
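
For illustration, a fio run against the local SSDs might look something like the sketch below (not the exact commands from Google's guide; the device paths and flags are assumptions, and on COS fio typically has to be run from the toolbox or a container):

# Sequential 1 MiB reads straight from the NVMe local SSD, bypassing the page cache
sudo fio --name=seq-read --filename=/dev/nvme0n1 \
  --rw=read --bs=1M --ioengine=libaio --direct=1 --iodepth=64 \
  --runtime=60 --time_based --group_reporting

# Same test against the SCSI local SSD for comparison
sudo fio --name=seq-read --filename=/dev/sdb \
  --rw=read --bs=1M --ioengine=libaio --direct=1 --iodepth=64 \
  --runtime=60 --time_based --group_reporting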






answered Nov 13 '18 at 21:28 by Dan


























