Are there any Pros/Cons to the /j Robocopy option (unbuffered copying)











Robocopy has a /J command-line option recommended for copying large files (it copies using unbuffered I/O).



What (if any) downsides are there?
Any reason this isn't enabled by default? (That's what made me think there MIGHT be downsides.)










  • I can imagine some performance downsides with lots of small files. But with large files? Not many. It might be slower. I expect it to be much more predictable when copying to a slow destination. Let's see which answers we get from other users, since I am just guessing at the moment. :)
    – Hennes
    Aug 16 '16 at 21:03












  • Hmmm... if it might be slower even on large files, I wonder what the BENEFITS are then. I updated the question to reflect that.
    – Clay Nichols
    Aug 16 '16 at 21:07










  • When I copy (using Windows Explorer, not Robocopy) from a fast HDD to an external (slow) USB drive on a system with 18 GB RAM (read: lots of memory which can be used as a disk buffer), I often run into situations where reading the source files is done, yet unmounting the slow USB 2 disk takes about 45 minutes while the cache is being flushed. I wish I could have limited the cache memory there. This might just be the option for that in Robocopy. Anyway, post tagged; it will be interesting for both of us to see which answers show up.
    – Hennes
    Aug 16 '16 at 21:10















Tags: windows, robocopy, io, buffer






edited May 20 '17 at 17:18 by Scott
asked Aug 16 '16 at 21:01 by Clay Nichols












3 Answers






Great question.



Unbuffered I/O is a simple file copy from a source location to a destination location. Buffered I/O augments the simple copy to optimize for future reads of (and writes to) the same file by copying the file into the filesystem cache, which is a region of virtual memory. Buffered I/O incurs a performance penalty the first time the file is accessed because it has to copy the file into memory; however, because memory access is faster than disk access, subsequent file access should be faster. The operating system takes care of synchronizing file writes back to disk, and reads can be pulled directly from memory.
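The distinction can be sketched in Python (a conceptual illustration, not Robocopy's actual implementation; note that Python's `buffering=0` only disables application-level buffering, whereas Robocopy's /J also bypasses the OS file cache via `FILE_FLAG_NO_BUFFERING`):

```python
import shutil

def copy_buffered(src: str, dst: str) -> None:
    # Default open() goes through the runtime's buffer layer, and the
    # OS keeps the data in its file cache for possible later reads.
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)

def copy_unbuffered(src: str, dst: str, chunk_size: int = 1 << 20) -> None:
    # buffering=0 sends each read()/write() straight to the OS with no
    # intermediate buffer layer in the runtime.
    with open(src, "rb", buffering=0) as fin, \
         open(dst, "wb", buffering=0) as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:  # EOF
                break
            view = memoryview(chunk)
            while view:    # raw writes may be partial; drain the chunk
                written = fout.write(view)
                view = view[written:]
```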



The usage note mentions large files vis-à-vis buffered I/O because:





  1. The up-front cost is expensive. The performance penalty with buffered I/O is substantially worse for large files.


  2. You get little in return. Large file blocks don't tend to stay in the cache for very long anyway, unless you have a ton of memory relative to the file size.


  3. It may not avoid disk I/O. Reads and writes of large file data blocks increase the probability of requiring disk I/O.


  4. You probably don't need to buffer anyway. Large files tend to be less frequently accessed in practice than smaller files.


So there is a tradeoff, but which is appropriate for you depends on your particular case. If you are zipping up a bunch of files and transmitting the zip to a backup target, unbuffered is the way to go. Copying a bunch of files that were just changed? Buffered should be faster.



Finally, note that file size is not the only factor in deciding between buffered and unbuffered. As with any cache, the filesystem cache is faster but smaller than the source behind it. It requires a cache replacement strategy that governs when to evict items to make room for new items. It loses its benefit when frequently-accessed items get evicted. For example, if you are synchronizing user home directories intraday to a separate location (i.e., while users are actively using the files), buffered I/O would benefit from files already resident in the cache, but may temporarily pollute the cache with stale files; on the other hand, unbuffered would forego any benefit of files already cached. No clear winner in such a case.
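The eviction dynamic described here, where a large one-pass copy flushes frequently-used items out of a bounded cache, can be sketched with a minimal LRU cache (a simplification; the real Windows cache manager uses a more elaborate replacement policy):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: accessing an item makes it 'fresh';
    inserting beyond capacity evicts the stalest item."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)          # mark as recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict least-recently-used

# A large streamed copy (many one-off blocks) pollutes the cache and
# pushes out a hot item that was cached earlier:
cache = LRUCache(capacity=3)
cache.put("hot.doc", "frequently used")
for block in ("big.iso#1", "big.iso#2", "big.iso#3"):
    cache.put(block, "copied once, never reused")
assert cache.get("hot.doc") is None  # the hot file was evicted
```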



Note: this also applies to xcopy /J.



See Microsoft's Ask The Performance Team Blog for more.






answered May 20 '17 at 16:16 by Alejandro C De Baca
I tried the following:

When you copy from a fast device (a NAS via Gigabit Ethernet) to another fast device (a USB 3 disk):

  • without /J: the data is read into a buffer and written after that, so either the network or the hard drive is idle
  • with /J: the data is read and written without waiting, so the network and the hard drive are used simultaneously

I would suggest using this option.
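The overlap this answer observes can be modelled as a small two-stage pipeline (a conceptual sketch, not how Robocopy is actually implemented): one thread reads from the source while another writes to the destination, with a bounded queue between them so memory use stays capped:

```python
import queue
import threading

def pipelined_copy(src: str, dst: str, chunk_size: int = 1 << 20) -> None:
    """Overlap reads and writes: a reader thread fills a small queue
    while the main thread drains it, so source and destination devices
    stay busy at the same time."""
    chunks: queue.Queue = queue.Queue(maxsize=4)  # bounded: caps memory use

    def reader() -> None:
        with open(src, "rb") as fin:
            while chunk := fin.read(chunk_size):
                chunks.put(chunk)   # blocks when the writer falls behind
        chunks.put(None)            # sentinel: end of stream

    t = threading.Thread(target=reader)
    t.start()
    with open(dst, "wb") as fout:
        while (chunk := chunks.get()) is not None:
            fout.write(chunk)
    t.join()
```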






edited Mar 14 '17 at 13:14 by RamonRobben
answered Mar 13 '17 at 11:33 by Rüdiger
If you are copying across the WAN, I recommend NOT having the /J option enabled for large files, as your average copy time will increase significantly. The files I copied were anywhere from 500 MB to 23 GB.

On a 50 Mbps line, I averaged 43.5 Mbps (other traffic and overhead) while never going below 32 Mbps WITHOUT /J. With /J my average was around 25 Mbps; looking at perfmon, I could see large peaks and valleys at the bottom.

Hope this helps someone out.
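For scale, the reported figures work out as follows (simple arithmetic on the numbers in this answer):

```python
without_j = 43.5  # average Mbps without /J (reported above)
with_j = 25.0     # average Mbps with /J (reported above)

# Lower throughput means proportionally longer copy times.
slowdown_factor = without_j / with_j                 # how much longer /J took
throughput_drop = (without_j - with_j) / without_j   # fractional loss

print(f"{slowdown_factor:.2f}x longer, {throughput_drop:.0%} lower throughput")
# prints: 1.74x longer, 43% lower throughput
```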






answered Oct 9 '17 at 8:32 by user778642





























