Is there a downside to using flock in cron jobs?












I use several cron jobs. For the ones that run frequently, I use flock to prevent duplicate instances running. It seems to make sense to use flock on every job, irrespective of frequency, but is there any downside to doing that?



I am 100% Linux with Mint, Raspbian and Ubuntu server.
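For reference, a typical crontab entry looks something like this (the lock file path and script name are just placeholders):

    # Wait for any running instance to finish before starting a new one:
    */5 * * * * flock /tmp/myjob.lock /usr/local/bin/myjob.sh

    # Or skip this run entirely if the previous one still holds the lock:
    */5 * * * * flock -n /tmp/myjob.lock /usr/local/bin/myjob.sh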










linux cron






edited Jan 8 at 19:52 by JakeGould
asked Jan 8 at 19:11 by Mick Sulley













  • Please do not crosspost. See Is cross-posting a question on multiple Stack Exchange sites permitted if the question is on-topic for each site?

    – DavidPostill
    Jan 8 at 20:06



















1 Answer














The only consistent downside is the extra overhead of using flock. Aside from the obvious cost of opening a file and locking it, there is also another process involved (or at least an extra executable and a call to exec() if you're using the --no-fork option), and there's some extra overhead in cleanup (the OS has to release the lock when it automatically closes the file).
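For illustration, the --no-fork variant mentioned here looks roughly like this in a crontab (placeholder paths; the option needs a reasonably recent util-linux flock):

    # flock exec()s the command directly instead of forking a child,
    # so no separate flock process stays around while the job runs:
    */5 * * * * flock --no-fork /tmp/myjob.lock /usr/local/bin/myjob.sh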



There are also a couple of other, more situation-specific downsides to locking cron jobs like this (this is not an exhaustive list):




  • If you need exclusive locks, you need a writable filesystem path; otherwise the flock command will always fail. This means that:


    • If you're not careful, a filesystem error can completely stop your cron jobs from running (if it causes the path you use for locks to get remounted read-only).

    • On some tightly secured devices, you may have to give some extra permissions to the cron jobs so that they can run.



  • In some cases, you actually want the new instance of the cron job to be the one that continues, not the old one. The best example I can give is a high-frequency cron job that runs every few minutes to synchronize files to another system, where waiting for the previous instance to finish may delay the most recent updates by an arbitrarily long time. If, instead, the cron job kills any old copies of itself when it starts, it still makes progress, and recent changes are more likely to be propagated quickly (see the sketch below).
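For that last case, a minimal sketch of the "newest instance wins" approach might look like this (the script name, PID file path and rsync destination are hypothetical, and it ignores edge cases such as PID reuse; it is one possible implementation, not necessarily what the answer has in mind):

    #!/bin/sh
    # sync-job.sh -- sketch: the newest instance kills any older one and carries on

    PIDFILE=/var/run/sync-job.pid

    # If a previous instance is still running, terminate it so this run proceeds.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        kill "$(cat "$PIDFILE")"
    fi

    # Record our own PID so the next run can find us.
    echo $$ > "$PIDFILE"

    # ... do the actual work, e.g. push recent changes to the other system ...
    rsync -a /srv/data/ backup-host:/srv/data/

    rm -f "$PIDFILE"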






answered Jan 8 at 19:47 by Austin Hemmelgarn





























