Creating a large file of random bytes quickly



























I want to create a large file (~10 GB) filled with zeros and random values. I have tried using:


dd if=/dev/urandom of=10Gfile bs=5G count=10


It creates a file of about 2 GB and exits with exit status 0. I fail to understand why.


I also tried creating the file using:


head -c 10G </dev/urandom >myfile


but it takes about 28-30 minutes to create. I want it created faster. Does anyone have a solution?


I also wish to create multiple files with the same (pseudo)random pattern, for comparison. Does anyone know a way to do that? Thanks.
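(For context on the first command: GNU dd performs a single read() per block, and a read from /dev/urandom may return fewer bytes than requested; plain dd then writes the short block and still counts it, which is why the file comes out far smaller than bs*count. The sketch below is a scaled-down demonstration of the usual fix, iflag=fullblock; the file name small.bin and the 4 MiB size are made up for the demo.)

```shell
# Scaled-down demo: iflag=fullblock makes GNU dd keep reading until
# each block is actually full, so the output reaches the requested
# size. For the original goal, something like bs=64M count=160 would
# give 10 GiB.
dd if=/dev/urandom of=small.bin bs=1M count=4 iflag=fullblock 2>/dev/null
wc -c < small.bin   # 4194304
```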



































  • If it's important that the files contain random numbers, that should be part of the title! What means "filled with zeros and random values."?

    – Volker Siegel
    Aug 4 '14 at 22:20


















unix dd random-number-generator head






edited Oct 16 '14 at 11:38 by Der Hochstapler

asked Aug 4 '14 at 21:26 by skane


4 Answers






































I've seen a pretty neat trick at commandlinefu: use /dev/urandom as a source of randomness (it is a good source), and then use that as the password for an AES stream cipher.



I can't say with 100% certainty, but I do believe that if you change the parameters (i.e. use far more than just 128 bytes from /dev/urandom), it is at least close enough to a cryptographically secure PRNG for all practical purposes:




This command generates a pseudo-random data stream using aes-256-ctr
with a seed set by /dev/urandom. Redirect to a block device for secure
data scrambling.




openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin


How does this work?



openssl enc -aes-256-ctr will use openssl to encrypt zeroes with AES-256 in CTR mode.





  • What will it encrypt?



    /dev/zero




  • What is the password it will use to encrypt it?



    dd if=/dev/urandom bs=128 count=1 | base64



    That is one block of 128 bytes of /dev/urandom encoded in base64 (the 2>/dev/null redirect hides dd's transfer statistics).




  • I'm actually not sure why -nosalt is being used, since OpenSSL's man page states the following:



    -salt
    use a salt in the key derivation routines. This is the default.

    -nosalt
    don't use a salt in the key derivation routines. This option SHOULD NOT be used except for test purposes or compatibility with ancient versions of OpenSSL and SSLeay.


    Perhaps the point is to make this run as fast as possible, and the use of salts would be unjustified, but I'm not sure whether this would leave any kind of pattern in the ciphertext. The folks at the Cryptography Stack Exchange may be able to give us a more thorough explanation on that.



  • The input is /dev/zero. This is because it really doesn't matter what is being encrypted - the output will be something resembling random data. Zeros are fast to get, and you can get (and encrypt) as much as you want without running out of them.


  • The output is randomfile.bin. It could also be /dev/sdz and you would randomize a full block device.



But I want to create a file with a fixed size! How do I do that?



Simple!



dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock


Just wrap that command in dd with a fixed blocksize (1 MB here) and count. The file size will be blocksize * count = 1M * 100 = 100M.
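The question also asked for multiple files with the same pseudo-random pattern. With this approach that falls out naturally: a fixed passphrase (the seed string "demo-seed" and the file names below are made up for the example) yields the same AES-CTR keystream on every run, so two files generated from it are byte-identical. A minimal sketch, noting that the derived key depends on OpenSSL's key-derivation defaults, so files are reproducible on one machine and OpenSSL version, not necessarily across releases:

```shell
# Deterministic random-looking files from a fixed passphrase.
gen_random() {  # usage: gen_random <seed> <bytes> <outfile>
  openssl enc -aes-256-ctr -pass pass:"$1" -nosalt </dev/zero 2>/dev/null \
    | head -c "$2" > "$3"
}
gen_random demo-seed 1048576 a.bin
gen_random demo-seed 1048576 b.bin
cmp -s a.bin b.bin && echo "identical"
```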






answered Aug 6 '14 at 1:39 by Valmiky Arquissandas, edited Aug 6 '14 at 22:46


























  • I was able to generate the file quickly, but it doesn't stop without Ctrl+C. Is there any way of specifying a file size? Also, I didn't understand the "-nosalt < /dev/zero" part. From what I found online, it gives an initialization vector; so in this case is the IV /dev/zero? Also, if I want to generate another file with the same contents, is that possible?

    – skane
    Aug 6 '14 at 18:47













  • It should stop by itself with a "disk full" warning. I'm updating the post to explain how it works.

    – Valmiky Arquissandas
    Aug 6 '14 at 18:52











  • I appreciate it. I'm able to understand clearly now, thank you! The only issue is generating an output file of a particular size. I need a bunch of them to transfer and check timings and such; I can't have it running until the "disk full" warning.

    – skane
    Aug 6 '14 at 19:33













  • In that case, you can also use dd. The following line will create a 100 MB file with random data (count * blocksize = 100 * 1M): dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100

    – Valmiky Arquissandas
    Aug 6 '14 at 22:36











  • Sorry, the above line doesn't work because dd can't handle the input file being a stream. To make it accumulate the input blocks, you need to add the option iflag=fullblock to the outer dd, like this: dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock. I am adding this to the answer.

    – Valmiky Arquissandas
    Aug 6 '14 at 22:43

































There is a random number generator program, sharand, which writes random bytes to a file. (The program was originally called sharnd, with one "a" fewer; see http://mattmahoney.net/dc/.)



It takes roughly one third of the time compared to reading /dev/urandom.



It's a secure RNG - there are faster but insecure RNGs, and those are normally not what's needed.

To be really fast, look at the collection of RNG algorithms for Perl: libstring-random-perl.




Let's give it a try (apt-get install sharand):



$ time sharand a 1000000000                      
sharand a 1000000000 21.72s user 0.34s system 99% cpu 22.087 total

$ time head -c 1000000000 /dev/urandom > urand.out
head -c 1000000000 /dev/urandom > urand.out 0.13s user 61.22s system 99% cpu 1:01.41 total


And the resulting files (they do look properly random from the inside):



$ ls -l
-rw-rw-r-- 1 siegel siegel 1000000000 Aug 5 03:02 sharand.out
-rw-rw-r-- 1 siegel siegel 1000000000 Aug 5 03:11 urand.out




Comparing the 'total' time values, sharand took only a third of the time needed by the urandom method to create a little less than 1 GB of random bytes:



sharand: 22s total
urandom: 61s total
































  • I am using CentOS 6.5 and sharand is not available. I tried installing it using yum install sharand, which gives me "No package sharand available." Also, if I run "time sharand a 1000000000", it says: command not found

    – skane
    Aug 5 '14 at 20:36













  • Oh, I just found that the program was originally not called sharand, as in Ubuntu, but sharnd, with one "a" less. So that may just be a different package name. Looking at the home page of the software, it seems he does not provide any packages, only the source; but most such tools are just an algorithm in a single .c file, and very simple to build. If you cannot find a package with the original name either, we'll find another way. (Sources and math papers here: mattmahoney.net/dc)

    – Volker Siegel
    Aug 6 '14 at 1:16











  • Unfortunately, I am unable to figure out how to work with sharnd either. The openssl method is working fine for now; the only issue is how to specify the output file size.

    – skane
    Aug 6 '14 at 19:36



































I'm getting good speeds using the shred utility.




  • 2G with dd if=/dev/urandom - 250sec

  • 2G with openssl rand - 81sec

  • 2G with shred - 39sec


So I expect about 3-4 minutes for 10G with shred.





Create an empty file and shred it by passing the desired file size.



touch file
shred -n 1 -s 10G file


I'm not sure how cryptographically secure the generated data is, but it looks random. Here's some info on that.
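If you need several such files, shred can be driven from a simple loop; note that shred's output differs on every run, so this gives same-size, not same-content, files. A sketch (the file names testN.bin, the count of three, and the 1 MiB size are made up for the example):

```shell
# Create three 1 MiB files of shred-generated pseudo-random data.
for i in 1 2 3; do
  f="test$i.bin"
  touch "$f"              # shred needs an existing file to overwrite
  shred -n 1 -s 1M "$f"   # one pass of pseudo-random data, 1 MiB
done
ls -l test?.bin
```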



























  • +1 for introducing me to shred <3. So useful. I used to loop dd.

    – aggregate1166877
    Apr 17 '18 at 21:50

































You want a special file in Linux: /dev/random serves as a random number generator on a Linux system. /dev/random will eventually block unless your system has a lot of activity, while /dev/urandom is non-blocking. We don't want blocking while creating our files, so we use /dev/urandom.





try this command:



dd if=/dev/urandom bs=1024 count=1000000 of=file_1GB conv=notrunc


This will create a file of bs * count random bytes, in our case 1024 * 1000000 bytes, i.e. about 1 GB.
The file will not contain anything readable, but there will be some newlines in it.



xKon@xK0n-ubuntu-vm:~/tmp$ dd if=/dev/urandom of=file.txt bs=1048576 count=100 conv=notrunc
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 13.4593 s, 7.8 MB/s
xKon@xK0n-ubuntu-vm:~/tmp$ wc -l file.txt
410102 file.txt


You can use dd's seek option to create a file of the right size almost instantly; note that this produces a sparse file with no random data actually written:



$ dd if=/dev/zero of=1g.img bs=1 count=0 seek=1G
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 8.12307 s, 132 MB/s
$ ls -lh t
-rw-rw-r-- 1 xK0n xK0n 1.1G 2014-08-05 11:43 t


The disadvantages of the /dev/urandom method are that the file does not contain anything readable and that it is quite a bit slower than the /dev/zero method (around 10 seconds per 100 MB, as seen above).
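As a side note on the seek trick above: the file it creates is sparse - it has the requested apparent size, but occupies almost no disk blocks until data is written. A quick way to see the difference (sparse.img is a throwaway name for this demo):

```shell
# Sparse file: apparent size vs actually allocated blocks.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=100M 2>/dev/null
ls -l sparse.img | awk '{print $5}'   # apparent size: 104857600
du -k sparse.img | awk '{print $1}'   # allocated KiB: close to 0
```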





You may also like the fallocate command, which preallocates space to a file.



fallocate -l 1G test.img


output




-rw-r--r--. 1 xK0n xK0n 1.0G Aug 05 11:43 test.img






























    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "3"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsuperuser.com%2fquestions%2f792427%2fcreating-a-large-file-of-random-bytes-quickly%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    13














    I've seen a pretty neat trick at commandlinefu: use /dev/urandom as a source of randomness (it is a good source), and then using that as a password to an AES stream cipher.



    I can't tell you with 100% sure, but I do believe that if you change the parameters (i.e. use way more than just 128 bytes from /dev/urandom), it is at least close enough to a cryptographically secure PRNG, for all practical purposes:




    This command generates a pseudo-random data stream using aes-256-ctr
    with a seed set by /dev/urandom. Redirect to a block device for secure
    data scrambling.




    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin


    How does this work?



    openssl enc -aes-256-ctr will use openssl to encrypt zeroes with AES-256 in CTR mode.





    • What will it encrypt?



      /dev/zero




    • What is the password it will use to encrypt it?



      dd if=/dev/urandom bs=128 count=1 | base64



      That is one block of 128 bytes of /dev/urandom encoded in base64 (the redirect to /dev/null is to ignore errors).




    • I'm actually not sure why -nosalt is being used, since OpenSSL's man page states the following:



      -salt
      use a salt in the key derivation routines. This is the default.

      -nosalt
      don't use a salt in the key derivation routines. This option SHOULD NOT be used except for test purposes or compatibility with ancient versions of OpenSSL and SSLeay.


      Perhaps the point is to make this run as fast as possible, and the use of salts would be unjustified, but I'm not sure whether this would leave any kind of pattern in the ciphertext. The folks at the Cryptography Stack Exchange may be able to give us a more thorough explanation on that.



    • The input is /dev/zero. This is because it really doesn't matter what is being encrypted - the output will be something resembling random data. Zeros are fast to get, and you can get (and encrypt) as much as you want without running out of them.


    • The output is randomfile.bin. It could also be /dev/sdz and you would randomize a full block device.



    But I want to create a file with a fixed size! How do I do that?



    Simple!



    dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock


    Just dd that command with a fixed blocksize (which is 1 MB here) and count. The file size will be blocksize * count = 1M * 100 = 100M.






    share|improve this answer


























    • I was able to generate file quickly but it doesn't stop w/o Ctrl+C.Is there any way of giving out a file size? Also i didn't understand the "-nosalt < /dev/zero >" part.Frm wht i found ol it gives a initialization vector. So in this case is the IV /dev/zero? Also if i want to generate another file with same contents is that possible?

      – skane
      Aug 6 '14 at 18:47













    • It should stop by itself with a "disk full" warning. I'm updating the post to explain how it works.

      – Valmiky Arquissandas
      Aug 6 '14 at 18:52











    • I appreciate it.Able to understand clearly now, Thank you! The only issue is generating a output file of a particular size.I need a bunch of them to transfer and check timings n stuff.Can't have it running till the "disk full" warning

      – skane
      Aug 6 '14 at 19:33













    • In that case, you can also use dd. The following line will create a 100 MB file with random data (count * blocksize = 100 * 1M): dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100

      – Valmiky Arquissandas
      Aug 6 '14 at 22:36











    • Sorry, the above line doesn't work because dd can't handle the input file being a stream. To make it accumulate the input blocks, you need to add the option iflag=fullblock to the outer dd, like this: dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock. I am adding this to the answer.

      – Valmiky Arquissandas
      Aug 6 '14 at 22:43
















    13














    I've seen a pretty neat trick at commandlinefu: use /dev/urandom as a source of randomness (it is a good source), and then using that as a password to an AES stream cipher.



    I can't tell you with 100% sure, but I do believe that if you change the parameters (i.e. use way more than just 128 bytes from /dev/urandom), it is at least close enough to a cryptographically secure PRNG, for all practical purposes:




    This command generates a pseudo-random data stream using aes-256-ctr
    with a seed set by /dev/urandom. Redirect to a block device for secure
    data scrambling.




    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin


    How does this work?



    openssl enc -aes-256-ctr will use openssl to encrypt zeroes with AES-256 in CTR mode.





    • What will it encrypt?



      /dev/zero




    • What is the password it will use to encrypt it?



      dd if=/dev/urandom bs=128 count=1 | base64



      That is one block of 128 bytes of /dev/urandom encoded in base64 (the redirect to /dev/null is to ignore errors).




    • I'm actually not sure why -nosalt is being used, since OpenSSL's man page states the following:



      -salt
      use a salt in the key derivation routines. This is the default.

      -nosalt
      don't use a salt in the key derivation routines. This option SHOULD NOT be used except for test purposes or compatibility with ancient versions of OpenSSL and SSLeay.


      Perhaps the point is to make this run as fast as possible, and the use of salts would be unjustified, but I'm not sure whether this would leave any kind of pattern in the ciphertext. The folks at the Cryptography Stack Exchange may be able to give us a more thorough explanation on that.



    • The input is /dev/zero. This is because it really doesn't matter what is being encrypted - the output will be something resembling random data. Zeros are fast to get, and you can get (and encrypt) as much as you want without running out of them.


    • The output is randomfile.bin. It could also be /dev/sdz and you would randomize a full block device.



    But I want to create a file with a fixed size! How do I do that?



    Simple!



    dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock


    Just dd that command with a fixed blocksize (which is 1 MB here) and count. The file size will be blocksize * count = 1M * 100 = 100M.






    share|improve this answer


























    • I was able to generate file quickly but it doesn't stop w/o Ctrl+C.Is there any way of giving out a file size? Also i didn't understand the "-nosalt < /dev/zero >" part.Frm wht i found ol it gives a initialization vector. So in this case is the IV /dev/zero? Also if i want to generate another file with same contents is that possible?

      – skane
      Aug 6 '14 at 18:47













    • It should stop by itself with a "disk full" warning. I'm updating the post to explain how it works.

      – Valmiky Arquissandas
      Aug 6 '14 at 18:52











    • I appreciate it.Able to understand clearly now, Thank you! The only issue is generating a output file of a particular size.I need a bunch of them to transfer and check timings n stuff.Can't have it running till the "disk full" warning

      – skane
      Aug 6 '14 at 19:33













    • In that case, you can also use dd. The following line will create a 100 MB file with random data (count * blocksize = 100 * 1M): dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100

      – Valmiky Arquissandas
      Aug 6 '14 at 22:36











    • Sorry, the above line doesn't work because dd can't handle the input file being a stream. To make it accumulate the input blocks, you need to add the option iflag=fullblock to the outer dd, like this: dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock. I am adding this to the answer.

      – Valmiky Arquissandas
      Aug 6 '14 at 22:43














    13












    13








    13







    I've seen a pretty neat trick at commandlinefu: use /dev/urandom as a source of randomness (it is a good source), and then using that as a password to an AES stream cipher.



    I can't tell you with 100% sure, but I do believe that if you change the parameters (i.e. use way more than just 128 bytes from /dev/urandom), it is at least close enough to a cryptographically secure PRNG, for all practical purposes:




    This command generates a pseudo-random data stream using aes-256-ctr
    with a seed set by /dev/urandom. Redirect to a block device for secure
    data scrambling.




    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin


    How does this work?



    openssl enc -aes-256-ctr will use openssl to encrypt zeroes with AES-256 in CTR mode.





    • What will it encrypt?



      /dev/zero




    • What is the password it will use to encrypt it?



      dd if=/dev/urandom bs=128 count=1 | base64



      That is one block of 128 bytes of /dev/urandom encoded in base64 (the redirect to /dev/null is to ignore errors).




    • I'm actually not sure why -nosalt is being used, since OpenSSL's man page states the following:



      -salt
      use a salt in the key derivation routines. This is the default.

      -nosalt
      don't use a salt in the key derivation routines. This option SHOULD NOT be used except for test purposes or compatibility with ancient versions of OpenSSL and SSLeay.


      Perhaps the point is to make this run as fast as possible, and the use of salts would be unjustified, but I'm not sure whether this would leave any kind of pattern in the ciphertext. The folks at the Cryptography Stack Exchange may be able to give us a more thorough explanation on that.



    • The input is /dev/zero. This is because it really doesn't matter what is being encrypted - the output will be something resembling random data. Zeros are fast to get, and you can get (and encrypt) as much as you want without running out of them.


    • The output is randomfile.bin. It could also be /dev/sdz and you would randomize a full block device.



    But I want to create a file with a fixed size! How do I do that?



    Simple!



    dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock


    Just dd that command with a fixed blocksize (which is 1 MB here) and count. The file size will be blocksize * count = 1M * 100 = 100M.






    share|improve this answer















    I've seen a pretty neat trick at commandlinefu: use /dev/urandom as a source of randomness (it is a good source), and then using that as a password to an AES stream cipher.



    I can't tell you with 100% sure, but I do believe that if you change the parameters (i.e. use way more than just 128 bytes from /dev/urandom), it is at least close enough to a cryptographically secure PRNG, for all practical purposes:




    This command generates a pseudo-random data stream using aes-256-ctr
    with a seed set by /dev/urandom. Redirect to a block device for secure
    data scrambling.




    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin


    How does this work?



    openssl enc -aes-256-ctr will use openssl to encrypt zeroes with AES-256 in CTR mode.





    • What will it encrypt?



      /dev/zero




    • What is the password it will use to encrypt it?



      dd if=/dev/urandom bs=128 count=1 | base64



      That is one block of 128 bytes of /dev/urandom encoded in base64 (the redirect to /dev/null is to ignore errors).




    • I'm actually not sure why -nosalt is being used, since OpenSSL's man page states the following:



      -salt
      use a salt in the key derivation routines. This is the default.

      -nosalt
      don't use a salt in the key derivation routines. This option SHOULD NOT be used except for test purposes or compatibility with ancient versions of OpenSSL and SSLeay.


      Perhaps the point is to make this run as fast as possible, and the use of salts would be unjustified, but I'm not sure whether this would leave any kind of pattern in the ciphertext. The folks at the Cryptography Stack Exchange may be able to give us a more thorough explanation on that.



    • The input is /dev/zero. This is because it really doesn't matter what is being encrypted - the output will be something resembling random data. Zeros are fast to get, and you can get (and encrypt) as much as you want without running out of them.


    • The output is randomfile.bin. It could also be /dev/sdz and you would randomize a full block device.



    But I want to create a file with a fixed size! How do I do that?



    Simple!



    dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock


    Just dd that command with a fixed blocksize (which is 1 MB here) and count. The file size will be blocksize * count = 1M * 100 = 100M.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Aug 6 '14 at 22:46

























    answered Aug 6 '14 at 1:39









    Valmiky ArquissandasValmiky Arquissandas

    1,671922




    1,671922













    • I was able to generate file quickly but it doesn't stop w/o Ctrl+C.Is there any way of giving out a file size? Also i didn't understand the "-nosalt < /dev/zero >" part.Frm wht i found ol it gives a initialization vector. So in this case is the IV /dev/zero? Also if i want to generate another file with same contents is that possible?

      – skane
      Aug 6 '14 at 18:47













    • It should stop by itself with a "disk full" warning. I'm updating the post to explain how it works.

      – Valmiky Arquissandas
      Aug 6 '14 at 18:52











    • I appreciate it.Able to understand clearly now, Thank you! The only issue is generating a output file of a particular size.I need a bunch of them to transfer and check timings n stuff.Can't have it running till the "disk full" warning

      – skane
      Aug 6 '14 at 19:33













    • In that case, you can also use dd. The following line will create a 100 MB file with random data (count * blocksize = 100 * 1M): dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100

      – Valmiky Arquissandas
      Aug 6 '14 at 22:36











    • Sorry, the above line doesn't work because dd can't handle the input file being a stream. To make it accumulate the input blocks, you need to add the option iflag=fullblock to the outer dd, like this: dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock. I am adding this to the answer.

      – Valmiky Arquissandas
      Aug 6 '14 at 22:43



















    • I was able to generate file quickly but it doesn't stop w/o Ctrl+C.Is there any way of giving out a file size? Also i didn't understand the "-nosalt < /dev/zero >" part.Frm wht i found ol it gives a initialization vector. So in this case is the IV /dev/zero? Also if i want to generate another file with same contents is that possible?

      – skane
      Aug 6 '14 at 18:47













    • It should stop by itself with a "disk full" warning. I'm updating the post to explain how it works.

      – Valmiky Arquissandas
      Aug 6 '14 at 18:52











    • I appreciate it.Able to understand clearly now, Thank you! The only issue is generating a output file of a particular size.I need a bunch of them to transfer and check timings n stuff.Can't have it running till the "disk full" warning

      – skane
      Aug 6 '14 at 19:33













    • In that case, you can also use dd. The following line will create a 100 MB file with random data (count * blocksize = 100 * 1M): dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100

      – Valmiky Arquissandas
      Aug 6 '14 at 22:36











    • Sorry, the above line doesn't work because dd can't handle the input file being a stream. To make it accumulate the input blocks, you need to add the option iflag=fullblock to the outer dd, like this: dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=filename bs=1M count=100 iflag=fullblock. I am adding this to the answer.

      – Valmiky Arquissandas
      Aug 6 '14 at 22:43
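To address the reproducibility question raised in this thread: the AES-CTR keystream trick is deterministic once the passphrase is fixed, so two runs produce byte-identical files. A minimal sketch (the passphrase and filenames are made up for the demo; a fixed, known passphrase is of course not secure):

```shell
# Generate two 16 MiB files from the same AES-256-CTR keystream.
# With -nosalt and a fixed passphrase, the key/IV derivation is
# deterministic, so both files end up byte-for-byte identical.
pass='demo-fixed-passphrase'   # hypothetical demo value; NOT secure
for f in rand1.bin rand2.bin; do
  openssl enc -aes-256-ctr -pass "pass:$pass" -nosalt </dev/zero 2>/dev/null |
    head -c $((16 * 1024 * 1024)) > "$f"
done
cmp rand1.bin rand2.bin && echo "identical"
```

One caveat: determinism holds on a given openssl build; the default key-derivation digest changed between releases, so files generated on different systems may differ.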

















    5














    There is a random number generator program, sharand; it writes random bytes to a file. (The program was originally called sharnd, with one letter 'a' fewer; see http://mattmahoney.net/dc/.)



    It takes roughly one third of the time compared to reading /dev/urandom.



    It's a secure RNG; there are faster but insecure RNGs, though that's not normally what's needed.

    To be really fast, look at the collection of RNG algorithms for Perl: libstring-random-perl.





    Let's give it a try (apt-get install sharand):



    $ time sharand a 1000000000                      
    sharand a 1000000000 21.72s user 0.34s system 99% cpu 22.087 total

    $ time head -c 1000000000 /dev/urandom > urand.out
    head -c 1000000000 /dev/urandom > urand.out 0.13s user 61.22s system 99% cpu 1:01.41 total


    And the resulting files (they do look random from the inside):



    $ ls -l
    -rw-rw-r-- 1 siegel siegel 1000000000 Aug 5 03:02 sharand.out
    -rw-rw-r-- 1 siegel siegel 1000000000 Aug 5 03:11 urand.out




    Comparing the 'total' time values, sharand took only a third of the time needed by the urandom method to create a little less than a GB of random bytes:



    sharand: 22s total
    urandom: 61s total
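Benchmarks like this are machine- and kernel-dependent (/dev/urandom throughput has improved a lot in newer kernels), so it is worth re-timing on your own box. A small sketch using a 100 MB sample instead of a full GB (the output filename is just for the demo):

```shell
# Time how long /dev/urandom takes to produce 100 MB on this machine.
start=$(date +%s)
head -c 100000000 /dev/urandom > urand.out
end=$(date +%s)
echo "took $((end - start))s for 100 MB"
ls -l urand.out
```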
































    • I am using CentOS 6.5 and sharand is not available. I tried installing it using yum install sharand; it gives me "No package sharand available." Also, if I run "time sharand a 1000000000", it says: command not found

      – skane
      Aug 5 '14 at 20:36













    • Oh, I just found that the program originally was not called sharand, as in Ubuntu, but sharnd, with one "a" less. So that may be just a different package name. Looking at the home page of the software, it seems he does not provide any packages except the source; but most tools are just an algorithm in a single .c file, and very simple to build. If you cannot find a package with the original name either, we'll find another way. (Sources and math papers here: mattmahoney.net/dc)

      – Volker Siegel
      Aug 6 '14 at 1:16











    • Unfortunately, I am unable to figure out how to work with sharnd either. The method below with openssl is working fine for now; the only issue is how to specify the output file size.

      – skane
      Aug 6 '14 at 19:36


















    answered Aug 5 '14 at 1:57, edited Aug 6 '14 at 1:22 – Volker Siegel
    3














    I'm getting good speeds using the shred utility.




    • 2G with dd if=/dev/urandom - 250sec

    • 2G with openssl rand - 81sec

    • 2G with shred - 39sec


    So I expect about 3-4 minutes for 10G with shred.





    Create an empty file and shred it by passing the desired file size.



    touch file
    shred -n 1 -s 10G file


    I'm not sure how cryptographically secure the generated data is, but it looks random. Here's some info on that.
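If, as in the question, you need several same-size files for transfer-timing tests, shred can be wrapped in a loop. A sketch (filenames and sizes are just for illustration; note that shred's output is not reproducible, so each file gets different bytes):

```shell
# Create three 64 MiB files of pseudorandom data with shred.
for i in 1 2 3; do
  f="testfile_$i.bin"
  : > "$f"                  # create/truncate the file
  shred -n 1 -s 64M "$f"    # one pass of random data, exactly 64 MiB
done
ls -l testfile_*.bin
```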



























    • 3





      +1 for introducing me to shred <3. So useful. I used to loop dd.

      – aggregate1166877
      Apr 17 '18 at 21:50
















    answered Nov 8 '17 at 14:38, edited Jan 14 at 8:13 – lyuboslav kanev
    2














    Linux provides special files for this: /dev/random serves as a random number generator. /dev/random will eventually block unless your system has a lot of activity; /dev/urandom is non-blocking. We don't want blocking while creating our files, so we use /dev/urandom.





    Try this command:



    dd if=/dev/urandom bs=1024 count=1000000 of=file_1GB conv=notrunc


    This will create a file of bs*count random bytes; in our case 1024*1000000 ≈ 1 GB.
    The file will not contain anything readable, but there will be some newlines in it.



    xKon@xK0n-ubuntu-vm:~/tmp$ dd if=/dev/urandom of=file.txt bs=1048576 count=100 conv=notrunc
    100+0 records in
    100+0 records out
    104857600 bytes (105 MB) copied, 13.4593 s, 7.8 MB/s
    xKon@xK0n-ubuntu-vm:~/tmp$ wc -l file.txt
    410102 file.txt


    You can use dd's seek option to create a large file almost instantly; note that this produces a sparse file containing only zeros, not random data:



    $ dd if=/dev/zero of=1g.img bs=1 count=0 seek=1G
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB) copied, 8.12307 s, 132 MB/s
    $ ls -lh t
    -rw-rw-r-- 1 xK0n xK0n 1.1G 2014-08-05 11:43 t


    The disadvantages of the urandom method: the file contains nothing readable, and it is quite a bit slower than the /dev/zero method (around 10 seconds per 100 MB).
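Incidentally, the seek trick shown above creates a sparse file: you can confirm with du that its apparent size far exceeds its actual disk usage. A quick sketch, assuming a filesystem that supports sparse files:

```shell
# The seek trick yields a sparse file: apparent size 1 GiB, ~0 disk blocks.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G 2>/dev/null
ls -lh sparse.img   # apparent size: 1.0G
du -h sparse.img    # actual usage: ~0
```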





    You may also like the fallocate command, which preallocates space to a file.



    fallocate -l 1G test.img


    output




    -rw-r--r--. 1 xK0n xK0n 1.0G Aug 05 11:43 test.img







    answered Aug 5 '14 at 6:17 – xxbinxx