How is it that a file compression program can use up more RAM than the uncompressed file it is compressing?



























I was compressing a 120 MB set of files with the best compression that 7z offers and noticed that it was consuming nearly 600 MB of RAM at peak.



Why do these compression programs use so much RAM, even when working with relatively small data sets, to the point of consuming several times more memory than the uncompressed size of the data set?



Just curious; I'm more interested in the technical side of it.
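
(For reference, the effect is easy to reproduce with Python's built-in lzma module, which implements the same LZMA algorithm 7z uses. A minimal sketch, assuming a Unix system; on Linux, resource.getrusage reports peak memory in KiB:)

    import lzma
    import os
    import resource  # Unix-only; reports this process's peak memory

    # Stand-in for a 120 MB data set (random bytes won't compress well,
    # but the encoder's memory use is what we are measuring here).
    data = os.urandom(120 * 1024 * 1024)

    # preset=9 corresponds to 7z's ultra level (64 MB dictionary).
    compressed = lzma.compress(data, preset=9)

    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"input {len(data) >> 20} MB -> output {len(compressed) >> 20} MB, "
          f"peak RSS ~{peak >> 10} MB")  # ru_maxrss is KiB on Linux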










Tags: compression

asked Oct 13 '10 at 14:22 by Faken; edited Oct 13 '10 at 14:58 by Dennis Williamson






















2 Answers






































I've never been into compression technically, but let's start searching...



The 7z help file mentions:




    LZMA is an algorithm based on Lempel-Ziv algorithm. It provides very fast decompression (about 10-20 times faster than compression). Memory requirements for compression and decompression also are different (see d={Size}[b|k|m] switch for details).




(Note that the Lempel-Ziv article on Wikipedia does not mention anything about memory requirements.)




    d={Size}[b|k|m] Sets Dictionary size for LZMA. You must specify the size in bytes, kilobytes, or megabytes. The maximum value for dictionary size is 1 GB = 2^30 bytes. Default values for LZMA are 24 (16 MB) in normal mode, 25 (32 MB) in maximum mode (-mx=7) and 26 (64 MB) in ultra mode (-mx=9). If you do not specify any symbol from the set [b|k|m], the dictionary size will be calculated as DictionarySize = 2^Size bytes. For decompressing a file compressed by LZMA method with dictionary size N, you need about N bytes of memory (RAM) available.
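
Python's lzma module exposes the same parameter, so the DictionarySize = 2^Size relationship can be tried directly. A minimal sketch; the three dict_size values mirror the normal/maximum/ultra defaults quoted above:

    import lzma

    sample = b"the quick brown fox jumps over the lazy dog " * 10000

    for exponent in (24, 25, 26):        # 7z defaults: normal, -mx=7, -mx=9
        dict_size = 2 ** exponent        # DictionarySize = 2^Size bytes
        filters = [{"id": lzma.FILTER_LZMA2, "preset": 9,
                    "dict_size": dict_size}]
        comp = lzma.LZMACompressor(format=lzma.FORMAT_XZ, filters=filters)
        out = comp.compress(sample) + comp.flush()
        print(f"dict_size 2^{exponent} ({dict_size >> 20} MB): "
              f"{len(sample)} -> {len(out)} bytes")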




Following Wikipedia further to the article about dictionary coders, it appears the algorithm works by comparing the data being compressed against a set of data in a "dictionary" that is itself built from the raw data being compressed.
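
To make that concrete, here is a toy LZ77-style encoder; a sketch of the general dictionary-coding technique, not what 7z actually does. The "dictionary" is simply the last window_size bytes already seen, and it has to stay in memory for the match search:

    def lz77_encode(data: bytes, window_size: int = 1 << 16):
        """Toy LZ77 encoder emitting (offset, length, next_byte) triples.

        The sliding window of recently seen bytes is the dictionary.
        Real compressors also build index structures over it (hash
        chains, binary trees), which is where the extra RAM goes.
        """
        i, out = 0, []
        while i < len(data):
            start = max(0, i - window_size)
            best_off = best_len = 0
            for j in range(start, i):    # naive longest-match search
                length = 0
                while (i + length < len(data) and length < 255 and
                       data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_off, best_len = i - j, length
            nxt = data[i + best_len] if i + best_len < len(data) else 0
            out.append((best_off, best_len, nxt))
            i += best_len + 1
        return out

    print(lz77_encode(b"abcabcabcabc"))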



Regardless of how this dictionary is built, it must be kept in memory, so the RAM requirement is a function of the dictionary. And since the dictionary isn't the raw data itself but some uncompressed data structure built over it, it can be bigger than the raw data being processed. Makes sense?






answered Oct 13 '10 at 14:46 by Martin
























• Read this; it can give you some clues: en.wikipedia.org/wiki/LZ77_and_LZ78 – LawrenceC, Jun 19 '12 at 12:48

































If the other answer is too challenging to read because of all the technical jargon, I offer my own answer.



A file is stored on a hard drive or solid-state drive. What is a file, you ask? A bunch of 1s and 0s arranged in a particular order so that it looks like a file from the outside. What is an executable program, a *.exe? Machine code, also a bunch of 1s and 0s, likewise stored on your disk drive. When you launch the file compression executable, its code gets loaded from the *.exe on the disk drive into RAM; only then can it run. The computer's CPU runs programs and reads/writes data, but it cannot get anything directly from the disk drive. Everything has to be loaded into RAM first, which acts as a middleman between the CPU and the disk drive where all your data is stored.



Now the file compression program is being run by the CPU from RAM. What do its instructions tell the CPU to do? To load the file itself from the disk drive into RAM so that the program can work with it. So now we have two things in RAM: the program itself, and the file.



You tell this file compression program to compress the file. It cannot magically just do that, however. To be compressed, a file has to be arranged in a certain order, as tightly as possible. Perhaps prior to compression the file was somewhat unorganized, like your file cabinet. The file compression program has to organize the file as neatly and tightly as possible, and to do this it has to temporarily put the file into an even more unorganized state in order to find where all the pieces belong.



Think about how you would compress your papers. You would first spread them all over your desk so that you can see all of them, then order them by category, and start putting the papers into folders.



So now we have three things in RAM:

1. The program instructions themselves.
2. The original file, which was loaded from the disk drive.
3. A temporary copy of the original file, which is in the process of being taken apart and put back together.

Multiple temporary copies of the whole file, or of parts of it, may be made in RAM to make it easier for the program to organize and compress the file. Do you now see how file compression programs can take up much more RAM while they are working than the size of the original file on the disk drive?



The amount of RAM used during this process depends on the skill of the programmer who designed the application. There are clever, efficient ways to write the code that minimize RAM consumption, and there are brute-force ways that accomplish the same task but run slower and take up more RAM. RAM can even be wasted if the program has a memory leak. Think of a memory leak as making multiple copies of the same data, then leaving them on the desk and never bothering to clean up after yourself.
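
As a toy illustration of that kind of leak (a sketch; real leaks in compiled programs come from forgetting to free buffers, which this Python cache merely imitates):

    import lzma

    _forgotten = []  # grows forever; nothing ever clears it

    def compress_chunk(chunk: bytes) -> bytes:
        _forgotten.append(bytes(chunk))  # a copy "left on the desk"
        return lzma.compress(chunk)      # RAM use climbs with every call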



Eventually, though, all the temporary copies are condensed into the compressed version of the file. It's still in RAM at that point, so the compressed version then has to be sent back to the hard disk drive, where it is saved permanently.
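
In code, the round trip described here, load from disk, work on it in RAM, write the result back, looks roughly like this (a sketch; "input.bin" is a placeholder file name):

    import lzma

    # 1. Load the file from the disk drive into RAM.
    with open("input.bin", "rb") as f:
        data = f.read()

    # 2. Work on it in RAM: the compressor keeps its own working
    #    structures alongside the original bytes, so peak memory
    #    exceeds the file's size on disk.
    compressed = lzma.compress(data, preset=9)

    # 3. Send the compressed version back to the disk drive, where
    #    it is saved permanently.
    with open("input.bin.xz", "wb") as f:
        f.write(compressed)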



The main idea is that to reach a state of low entropy, you may temporarily have to pass through a state of high entropy. This is, of course, put in the most general terms.



[Image: picture of the RAM inside a computer]






answered Jan 5 at 5:49 by Galaxy






















