How is graphics RAM different from system RAM?























I know that a GPU and a CPU are fundamentally different things and why they both suck at doing the other's job. But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.



As I understand it, they're both just different types of DRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon. The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops. Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory.



Why aren't we using the same kind of RAM for both? What makes them different?










memory graphics-card cpu






asked yesterday by Wes Sayeed












  • I do want to point out that in some cases the system RAM and graphics RAM are exactly the same. Typically found in lower end computers, the BIOS assigns an amount of the system's RAM to the GPU to use as graphics memory. This amount is typically 128 megabytes or less, which is more than enough for a graphical desktop.
    – Keltari
    yesterday










  • "what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards" - they're not. GDDR5 is basically DDR3 optimized for bandwidth (at the expense of latency); if it were up to me, GDDR5 would have been named GDDR3.
    – hanshenrik
    7 hours ago




















3 Answers














"But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards."




GDDR, while based on the DDR standard, is a separate hardware specification. If anything, DDR is technically ahead of GDDR, since each GDDR generation is derived from an existing DDR generation (most of the time; occasionally it builds on the previous GDDR generation instead).



One reason for the mistaken belief that GDDR is ahead of DDR is that several GDDR generations were all based on DDR3. The same was true of GDDR2, whose specification borrows design elements from both DDR and DDR2.




However, it is important to note that this GDDR2 memory used on graphics cards is not DDR2 per se, but rather an early midpoint between DDR and DDR2 technologies. Using "DDR2" to refer to GDDR2 is a colloquial misnomer.




Source: DDR2 SDRAM



Likewise, GDDR4 and GDDR5 both took design elements from DDR3; GDDR5 is simply an improved GDDR design compared to GDDR4.




Like its predecessor, GDDR4, GDDR5 is based on DDR3 SDRAM memory, which has double the data lines compared to DDR2 SDRAM. GDDR5 also uses 8-bit wide prefetch buffers similar to GDDR4 and DDR3 SDRAM.




Source: GDDR5 SDRAM
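As a rough illustration of how the 8-bit prefetch mentioned in that quote relates to the headline per-pin transfer rate, here is a small sketch; the clock figures are typical illustrative values, not numbers taken from this answer or its sources.

    # Sketch: per-pin data rate = internal array clock x prefetch depth.
    # The clock figures below are illustrative assumptions.

    def per_pin_rate_mtps(array_clock_mhz: float, prefetch_bits: int) -> float:
        """Per-pin transfer rate in MT/s (megatransfers per second)."""
        return array_clock_mhz * prefetch_bits

    # DDR3-1600: roughly a 200 MHz internal array clock with an 8n prefetch
    print(per_pin_rate_mtps(200, 8))   # 1600.0 MT/s

    # A 6 Gbps GDDR5 part: the same 8n prefetch driven from a faster array clock
    print(per_pin_rate_mtps(750, 8))   # 6000.0 MT/s

Both DDR3 and GDDR5 use the same 8n prefetch scheme; GDDR5 simply runs its arrays and I/O faster.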




"As I understand it, they're both just different types of SDRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon."




The two standards are actually vastly different; the number of bits that can be transferred per data line each cycle is one of those differences. GDDR is not compatible with the memory controllers in Intel and AMD x86 processors. GDDR can move more bits because it is soldered directly onto the graphics board and wired to the GPU over a dedicated, much wider bus than socketed system memory uses.




"The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops."




This is because GDDR4 is based on the DDR3 specification, not the DDR2 specification. The DDR3 standard wasn't ratified until 2005, and DDR3 products didn't appear until 2007 because the market's needs were entirely different. GDDR4 was announced in 2005 and likewise didn't appear in products until 2007. So while the names suggest different generations, the actual products were released at roughly the same time.




  • GDDR4 SDRAM

  • DDR3 SDRAM



"Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory."




The current GDDR standards are actually GDDR5X and GDDR6. HBM (High Bandwidth Memory) is not GDDR5 at all; it is a separate standard for stacked DRAM, developed by SK Hynix and AMD and also manufactured by Samsung, that connects to the GPU over an extremely wide interface.




"Why aren't we using the same kind of RAM for both?"




The two standards are not compatible with one another.




"What makes them different?"




What makes them different is their manufacturing process and their specifications. While GDDR is based on the DDR specification, GDDR is not actually ahead of DDR, although there are now huge performance gaps between the two standards because of the far greater bandwidth GDDR has access to.






answered yesterday by Ramhound (edited by psmears)











































The underlying technology is more or less the same; GPUs just leverage a much wider memory bus.



GPUs are easier to design this way, as a single unit where many memory chips are connected directly to the processing unit through a custom circuit board. This allows a very wide memory bus, often exceeding 256 bits. HBM takes this further with a 1024-bit bus.



CPUs rely on a much more generalized architecture of sockets and motherboard specifications, so anything beyond the standard two 64-bit channels is typically reserved for the high-end and server market.
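To see concretely what that bus width buys, here is a quick back-of-the-envelope sketch; the transfer rates are typical published figures chosen purely for illustration, not values from this answer.

    # Peak theoretical bandwidth = (bus width in bits / 8) * per-pin transfer rate.
    # The configurations below are illustrative assumptions.

    def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
        """Peak memory bandwidth in GB/s for a given interface."""
        return bus_width_bits / 8 * transfer_rate_gt_s

    # Dual-channel DDR4-3200 on a desktop CPU: 2 x 64-bit channels at 3.2 GT/s
    print(peak_bandwidth_gb_s(128, 3.2))    # ~51 GB/s

    # A graphics card with a 256-bit GDDR5 bus at 8 GT/s
    print(peak_bandwidth_gb_s(256, 8.0))    # 256 GB/s

    # A single HBM2 stack: a 1024-bit bus at a comparatively modest 2 GT/s
    print(peak_bandwidth_gb_s(1024, 2.0))   # 256 GB/s

Even at much lower per-pin speeds, the sheer width of the GPU's bus dwarfs what a two-channel desktop platform can deliver.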



It should also be mentioned that GPU memory is tuned to trade latency for high bandwidth - lots of shoveling and not a lot of seeking. This is not the case with CPU memory, where low latency is desired for good random-access performance.






answered yesterday by Robert






















  • Your last paragraph is, I think, the most important point: they're optimized for different things. Graphics cards need high bandwidth but aren't as concerned with latency, whereas CPUs need the best latency possible and bandwidth is a more secondary concern. There's no fundamental reason a CPU couldn't use GDDR or a GPU use regular DDR (indeed, many integrated graphics do), it's just that performance would be worse.
    – Nate Strickland
    yesterday










  • @NateStrickland CPUs do actually use GDDR as their memory on consoles. Specifically, the last two generations of consoles use GDDR as shared memory for both the CPU and the GPU.
    – creker
    5 hours ago































One special feature of some types of graphics RAM is that they can be accessed by two independent (or mostly independent) bus systems. That makes it easier to use them either as framebuffers (the portion of video RAM holding the pixels that are sent to the screen every 1/60th of a second or so) or as texture buffers, with fewer access conflicts and less overhead.
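To get a feel for why scan-out alone keeps the framebuffer busy, here is a rough calculation; the resolution, pixel format and refresh rate are assumptions chosen for illustration.

    # Scan-out bandwidth: every refresh, the whole framebuffer is read out
    # and sent to the display. The figures below are illustrative assumptions.

    def scanout_bandwidth_mb_s(width: int, height: int,
                               bytes_per_pixel: int, refresh_hz: int) -> float:
        """Sustained read bandwidth (MB/s) needed just to drive the display."""
        return width * height * bytes_per_pixel * refresh_hz / 1e6

    # A 1920x1080 display, 4 bytes per pixel (e.g. RGBA8), refreshed 60 times a second
    print(scanout_bandwidth_mb_s(1920, 1080, 4, 60))   # ~498 MB/s, continuously

A second, largely independent port for those reads means the display refresh doesn't have to compete with the GPU's rendering traffic.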






answered yesterday by rackandboneman




















