How to calculate the max write threads on a hard drive if X speed is desired?
I'm trying to establish the maximum number of write threads a hard drive can handle. For example, if the desired speed is 20 KB/sec per thread, how can I test the maximum number of simultaneous writes before the drive saturates and becomes slower? Let's assume the OS, filesystem, and application are not part of the bottleneck.



Each file being written is different for each user.



I did read "Achieve Maximum write speed on hard disk", posted by another user, but my question is different: that question asks how many files can be written per second, while mine asks how many simultaneous writes are possible at X KB/sec each.



I ran tests using HD Tune and CrystalDiskMark, but sadly I think these only cover single-threaded transfers, or else I don't know how to read the results and calculate from them.



Here's the result from CrystalDiskMark; I'm unsure whether it's helpful or not.



[screenshot: CrystalDiskMark results]



Question(s)




  • How can I test a hard drive and work out how many simultaneous disk writes it can handle while maintaining a minimum speed of 100 KB/sec per write? (A sketch of the kind of test I have in mind follows.)
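
To illustrate, here is a rough sketch of such a test (Python; the directory name, block size, thread counts, and tolerance are placeholders I made up, not a tested benchmark). Each thread writes its own file, paced to the target rate, and the thread count ramps up until the slowest writer can no longer keep up. I understand tools like fio can run a similar test with its numjobs and rate options.

    import os
    import threading
    import time

    TEST_DIR = "disk_test"      # placeholder output directory
    TARGET_KBPS = 20            # desired per-thread write rate (KB/sec)
    BLOCK_SIZE = 4 * 1024       # bytes written per iteration (placeholder)
    DURATION = 30               # seconds per trial

    def writer(thread_id, results):
        """Write BLOCK_SIZE chunks, paced so the thread averages TARGET_KBPS."""
        path = os.path.join(TEST_DIR, "user_%d.bin" % thread_id)
        interval = BLOCK_SIZE / (TARGET_KBPS * 1024.0)   # seconds between blocks
        written = 0
        deadline = time.monotonic() + DURATION
        with open(path, "wb") as f:
            while time.monotonic() < deadline:
                start = time.monotonic()
                f.write(os.urandom(BLOCK_SIZE))
                f.flush()
                os.fsync(f.fileno())      # force each block to the disk itself
                written += BLOCK_SIZE
                time.sleep(max(0.0, interval - (time.monotonic() - start)))
        results[thread_id] = written / DURATION / 1024.0  # achieved KB/sec

    os.makedirs(TEST_DIR, exist_ok=True)
    for n in (50, 100, 250, 500, 1000):   # ramp up the number of concurrent writers
        results = {}
        threads = [threading.Thread(target=writer, args=(i, results)) for i in range(n)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        slowest = min(results.values())
        print("%d threads: slowest writer achieved %.1f KB/sec" % (n, slowest))
        if slowest < TARGET_KBPS * 0.9:   # 10% tolerance before declaring saturation
            print("the drive saturates somewhere below %d writers" % n)
            break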










hard-drive performance

asked Jan 30 at 17:38, edited Jan 30 at 17:57 by Simon Hayter

  • Why not throw in an SSD and bypass the problem? SSDs don't significantly suffer from differences in sequential vs random reads and are much faster all round. (Meaning the answer becomes much closer to speed/users.)

    – davidgo
    Jan 30 at 18:30
1 Answer
It depends entirely on whether you're doing sequential or random I/O, and how often you want / need to flush to disk...



Both 20 KB/s and 100 KB/s are negligible with today's hardware. From the CrystalDiskMark screenshot and your concern, I'd suspect you're dealing with a spinning disk... why not use an SSD?






max simultaneous writes before the drive throttles and becomes slower




It's not a matter of the drive throttling, but rather that the physical movement of the head takes time to complete. With random I/O this is exacerbated as the size of each written block shrinks, and the seek time between writes increases.
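
To put rough, illustrative numbers on it (these are assumptions, not measurements from your drive): a typical 7200 RPM disk sustains on the order of 100 random write IOPS. If each user's 20 KB/sec arrives as 4 KB writes, that is 5 writes per second per user, so roughly 100 / 5 = 20 concurrent users before seek time dominates. If the same 20 KB/sec is buffered and lands as one 1 MB write every ~50 seconds, each user costs only ~0.02 IOPS, and the same disk could in principle serve thousands.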




let's assume the OS, Filesystem or the Application is not a part of the bottleneck




Without knowing the state of the filesystem in terms of fragmentation and free space, you cannot assume this, and you certainly can't assume it over the life of a product or installation.





If you're suffering from performance issues, then you'll want to make use of buffered I/O, i.e.: writing to a file actually collects data into a buffer before writing a larger block to disk at once.
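
As a minimal sketch (Python assumed; the 1 MB buffer size and the incoming_chunks() source are placeholders of mine), opening the file with a large buffer means the many small application-level writes reach the disk as far fewer, larger writes:

    # Small writes accumulate in a 1 MB userspace buffer; the OS and disk
    # see far fewer, larger writes than the application issues.
    with open("upload.bin", "wb", buffering=1024 * 1024) as f:
        for chunk in incoming_chunks():   # hypothetical source of small chunks
            f.write(chunk)                # usually just a copy into the buffer
    # the buffer is flushed to the OS when full, and again on close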



Writing 100 KB/s for a period of 10 seconds can be presented to the storage as any of the following (or wider):




  • a block of 1 KB every 10ms

  • a block of 10 KB every 100ms

  • a block of 100 KB every 1 second

  • a block of 1,000 KB every 10 seconds


Are we discussing the regular (red), or infrequent (green)?
Each of the colors will "write" the same amount of data over the same timeframe.



[graph: write throughput at different block sizes]



Writing larger blocks at once will help with throughput and filesystem fragmentation, though there is a trade-off to consider.





  • Writing larger blocks, less frequently, will improve throughput, but requires more RAM; in the event of power loss or a crash, a larger portion of data will be lost.

  • Writing smaller blocks, more frequently, will degrade throughput, but requires less RAM, and less data is held in volatile memory.


The filesystem or OS may impose rules about how frequently the file cache is written to disk, so you may need to manage this caching within the application... Start with using buffered I/O, and if that doesn't cut it, review the situation.
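
If you do end up managing the cadence in the application, a sketch (again Python; the five-second interval is an arbitrary example) might look like this:

    import os
    import time

    FLUSH_INTERVAL = 5.0                      # seconds between forced commits
    last_flush = time.monotonic()
    with open("upload.bin", "wb", buffering=1024 * 1024) as f:
        for chunk in incoming_chunks():       # hypothetical chunk source, as above
            f.write(chunk)
            if time.monotonic() - last_flush >= FLUSH_INTERVAL:
                f.flush()                     # push the userspace buffer to the OS
                os.fsync(f.fileno())          # ask the OS to commit it to the disk
                last_flush = time.monotonic()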






let's pretend 1,000 users are uploading 1GB file at 20KB/sec




You're comfortable with users uploading a 1 GB file over ~14.5 hours? With all of the issues that failures incur (i.e. re-uploading from the beginning)?
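
(For reference: 1 GB = 1,048,576 KB, and 1,048,576 KB ÷ 20 KB/sec ≈ 52,429 seconds, or roughly 14.5 hours.)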






answered Jan 30 at 17:49 by Attie (edited Jan 30 at 18:45)


  • Sorry, random IO, new files per user.

    – Simon Hayter
    Jan 30 at 17:56











  • Basically, users will be uploading files; they vary in size, but around 20 KB/sec per user would be deemed acceptable. Obviously, a mechanical drive serving thousands of users at 20 KB/sec each would suffer delays from the disk head bouncing back and forth, and it's that part I want to measure. Ideally, I want to be able to estimate that this hard drive can handle, say, 250 users. I know it's not as simple as that, because there's also the response time between the user and the disk, but I want to factor that into the estimate as well.

    – Simon Hayter
    Jan 30 at 18:03













  • Isn't there a utility, either in PowerShell or a standalone application, that I can run which fires off 1,000 write threads, for example?

    – Simon Hayter
    Jan 30 at 18:05













  • Unless you're dealing with a stream of data that needs to be captured in real time at 20 KB/sec, is this really an issue? Even so, I would expect your application / OS to cache a chunk of the upload, and write a large block at once (like the green in the graph). It may take many seconds to fill the cache and trigger a write to disk, depending on the configuration.

    – Attie
    Jan 30 at 18:08











  • Yes, there probably is an application that can benchmark this, but I'm not convinced that is in any way useful information for your use case...

    – Attie
    Jan 30 at 18:09