Does partitioning a hard drive into a smaller partition than its actual size objectively make it perform better?
I used to work on an 80 GB HDD back in 2016, and it felt snappy most of the time. After purchasing a new laptop with a 1 TB HDD, much more RAM, and a faster CPU, the HDD actually felt much slower than my old one, even though it was a newer-generation drive (I'm not sure about its RPM).
So lately I've decided to partition my 1 TB HDD into an 80 GB primary partition and leave the rest of the space unallocated, since I don't really need anything beyond the OS and a main web browser.
After doing it, the drive actually felt much snappier than before. The file manager runs quickly and the overall performance of the HDD just feels much better. I want to know whether it's just a placebo or a legitimate technique that was in use before the SSD era.
I read something about "disk short stroking", but I'm not sure if that's the same as what I did. Maybe my BIOS allocates partition space on the outer edge first? I'd like to hear your explanations.
hard-drive partitioning
That's an interesting question, but the RPM could be a big deal. If an HDD bought in 2016 had only 80 GB of storage space, it was most likely either a hybrid HDD/SSD or some kind of high-RPM drive (10,000 RPM at least, probably more); either of those would have a major impact on performance.
– Cestarian
Nov 12 '18 at 18:43
@Cestarian That's a good point. I am currently using a relatively low-end desktop from 2015 and it came with an 80 GB SSD and a 1 TB HDD. The last time I bought an HDD as small as 80 GB was around 2003. So the mentioned 80 GB HDD may very well not have been an HDD but rather an SSD.
– kasperd
Nov 12 '18 at 23:34
Possible duplicate of Is there a performance advantage in having my HDD split into partitions?
– bertieb
Nov 13 '18 at 12:53
asked Nov 12 '18 at 17:51
莫愁姓
2 Answers
Yes, what you're doing is called "short-stroking".
It improves seek performance by limiting the drive's head movement. Hard drive performance is primarily limited by three factors: Seek time (the time it takes to move the heads in or out to the desired cylinder), rotational latency, and of course the actual data transfer rate.
principles
Most modern 3.5 inch hard drives have average seek times in the 9 to 10 msec range. Once a "seek" is done, the drive has to wait for the start of the desired sector to come under the heads. The average rotational latency is simply half of the time it takes for the drive to turn one full revolution. A 7200 rpm drive turns at 120 revs per second, so a rev takes 1/120 sec, and half a rev - the average rotational latency - is 1/240 sec, or 4.2 msec. (Note that this is the same for every 7200 rpm hard drive.) So we have an average of about 13 msec before we can start transferring data.
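The arithmetic above is easy to sanity-check. Here's a small Python sketch of it (the 9 ms seek figure is just the ballpark average quoted above, not any particular drive's spec):

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency = half of one revolution, in milliseconds."""
    revs_per_sec = rpm / 60.0           # 7200 rpm -> 120 revs/sec
    ms_per_rev = 1000.0 / revs_per_sec  # one full revolution
    return ms_per_rev / 2.0             # on average we wait half a turn

# Matches the figures above: a 7200 rpm drive waits ~4.2 ms on average,
# so with a ~9 ms average seek the total is ~13 ms before data moves.
print(round(avg_rotational_latency_ms(7200), 1))      # 4.2
print(round(9 + avg_rotational_latency_ms(7200), 1))  # 13.2
```

Running it for 5400 rpm (common in laptop drives) gives about 5.6 ms, which is one reason laptop HDDs feel slower.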
The data transfer rate is whatever the drive spec says. With modern drives this is almost always somewhat lower than what the physical interface, e.g. SATA 3, supports. Note that the data transfer portion of an I/O operation is generally the smallest-duration part, and with modern interfaces can almost be ignored. Even on an old ATA33 drive, transferring 4KiB only took 1.2 msec.
The seek time specification is an average of possible seek times for various head-movement distances. You can see how a seek from one cylinder to the adjacent cylinder would be much shorter than from the innermost to the outermost. (A "cylinder" is the collection of all of the tracks that are accessible from a single head position.) Both of those are atypical situations. The assumption in HD performance is that data being accessed will be fairly randomly distributed across the drive, so the usual quoted seek time of around 9 or 10 msec is an average of a number of different seek distances. On the most detailed spec sheets, some manufacturers list both the cylinder-to-cylinder (often labeled "track to track"), i.e. adjacent, seek time and the maximum (end to end) in addition to the average.
When you see drive benchmarks done with large "sequential" transfers you are seeing tests done with data access patterns that minimize both seek time and rotational latency, and maximize the effectiveness of the drive's onboard cache: i.e. reading a single large file sequentially - from start to finish - e.g. 64 KiB at a time, with the file occupying one contiguous range of blocks.
so how does short-stroking work?
By creating - and only using - a partition much smaller than the drive, you are keeping all of your data in a narrow span of possible cylinders (head positions). This makes the maximum possible seek time smaller, so the average is smaller. It doesn't help the rotational latency or transfer rate.
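To build intuition for why a narrower cylinder span shrinks the average seek, here's a toy simulation. The square-root seek-time model and its constants are illustrative assumptions only (tuned so the whole-disk random average lands near the ~9 ms quoted above), not specs for any real drive:

```python
import random

def seek_time_ms(distance_frac: float) -> float:
    # Toy model: a fixed settle time plus a term growing with the square
    # root of the seek distance (the arm accelerates, then decelerates).
    # Constants are made up: full stroke ~16 ms, whole-disk average ~9 ms.
    return 1.0 + 15.0 * distance_frac ** 0.5

def avg_seek_ms(span: float, trials: int = 100_000) -> float:
    # Average seek time for random accesses confined to the outermost
    # `span` fraction of the drive's cylinders.
    rng = random.Random(42)
    pos = rng.uniform(0.0, span)
    total = 0.0
    for _ in range(trials):
        target = rng.uniform(0.0, span)
        total += seek_time_ms(abs(target - pos))
        pos = target
    return total / trials

print(f"whole drive:        {avg_seek_ms(1.0):.1f} ms")   # ~9 ms
print(f"80 GB of 1 TB (8%): {avg_seek_ms(0.08):.1f} ms")  # ~3 ms
```

Even this crude model shows the average seek dropping to roughly a third when accesses are confined to 8% of the cylinders, which is in line with what short-stroking benchmarks report.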
Another way it helps is by keeping your usage of the drive to the largest-capacity cylinders. Modern HDs use "zone bit recording", meaning there are more sectors per track on the outer tracks than on the inner. So if the data's on outer cylinders, you can access more data without moving the heads as much.
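The effect of zone bit recording on sequential throughput is simple to estimate: at a fixed rpm, a track's data rate is just its byte count times revolutions per second. The sector counts below are plausible made-up figures, not any particular drive's zone map:

```python
def track_rate_mb_s(sectors_per_track: int, rpm: float,
                    sector_bytes: int = 512) -> float:
    # Sequential rate while staying on one track:
    # bytes per track * revolutions per second.
    return sectors_per_track * sector_bytes * (rpm / 60.0) / 1e6

# Illustrative zone map for a 7200 rpm drive:
outer = track_rate_mb_s(2500, 7200)  # outermost zone, more sectors/track
inner = track_rate_mb_s(1200, 7200)  # innermost zone, fewer sectors/track
print(f"outer: {outer:.0f} MB/s, inner: {inner:.0f} MB/s")
```

With those assumed sector counts the outer zone moves roughly twice the data per revolution of the inner zone, which matches the familiar downward-sloping sequential-read curves in drive benchmarks.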
does it really work?
A lot of different tech enthusiast sites have tested this. For example, see this article at Tom's Hardware. The results are impressive: Nearly doubling the I/O rate per second.
But this was done by buying a large hard drive and only using a small fraction of the drive's capacity. This radically increases your cost per GB.
However, there is a workaround. You don't have to never use the remainder of the drive to get the speed benefit. You just have to keep it out of everyday use when your system is hitting your main partition a lot. Most of us have a few files we access a lot (the OS, apps, and some data that the apps work on) and a much larger amount of data that we access not so much. For example, you could use the remainder of the drive for some sort of archival storage, or for multimedia files like music and video. Media playback is generally infrequent, sequential access to a single file and you're usually not doing much else with the machine at the time. So using the drive this way won't make media playback any worse than if everything was all spread across one big partition, and work that doesn't involve the media data should get the benefit of short-stroking.
but is it a good idea?
On the other hand... The tests performed by TH were synthetic benchmarks, and to get those results they threw away very high percentages of the disk capacity. Modern operating systems do quite a bit of work to try to optimize HD performance. One example is Windows' "file placement optimization", which is described in the comments to this answer. And "short-stroking" will make this less effective. Just because someone got impressive results in a synthetic benchmark doesn't mean "short-stroking" is necessarily a good thing to do.
Think about it: A 1 TB hard drive these days costs about $50. But you're only using 80 GB of it. You say you only need the OS and a browser... well, for $63 you can get a Samsung 128 GB SSD, giving you half again the space of your 80 GB and FAR better performance no matter how far you "short-stroke" the HD. Or for $50 you can get a SanDisk SSD with 240 GB capacity. That seems like a better deal than not-using almost all of a $50 one-terabyte hard drive.
btw
btw: Your "BIOS" (or UEFI for that matter) does not create partitions and has nothing to do with where the partitions are. It's up to the operating system's partitioning utility. Every OS I've ever heard of uses the outer cylinders first. For example, in Windows' Disk Management utility, the graphical display of drive partitions within each disk shows the layout with the outermost cylinders on the left. The AOMEI disk partitioning utility does the same.
ASIDE - TRUE STORY: Back in the day when 5.25-inch form factor hard drives were sized in the tens and hundreds of MB, a company called CDC had a line of drives called the "Wren" series. (This name was no doubt a slap at the much-physically-larger Fujitsu "Eagle" drives of a slightly earlier era.) For a while they also had a slightly higher-performance model, the "WrenRunner". About 90% the capacity, 20% more cost, and a millisecond or so shaved off of the average access time. After some experiments it was clear that the "WrenRunner" was just a "Wren" with the first and last few tracks locked out in the drive's firmware. i.e. you could get the same performance and capacity out of the cheaper Wren by "short stroking", though we didn't use that term then. A friend of mine was a distributor and made good karma with his customers by telling them "spend less money - just buy the Wren and don't use all of it!"
While your explanation is reasonable, the "yes" seems bogus. If you only use 5% or 10% of the available storage, any decent OS's filesystem driver is going to allocate storage mostly or entirely from the first 5 or 10 % of the disk. If it allocates past that, it's going to be to avoid using badly-fragmented free regions, and thereby perform better. Partitioning a drive "for performance" rather than for the purpose of keeping things separate is just a ridiculous idea.
– R..
Nov 12 '18 at 23:48
@R.. Or it could be to avoid repeatedly writing at the start of the disk, making the drive last longer, but I guess that's only if you replace data very often
– somebody
Nov 13 '18 at 7:06
@R.. Well... if you've ever looked at the usage distribution on a large HD after it's been in use for a few months or a year, you might have a different idea.
– Jamie Hanrahan
Nov 13 '18 at 7:27
On the other hand, there is Windows' File Placement Optimization. It deliberately moves parts of exe's and dll's that are accessed close together in time during boot, to be close together on the disk. And since this tries to move this stuff into a single contiguous space, it should be more effective where such space can be found - i.e. on a drive with lots of free space. (Naturally, WIndows doesn't bother doing this on an SSD.) So I would conclude that the answer "yes, it improves HD performance" is still valid for benchmarks - but that doesn't necessarily mean it's a good thing to do.
– Jamie Hanrahan
Nov 13 '18 at 7:29
Here's a writeup on Windows File Placement Optimization. Of course Windows Internals by Solomon, Russinovich, et al is the "horse's mouth" reference. autoitconsulting.com/site/performance/…
– Jamie Hanrahan
Nov 13 '18 at 7:30
There are a lot of factors, and I'm not sure there's a strictly canonical answer. However, a smaller partition near the outside of spinning disks might exhibit faster seek and sequential transfers, provided that your data isn't highly fragmented.
On spinning disks, the outer cylinders hold more sectors per track and pass under the heads at a higher linear speed than the inner ones (the platter's angular speed is constant, so the outer edge covers more distance per revolution). Many modern filesystems try to place the sectors of files contiguously to reduce fragmentation, which often means that large partitions use more and more of the inner cylinders over time.
It's possible that the smaller partition forces the file system to place more of the data on the outer cylinders, and that even when lightly fragmented the data moves under the read heads faster.
You can test your drive for random access and sequential performance with different partition sizes using Linux tools like hdparm, although you might want to use a more advanced tool that takes fragmentation into account if you want more than a pragmatic answer.
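If you'd rather script the comparison than use hdparm, here's a rough Python sketch. The device path is an example; reads here go through the page cache, so a rigorous test would drop caches first or use O_DIRECT with aligned buffers. This is a pragmatic approximation, not a proper benchmark:

```python
import os
import random
import time

def avg_random_read_ms(dev: str, span_bytes: int, reads: int = 200,
                       block: int = 4096) -> float:
    # Time random block-aligned reads confined to the first `span_bytes`
    # of the device (or file), and return the average latency in ms.
    fd = os.open(dev, os.O_RDONLY)
    try:
        rng = random.Random()
        start = time.perf_counter()
        for _ in range(reads):
            # Align each offset to the block size before reading.
            off = rng.randrange(0, span_bytes // block) * block
            os.pread(fd, block, off)
        return (time.perf_counter() - start) * 1000 / reads
    finally:
        os.close(fd)

# e.g. (as root) compare the first 80 GB against the whole 1 TB drive:
# print(avg_random_read_ms("/dev/sda", 80 * 10**9))
# print(avg_random_read_ms("/dev/sda", 10**12))
```

On a spinning disk, a clearly lower figure for the narrow span is the short-stroking effect; on an SSD the two numbers should be essentially identical.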
Depending on what you're using your drives for, any benefit of wasting disk space is likely to be offset by the waste itself. If random access or performance under fragmentation is important to you, switching to a solid-state drive (SSD) probably makes more sense in the long run.
While your explanation is reasonable, the "yes" seems bogus. If you only use 5% or 10% of the available storage, any decent OS's filesystem driver is going to allocate storage mostly or entirely from the first 5 or 10 % of the disk. If it allocates past that, it's going to be to avoid using badly-fragmented free regions, and thereby perform better. Partitioning a drive "for performance" rather than for the purpose of keeping things separate is just a ridiculous idea.
– R..
Nov 12 '18 at 23:48
@R.. or it could be to avoid repeatedly writing at the start of the disk, making the drive last longer, but i guess that's only if you replace data very often
– somebody
Nov 13 '18 at 7:06
1
@R.. Well... if you've ever looked at the usage distribution on a large HD after it's been in use for a few months or a year, you might have a different idea.
– Jamie Hanrahan
Nov 13 '18 at 7:27
2
On the other hand, there is Windows' File Placement Optimization. It deliberately moves parts of exe's and dll's that are accessed close together in time during boot, to be close together on the disk. And since this tries to move this stuff into a single contiguous space, it should be more effective where such space can be found - i.e. on a drive with lots of free space. (Naturally, WIndows doesn't bother doing this on an SSD.) So I would conclude that the answer "yes, it improves HD performance" is still valid for benchmarks - but that doesn't necessarily mean it's a good thing to do.
– Jamie Hanrahan
Nov 13 '18 at 7:29
2
Here's a writeup on Windows File Placement Optimization. Of course Windows Internals by Solomon, Russinovich, et al is the "horse's mouth" reference. autoitconsulting.com/site/performance/…
– Jamie Hanrahan
Nov 13 '18 at 7:30
add a comment |
Yes, what you're doing is called "short-stroking".
It improves seek performance by limiting the drive's head movement. Hard drive performance is primarily limited by three factors: Seek time (the time it takes to move the heads in or out to the desired cylinder), rotational latency, and of course the actual data transfer rate.
principles
Most modern 3.5-inch hard drives have average seek times in the 9 to 10 msec range. Once a seek is done, the drive has to wait for the start of the desired sector to come under the heads. The average rotational latency is simply half the time it takes the drive to turn one full revolution. A 7200 rpm drive turns at 120 revolutions per second, so one revolution takes 1/120 sec, and half a revolution - the average rotational latency - takes 1/240 sec, or about 4.2 msec. (Note that this is the same for every 7200 rpm hard drive.) So we have an average of about 13 msec before we can start transferring data.
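The arithmetic above is easy to check with a few lines of Python (the 9.5 ms seek figure is an illustrative value picked from the quoted 9-10 ms range, not a spec for any particular drive):

```python
# Back-of-the-envelope average access time for a 7200 rpm hard drive.
rpm = 7200
revs_per_sec = rpm / 60                            # 120 revolutions per second
rotational_latency_ms = 1000 / revs_per_sec / 2    # half a revolution, in ms
avg_seek_ms = 9.5                                  # illustrative average seek
access_ms = avg_seek_ms + rotational_latency_ms

print(f"avg rotational latency: {rotational_latency_ms:.1f} ms")  # ~4.2 ms
print(f"avg access time:        {access_ms:.1f} ms")              # ~13.7 ms
```

Note the rotational latency depends only on rpm, which is why short-stroking cannot improve it.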
The data transfer rate is whatever the drive spec says. With modern drives this is almost always somewhat lower than what the physical interface (e.g. SATA 3) supports. Note that the data-transfer portion of an I/O operation is generally the shortest part, and with modern interfaces it can almost be ignored; even on an old ATA33 drive, transferring 4 KiB took only 1.2 msec.
The seek time specification is an average of possible seek times for various head-movement distances. A seek from one cylinder to the adjacent cylinder is much shorter than one from the innermost cylinder to the outermost. (A "cylinder" is the collection of all the tracks accessible from a single head position.) Both of those are atypical cases. HD performance specs assume that the data being accessed is fairly randomly distributed across the drive, so the usual quoted seek time of around 9 or 10 msec is an average over a range of seek distances. On the most detailed spec sheets, some manufacturers list the cylinder-to-cylinder (often labeled "track to track", i.e. adjacent) seek time and the maximum (end-to-end) in addition to the average.
When you see drive benchmarks done with large "sequential" transfers, you are seeing tests with data access patterns that minimize both seek time and rotational latency and maximize the effectiveness of the drive's onboard cache: reading a single large file sequentially, from start to finish, e.g. 64 KiB at a time, with the file occupying one contiguous range of blocks.
so how does short-stroking work?
By creating - and only using - a partition much smaller than the drive, you are keeping all of your data in a narrow span of possible cylinders (head positions). This makes the maximum possible seek time smaller, so the average is smaller. It doesn't help the rotational latency or transfer rate.
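A toy Monte Carlo model illustrates the effect. It treats a seek as the distance between two uniformly random head positions, and assumes seek time grows with distance - a deliberate simplification, since real seeks include a fixed settle time, so the actual time saved is smaller than the distance ratio suggests:

```python
import random

random.seed(0)
N = 100_000

def mean_seek_span(span):
    # Average absolute distance between two uniformly random head
    # positions confined to the first `span` fraction of the stroke.
    return sum(abs(random.uniform(0, span) - random.uniform(0, span))
               for _ in range(N)) / N

full  = mean_seek_span(1.0)    # whole 1 TB drive in use
short = mean_seek_span(0.08)   # only an 80 GB partition (8% of cylinders)

print(f"full-drive mean seek distance:   {full:.3f} of full stroke")
print(f"short-stroke mean seek distance: {short:.4f} of full stroke")
```

Analytically the mean distance between two uniform points on a span of length L is L/3, so confining data to 8% of the cylinders cuts the average seek *distance* by a factor of 12.5 - but, again, not the average seek *time*, because of settle time and rotational latency.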
Another way it helps is by keeping your usage of the drive on the largest-capacity cylinders. Modern HDs use "zone bit recording", meaning there are more sectors per track on the outer tracks than on the inner ones. So if the data is on outer cylinders, you can access more data without moving the heads as much.
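As a rough sketch of why this matters: the platter turns at constant rpm, so sustained transfer rate scales with sectors per track, which in turn scales roughly with track radius. The 2:1 radius ratio below is an assumed, illustrative figure, not a measurement of any real drive:

```python
# Rough zone-bit-recording model: sectors per track ~ track radius,
# and at constant rpm, sustained transfer rate ~ sectors per track.
# Radii are illustrative assumptions, not measured values.
inner_radius_mm = 20.0
outer_radius_mm = 40.0
ratio = outer_radius_mm / inner_radius_mm

print(f"outer tracks transfer roughly {ratio:.1f}x faster than inner tracks")
```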
does it really work?
A lot of tech enthusiast sites have tested this. For example, see this article at Tom's Hardware. The results are impressive: nearly double the I/O operations per second.
But this was done by buying a large hard drive and only using a small fraction of the drive's capacity. This radically increases your cost per GB.
However, there is a workaround. You don't have to leave the remainder of the drive unused forever to get the speed benefit; you just have to keep it out of everyday use when your system is hitting your main partition a lot. Most of us have a few files we access often (the OS, apps, and some data the apps work on) and a much larger amount of data we access rarely. For example, you could use the remainder of the drive for archival storage, or for multimedia files like music and video. Media playback is generally infrequent, sequential access to a single file, and you're usually not doing much else with the machine at the time. Using the drive this way won't make media playback any worse than if everything were spread across one big partition, and work that doesn't involve the media data still gets the benefit of short-stroking.
but is it a good idea?
On the other hand... The tests performed by TH were synthetic benchmarks, and to get those results they threw away very high percentages of the disk capacity. Modern operating systems do quite a bit of work to try to optimize HD performance. One example is Windows' "file placement optimization", which is described in the comments to this answer. And "short-stroking" will make this less effective. Just because someone got impressive results in a synthetic benchmark doesn't mean "short-stroking" is necessarily a good thing to do.
Think about it: A 1 TB hard drive these days costs about $50. But you're only using 80 GB of it. You say you only need the OS and a browser... well, for $63 you can get a Samsung 128 GB SSD, giving you half again the space of your 80 GB and FAR better performance no matter how far you "short-stroke" the HD. Or for $50 you can get a SanDisk SSD with 240 GB capacity. That seems like a better deal than not-using almost all of a $50 one-terabyte hard drive.
btw
Your "BIOS" (or UEFI, for that matter) does not create partitions and has nothing to do with where the partitions are. That's up to the operating system's partitioning utility. Every OS I've ever heard of uses the outer cylinders first. For example, in Windows' Disk Management utility, the graphical display of drive partitions within each disk shows the layout with the outermost cylinders on the left. The AOMEI disk partitioning utility does the same.
ASIDE - TRUE STORY: Back in the day when 5.25-inch form factor hard drives were sized in the tens and hundreds of MB, a company called CDC had a line of drives called the "Wren" series. (This name was no doubt a slap at the much-physically-larger Fujitsu "Eagle" drives of a slightly earlier era.) For a while they also had a slightly higher-performance model, the "WrenRunner". About 90% the capacity, 20% more cost, and a millisecond or so shaved off of the average access time. After some experiments it was clear that the "WrenRunner" was just a "Wren" with the first and last few tracks locked out in the drive's firmware. i.e. you could get the same performance and capacity out of the cheaper Wren by "short stroking", though we didn't use that term then. A friend of mine was a distributor and made good karma with his customers by telling them "spend less money - just buy the Wren and don't use all of it!"
edited Nov 15 '18 at 17:50
answered Nov 12 '18 at 18:09
Jamie Hanrahan
While your explanation is reasonable, the "yes" seems bogus. If you only use 5% or 10% of the available storage, any decent OS's filesystem driver is going to allocate storage mostly or entirely from the first 5 or 10 % of the disk. If it allocates past that, it's going to be to avoid using badly-fragmented free regions, and thereby perform better. Partitioning a drive "for performance" rather than for the purpose of keeping things separate is just a ridiculous idea.
– R..
Nov 12 '18 at 23:48
@R.. or it could be to avoid repeatedly writing at the start of the disk, making the drive last longer, but i guess that's only if you replace data very often
– somebody
Nov 13 '18 at 7:06
@R.. Well... if you've ever looked at the usage distribution on a large HD after it's been in use for a few months or a year, you might have a different idea.
– Jamie Hanrahan
Nov 13 '18 at 7:27
On the other hand, there is Windows' File Placement Optimization. It deliberately moves parts of exe's and dll's that are accessed close together in time during boot, to be close together on the disk. And since this tries to move this stuff into a single contiguous space, it should be more effective where such space can be found - i.e. on a drive with lots of free space. (Naturally, Windows doesn't bother doing this on an SSD.) So I would conclude that the answer "yes, it improves HD performance" is still valid for benchmarks - but that doesn't necessarily mean it's a good thing to do.
– Jamie Hanrahan
Nov 13 '18 at 7:29
Here's a writeup on Windows File Placement Optimization. Of course Windows Internals by Solomon, Russinovich, et al is the "horse's mouth" reference. autoitconsulting.com/site/performance/…
– Jamie Hanrahan
Nov 13 '18 at 7:30
There are a lot of factors, and I'm not sure there's a strictly canonical answer. However, a smaller partition near the outside of spinning disks might exhibit faster seek and sequential transfers, provided that your data isn't highly fragmented.
On spinning disks, the outer cylinders hold more sectors per track, and because the platter turns at a constant angular speed, data on the outer cylinders passes under the heads at a higher linear speed than data on the inner cylinders. Many modern filesystems try to place the sectors of files contiguously to reduce fragmentation, which often means that large partitions use more and more of the inner cylinders over time.
It's possible that the smaller partition forces the file system to place more of the data on the outer cylinders, and that even when lightly fragmented the data moves under the read heads faster.
You can test your drive's random-access and sequential performance with different partition sizes using tools like hdparm on Linux, although you might want a more advanced tool that takes fragmentation into account if you want more than a rough answer.
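For a rough comparison you could also time random reads directly. The sketch below is a minimal, assumption-laden probe (POSIX-only, since it uses `os.pread`), not a rigorous benchmark: the OS page cache will flatter the numbers unless you drop caches first (e.g. `echo 3 | sudo tee /proc/sys/vm/drop_caches` on Linux) or open the device with O_DIRECT. The path in the example call is a hypothetical placeholder:

```python
import os
import random
import time

def random_read_probe(path, block=4096, samples=200):
    """Average latency (seconds) of random block-sized reads across a file."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # file/device size in bytes
        t0 = time.perf_counter()
        for _ in range(samples):
            off = random.randrange(0, max(1, size - block))
            os.pread(fd, block, off)          # positioned read, no seek state
        return (time.perf_counter() - t0) / samples
    finally:
        os.close(fd)

# Example (hypothetical path - point it at a large file on each partition):
# print(f"{random_read_probe('/path/to/large/file') * 1000:.2f} ms per read")
```

Running it against a large file on the small outer partition versus one near the end of a full-size partition should show the seek-span difference, cache effects permitting.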
Depending on what you're using your drives for, any benefit of wasting disk space is likely to be offset by the waste itself. If random access or performance under fragmentation is important to you, switching to a solid-state drive (SSD) probably makes more sense in the long run.
answered Nov 13 '18 at 5:19
CodeGnome
That's an interesting question, but the RPM could be a big deal. If an HDD bought in 2016 had only 80 GB of storage, it was most likely either a hybrid HDD/SSD or some kind of high-RPM drive (10,000 RPM at least, probably more); either of those would have a major impact on performance.
– Cestarian
Nov 12 '18 at 18:43
@Cestarian That's a good point. I am currently using a relatively low end desktop from 2015 and it came with 80 GB SSD and 1TB HDD. The last time I bought a HDD as small as 80 GB was around 2003. So the mentioned 80 GB HDD may very well not have been an HDD but rather an SSD.
– kasperd
Nov 12 '18 at 23:34
Possible duplicate of Is there a performance advantage in having my HDD split into partitions?
– bertieb
Nov 13 '18 at 12:53