How hard disk size affects performance. Hard disk spindle speed

The speed of data transfer over the disk interface bus is far from the only parameter that affects the speed of the hard drive as a whole. In fact, the performance of hard drives with the same interface type sometimes varies greatly. What is the reason?

The fact is that a hard disk combines a wide variety of electronic and electromechanical components. The speed of the mechanical parts of a hard drive is far lower than the speed of the electronics, which include the interface bus. Overall disk performance is, unfortunately, limited by the slowest components. The bottleneck when transferring data between the drive and the computer is the internal transfer rate - a parameter determined by the speed of the hard drive's mechanics. Therefore, even in the fastest PIO 4 and UltraDMA transfer modes, the maximum possible interface bandwidth is almost never reached during real work with the drive. To judge the speed of the mechanical components, and of the drive as a whole, you need to know the following parameters.

Disk rotation speed - the number of revolutions made by the platters (the individual disks) of the hard drive per minute. The higher the speed, the faster data is written and read. The typical value for most modern EIDE disks is 5400 rpm. Some newer drives have platters that spin at 7200 rpm. The technical limit reached to date - 10,000 rpm - is implemented in the Seagate Cheetah series of SCSI drives.

Average seek time is the average time required to move the head assembly from an arbitrary position to a specified track for reading or writing data. The typical value for new hard drives is 10 to 18 ms, and a seek time of 11-13 ms can be considered good. The fastest SCSI models have seek times under 10 ms.

Average access time is the average interval from the moment a disk command is issued to the start of data exchange. This is a composite parameter that includes the average seek time plus half the rotation period of the disk (since the required data may lie in any sector of the target track). It determines the delay before reading of the required data block begins, and hence the overall performance when working with a large number of small files.

Internal transfer rate - the rate at which data is exchanged between the disk interface and the media (platters). The values of this parameter differ significantly for reading and writing; they are determined by the rotational speed of the platters, the recording density, the characteristics of the positioning mechanism and other properties of the drive. It is this speed that has a decisive influence on the drive's performance in the steady state (when reading a large contiguous block of data). The overall transfer rate exceeds the internal one only when data is exchanged between the interface and the drive's cache memory without immediately accessing the platters. Therefore one more parameter affects the speed of the drive, namely ...

... the amount of cache memory. Cache memory is ordinary RAM installed on the hard drive. After being read from the platters, data is transferred both to the computer's memory and into the cache. If this data is needed again, it is read not from the platters but from the cache buffer, which speeds up the exchange significantly. To improve cache efficiency, special algorithms identify the most frequently used data and keep it in the cache, increasing the likelihood that the next request will be served from this RAM - a so-called "cache hit". Naturally, the larger the cache memory, the faster the disk usually performs.
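
As a rough illustration of why cache hits matter, here is a toy sketch of my own (this is not how real drive firmware works, and the latencies are made-up round numbers): a small LRU buffer turns most repeated reads into fast hits.

from collections import OrderedDict

PLATTER_MS, CACHE_MS = 12.0, 0.1   # made-up latencies: platter access vs. cache hit
CACHE_SLOTS = 64                    # toy cache capacity, in blocks

cache = OrderedDict()               # block number -> True, kept in LRU order

def read_block(block):
    """Return the simulated latency of reading one block."""
    if block in cache:
        cache.move_to_end(block)    # refresh LRU position
        return CACHE_MS             # cache hit: served from the drive's RAM
    if len(cache) >= CACHE_SLOTS:
        cache.popitem(last=False)   # evict the least recently used block
    cache[block] = True
    return PLATTER_MS               # cache miss: heads and platters involved

# Re-reading a small working set is dominated by cheap cache hits.
workload = list(range(32)) * 10
total = sum(read_block(b) for b in workload)
print(f"average latency {total / len(workload):.2f} ms over {len(workload)} reads")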

When evaluating the performance of hard drives, the most important characteristic is the data transfer rate. At the same time, a number of factors affect speed and overall performance:

  • Connection interface - SATA / IDE / SCSI (and for external drives - USB / FireWire / eSATA). Each interface has a different maximum transfer rate.
  • The size of the hard disk's cache (buffer). A larger buffer can raise the effective data transfer rate.
  • Support for NCQ, TCQ and other algorithms that improve performance.
  • Disk capacity. The more data stored, the longer it takes to locate and read particular information.
  • Recording density on the platters.
  • Even the file system affects the data exchange rate.

But if we take two hard disks of the same capacity and the same interface, the key performance factor will be the spindle rotation speed.

What is a spindle

The spindle is the single axle in the hard disk on which the magnetic platters are mounted. The platters are fixed to the spindle at strictly defined distances from each other. The spacing must be such that, as the platters spin, the read/write heads can access their surfaces without ever touching them.

For the disk to function properly, the spindle motor must keep the magnetic platters spinning steadily for thousands of hours. It is therefore not surprising that disk problems are sometimes related specifically to this mechanism, and not to errors in the file system at all.

The motor is responsible for the rotation of the platters, and this allows the hard drive to work.

What is spindle speed

The spindle speed determines how fast the platters rotate during normal hard drive operation. It is measured in revolutions per minute (RPM).

Rotation speed affects how quickly the computer can retrieve data from the hard drive. Before the hard drive can read the data, it must first find it.

The time it takes to move the heads to the requested track/cylinder is called seek time. Once the heads have reached the desired track/cylinder, the drive must wait for the platters to turn until the required sector passes under the head. This delay is called rotational latency and is a direct function of the spindle speed: the faster the spindle spins, the shorter the rotational delay.

Together, the seek time and the rotational delay determine the speed of data access. In many HDD benchmarking programs this parameter is reported as the data access time.

What the spindle speed of the hard drive affects

Most standard 3.5″ hard drives today have a spindle speed of 7200 rpm. For such disks, the time in which half a revolution is completed (the average rotational latency) is 4.2 ms. The average seek time of these drives is about 8.5 ms, which gives a data access time of about 12.7 ms.

WD Raptor hard drives spin their magnetic platters at 10,000 rpm, which reduces the average rotational latency to 3 ms. The "Raptors" also have smaller-diameter platters, which made it possible to reduce the average seek time to ~5.5 ms. The resulting average data access time is approximately 8.5 ms.

There are several SCSI models (such as the Seagate Cheetah) with spindle speeds of up to 15,000 RPM, and their platters are even smaller than the WD Raptor's. Their average rotational latency is 2 ms (60 sec / 15,000 RPM / 2), the average seek time is 3.8 ms, and the average data access time is 5.8 ms.
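
To make the arithmetic above explicit, here is a minimal sketch (plain Python; the RPM and seek figures are simply the ones quoted in the preceding paragraphs) that reproduces these access-time estimates:

# Average access time = average seek time + average rotational latency,
# where rotational latency is the time of half a revolution: 60 s / RPM / 2.
drives = {
    "7200 RPM desktop":     {"rpm": 7200,  "seek_ms": 8.5},
    "10,000 RPM WD Raptor": {"rpm": 10000, "seek_ms": 5.5},
    "15,000 RPM SCSI":      {"rpm": 15000, "seek_ms": 3.8},
}

for name, d in drives.items():
    rotational_ms = 60_000 / d["rpm"] / 2      # half a revolution, in milliseconds
    access_ms = d["seek_ms"] + rotational_ms   # seek time + rotational latency
    print(f"{name}: rotational latency {rotational_ms:.1f} ms, access time {access_ms:.1f} ms")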

Disks with a high spindle speed have low values for both seek time and rotational delay (even under random access). Clearly, 5400 and 7200 rpm drives deliver lower performance.

At the same time, with sequential access to data in large blocks the difference is insignificant, since there are almost no access delays. This, incidentally, is one reason why regular defragmentation of hard drives is recommended.

How to find out the spindle speed of a hard drive

On some models the spindle speed is printed right on the label. Finding this information is not difficult, since there are few options - 5400, 7200 or 10,000 RPM.
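
If the label is missing or unreadable, the rotation rate can usually be queried in software. A minimal sketch, assuming the smartmontools package is installed and the drive is /dev/sda (both are assumptions - adjust for your system):

import subprocess

# "smartctl -i" prints the drive's identity data; hard drives normally report
# a line such as "Rotation Rate: 7200 rpm" (SSDs report "Solid State Device").
info = subprocess.run(["smartctl", "-i", "/dev/sda"],
                      capture_output=True, text=True).stdout
for line in info.splitlines():
    if "Rotation Rate" in line:
        print(line.strip())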

A hard disk is a relatively slow device, but fast enough for everyday needs. Due to certain factors, however, its speed can drop much lower, so that launching programs and reading and writing files slow down and working becomes uncomfortable in general. By performing a number of actions to increase the speed of the hard drive, you can achieve a noticeable performance gain in the operating system. Let's look at how to speed up the hard drive in Windows 10 or other versions of this operating system.

Several factors affect the speed of a hard drive, ranging from how full it is to the BIOS settings. Some hard drives simply have a low operating speed, which depends on the spindle speed (revolutions per minute). Old or cheap PCs usually have an HDD spinning at 5400 rpm, while more modern and expensive ones have 7200 rpm drives.

Objectively, these are very weak figures compared with other components and the capabilities of modern operating systems. The HDD is a very old format, and it is slowly being replaced. We have already compared the two and discussed how long SSDs last:

When one or several of these factors affect the hard disk, it starts working even slower, which becomes quite noticeable to the user. To increase the speed, you can use both the simplest methods, related to organizing files, and a change of the disk's operating mode by selecting a different interface.

Method 1: Cleaning the hard drive of unnecessary files and junk

This seemingly simple action can speed up the disk. The reason it is important to keep the HDD clean is simple: an overfilled disk indirectly affects its speed.

There can be much more junk on your computer than you think: old Windows restore points, temporary data from browsers, programs and the operating system itself, unneeded installers, duplicate copies of the same files, and so on.

Cleaning all of this up by hand is time-consuming, so you can use one of the various programs that maintain the operating system. You can get acquainted with them in our other article:

If you don't want to install additional software, you can use the built-in Windows tool called Disk Cleanup. It is, of course, not as thorough, but it can be useful too. In this case you will need to clear the browsers' temporary files yourself, of which there are also a lot.

You can also create an additional partition and move files you do not really need onto it. The main partition will then be less full and will start working faster.
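
As an illustration of where such junk tends to accumulate, here is a minimal, read-only sketch that lists the largest files in the user's temporary directory; it only reports, it deletes nothing:

import os
import tempfile

# Walk the temp directory and collect (size, path) pairs.
tmp = tempfile.gettempdir()
sizes = []
for root, _dirs, files in os.walk(tmp):
    for name in files:
        path = os.path.join(root, name)
        try:
            sizes.append((os.path.getsize(path), path))
        except OSError:
            pass  # file vanished or is locked - skip it

# Print the ten largest temporary files.
for size, path in sorted(sizes, reverse=True)[:10]:
    print(f"{size / (1024 * 1024):8.1f} MB  {path}")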

Method 2: Judicious use of the file defragmenter

One of the favorite tips for speeding up the disk (and the whole computer) is to defragment files. This really does help HDDs, so it makes sense to do it.

What is defragmentation? We have already given a detailed answer to this question in another article.

It is important not to overuse this process, since excessive defragmentation only adds wear. Once every 1-2 months (depending on how actively the computer is used) is quite enough to keep the files in optimal shape.
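
Before running a full defragmentation it is worth checking whether it is needed at all. A minimal sketch using the standard Windows defrag utility (run it from an elevated prompt; the /A switch only analyzes and moves nothing):

import subprocess

# "defrag C: /A" analyzes drive C: and reports its fragmentation level
# without changing anything; /U prints progress to the console.
subprocess.run(["defrag", "C:", "/A", "/U"], check=False)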

Method 3: Cleaning up startup

This method affects the speed of the hard drive indirectly rather than directly. If you think your PC boots slowly, programs take a long time to start, and slow disk operation is the reason, that is only partly true. Because the system is forced to launch both needed and unneeded programs at startup, and the hard disk can process Windows' requests only at a limited rate, a slowdown occurs.

You can deal with startup programs with the help of our other article, written using Windows 8 as an example.

Method 4: Change device settings

Slow disk performance may also depend on its operating parameters. To change them, you need to use "Device Manager".

Method 5: Correcting errors and bad sectors

The speed of the hard disk also depends on its condition. If it has file system errors or bad sectors, even simple tasks may take longer. There are two ways to fix existing problems: use specialized software from various manufacturers, or the disk check built into Windows.

We have already covered how to fix HDD errors in another article.
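
For reference, the built-in check can also be started from the command line. A minimal sketch (run from an elevated prompt; without switches chkdsk only reports problems, while /F fixes file system errors and /R additionally scans for bad sectors and may require a reboot for the system volume):

import subprocess

# Read-only check of drive C:; add "/F" or "/R" to actually repair.
subprocess.run(["chkdsk", "C:"], check=False)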

Method 6: Change the HDD connection mode

Even not particularly modern motherboards support two standards: IDE mode, which mostly suits older systems, and AHCI mode, which is newer and optimized for modern use.

Attention! This method is for advanced users. Be prepared for possible problems with booting the OS and other unforeseen consequences. Although the chance of them occurring is extremely small, it is still there.

Many users could switch from IDE to AHCI but do not even know about it and put up with low hard drive speed, even though this is a fairly effective way to speed up an HDD.

First you need to check which mode you have, and you can do this through "Device Manager".

  1. In Windows 7, click "Start" and start typing "Device Manager".

    In Windows 8/10, right-click the "Start" button and select "Device Manager".

  2. Find the "IDE ATA/ATAPI Controllers" branch and expand it.

  3. Look at the names of the connected controllers. You will often see names such as "Standard Serial ATA AHCI Controller" or "Standard PCI IDE Controller", but there are others as well - it all depends on the configuration. If the name contains the words "Serial ATA", "SATA" or "AHCI", the connection uses the SATA protocol; the same logic applies to IDE. The screenshot below shows a system using an AHCI connection - the keywords are highlighted in yellow.

  4. If it cannot be determined this way, the connection type can be checked in the BIOS/UEFI. It is easy to tell: whichever mode is shown in the BIOS menu is the one currently in use (screenshots showing where to find this setting are a little further down).

    If the drive is working in IDE mode, switching to AHCI starts in the registry editor (a minimal sketch of the usual registry change follows after this list).


    If this method didn't work for you, check out other methods for enabling AHCI on Windows at the link below.
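
For reference, the registry change usually described for this switch looks roughly like the sketch below. This is the commonly published procedure for Windows 8/10 with the standard storahci driver, not a guaranteed recipe; Windows 7 uses the msahci service instead. Run it as Administrator, back up the registry first, and switch the controller to AHCI in the BIOS/UEFI on the next reboot.

import winreg

# Tell Windows to start the Microsoft AHCI driver at boot, so the system can
# still start after the controller is switched to AHCI in the BIOS/UEFI.
key_path = r"SYSTEM\CurrentControlSet\Services\storahci\StartOverride"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 0 means "no override": the storahci service starts normally at boot.
    winreg.SetValueEx(key, "0", 0, winreg.REG_DWORD, 0)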

We have discussed the common solutions to the problem of a slow hard drive. They can increase HDD performance and make working with the operating system more responsive and pleasant.

How much does hard disk speed affect overall PC performance?

Nothing pleases hard drive (and flash drive) reviewers more than running some fancy specialized benchmark that shows how many performance "parrots" (arbitrary benchmark points) this or that model scores. All sorts of IOmeters, PCMarks and other "-marks" are, as a rule, specifically designed to show off the differences between disks in operations that work directly with those disks. And they (benchmarks and reviewers alike :)) do an excellent job of it, giving us readers plenty of food for thought about which disk model to prefer in a given case.

But disk benchmarks (and reviewers too!) tell the average user little about exactly how much the comfort of daily work with a personal computer will actually improve (or worsen) if a particular disk is installed in the system. Yes, we will learn that, for example, under ideal conditions a file or directory will be written to or read from our disk twice as fast, or that "Windows boot" will be 15% faster - or rather, not our own boot on our specific computer, but a pattern recorded earlier on some other, usually already obsolete PC that may bear only a distant relationship to our beloved machine. Say we chase a brand-new expensive disk model after reading all the "authoritative" reviews, spend the money, come home - and feel absolutely nothing beyond the knowledge that we have bought a thing someone subjectively considers cool. That is, our PC "ran" before and keeps on "running"; it has not started to "fly" at all. :)

The thing is that in reality the "return" from the speed of the disk subsystem is usually masked considerably by the far-from-instantaneous operation of the computer's other subsystems. As a result, even if we install a hard drive that is three times faster (according to specialized benchmarks), the computer as a whole will not feel anywhere near three times faster; subjectively, at best, we will notice that the graphics editor and our favorite game start a little quicker. Is that what we expected from the upgrade?

In this short article, without claiming comprehensive coverage of this multifaceted issue, we will try to answer what one can realistically expect from a disk subsystem with a given "benchmark" performance. Hopefully this will help the thoughtful reader navigate the subject and decide when, and how much, to spend on the next "very hard" drive.

Methodology

The best way to estimate the contribution of disk subsystem speed to the real work of a PC is - right! - by the example of that very "real work" itself. The most suitable and most widely recognized tool for this today is the professional benchmark BAPCo SYSmark 2007 Preview (which, incidentally, costs serious money). This industrial test simulates real, and rather intensive, user work with a computer by actually launching (often in parallel) various popular applications and performing tasks typical of one kind of user activity or another - reading, editing, archiving and so on. The details of how SYSmark 2007 works have been described many times in the computer press and on the manufacturer's website, so we will not dwell on them here. We will only emphasize the main point: the ideology of this test is that it measures the average reaction time (response) of the computer to user actions - that is, exactly the parameter by which a person judges the comfort of working with a PC, whether their iron friend "crawls", "runs" or "flies".

Unfortunately, SYSmark 2007 Preview was released quite a while ago, and although the manufacturer has patched it regularly (we use version 1.06 of July 2009 here), at its core it contains applications that are by no means the newest, dating from around 2005. But do we ourselves always use the latest versions of programs? Many people, for example, still feel perfectly comfortable on Windows XP (and even test new hardware under it!), not to mention that they are not inspired by the expensive "office arms race" effectively imposed on us by a well-known Redmond company. So we can assume that SYSmark 2007 is still relevant for the "average" PC user, especially since we run it here on the newest OS - Windows 7 Ultimate x64. We can only wish BAPCo a quick recovery from the consequences of the financial crisis in the IT industry and a new version of SYSmark based on 2010-2011 applications.

Based on the results of SYSmark 2007 Preview as a whole and of its E-Learning, VideoCreation, Productivity and 3D subtests, which we ran for two modern PC configurations (based on Intel Core i7 and Core i3 processors) and five "reference" drives of different "disk" performance (that is, ten tested systems in total), we will draw conclusions in this article about how strongly a particular disk affects the comfort of working with a PC - that is, how much it changes the average reaction time of the computer to the actions of an active user.

But of course we will not limit ourselves to SYSmark alone. Besides checking the "disk dependence" of some individual applications, tests and complex benchmarks, we will supplement the estimates of the disk's impact on overall system performance with the system tests of the reasonably modern Futuremark PCMark Vantage suite. Although PCMark's approach is more synthetic than SYSmark's, it too measures the speed of the "whole" computer on typical user tasks across various patterns, and the performance of the disk subsystem is taken into account (much has been written about the internals of PCMark Vantage as well, so we will not go into details here). We also tried the brand-new (this year) Intel HDxPRT test. It is somewhat similar in approach to SYSmark, but oriented toward work with multimedia content, and it estimates the total execution time of a complex scenario rather than the average user response time. However, the disk dependence of this test turned out to be minimal (almost absent) and entirely unrepresentative, so we did not run this lengthy benchmark on all configurations and will not show its results in this article.

Test configurations

For the first experiments we chose two basic desktop system configurations. The first is based on one of the most powerful "desktop" processors, the Intel Core i7-975, and the second on the youngest (at the time of writing) desktop processor of the Intel Core i3 line, the i3-530, priced at just over $100. This lets us check the effect of disk subsystem speed both for a top-end PC and for an inexpensive modern desktop. The performance of the latter, by the way, is quite comparable to that of modern top-end notebooks, so along with "two birds with one stone" we "kill" a third. :) The specific configurations looked like this:

1. Top desktop (or workstation):

  • Intel Core i7-975 processor (HT and Turbo Boost enabled);
  • ASUS P6T motherboard on Intel X58 chipset with ICH10R;
  • 6 GB triple-channel DDR3-1333 memory (timings 7-7-7);
  • video accelerator AMD Radeon HD 5770.

2. Cheap desktop (as well as media center or top-end laptop):

  • Intel Core i3-530 processor (2 cores + HT, 2.93 GHz);
  • Biostar TH55XE motherboard (Intel H55 chipset);
  • 4 GB dual-channel DDR3-1333 memory (timings 7-7-7);
  • video accelerator AMD Radeon HD 5770.

We chose the reference disk subsystems, which served as the system drives in these configurations, so that they were spaced roughly 50 MB/s apart in maximum sequential read/write speed:

  1. a typical SATA SSD with MLC memory (≈250 MB/s read, ≈200 MB/s write);
  2. a typical 3.5-inch 1 TB SATA 7K (≈150 MB/s read/write);
  3. a fast 2.5-inch 500 GB SATA 7K (≈100 MB/s read/write);
  4. a small-capacity SATA "seven-thousander" with read/write speeds around 50 MB/s;
  5. a notebook SATA "five-thousander" with read/write speeds around 50 MB/s.

This gradation allows us, without being tied to particular models, to build a rough grid of reference points from which we can approximately predict how a given disk will behave as the system drive in computers of the above configurations, as well as in intermediate and some older ones. In our experiments, the following specific drives represented each of the five points:

  1. Patriot TorqX PFZ128GS25SSD (IDX MLC SSD 128 GB);
  2. Hitachi Deskstar 7K1000.C HDS721010CLA332 (1 TB);
  3. Seagate Momentus 7200.4 ST950042AS (500 GB);
  4. Hitachi Travelstar 7K100 HTS721010G9SA00 (100 GB);
  5. Toshiba MK1246GSX (5400 rpm, 120 GB).

We emphasize that our test configurations are not meant to assess the impact of the specific hard drive models we used in these tests; rather, the configurations represent the "interests" not only of certain desktops but also (indirectly) of media centers, mini-PCs and powerful laptops. And do not let the video card model we used confuse you - the overwhelming majority of the benchmark results we show here depend only slightly (or not at all) on the performance of the video accelerator.

The speed of the drives themselves

Before proceeding to the results of our study of the disk dependence of system performance, let's take a quick look at the performance of the drives themselves, which we evaluated in our traditional way - with specialized disk benchmarks. The average random access speed of these drives is shown in the following chart.

The SSD is clearly out of reach with its typical 0.09 ms; the desktop 7K moves its heads somewhat faster than the notebook 7K, although the Hitachi 7K100, for example, can compete in average access time with the 3.5-inch "seven-thousanders" of past years that have similar capacity and linear access speed. The linear access speed of our reference drives is shown in the next diagram.

The Toshiba 5K is slightly faster in this parameter than the Hitachi 7K100, but it loses to the latter in random access time. Let's see which turns out to matter more for typical desktop work, and whether there is a real difference between using these disks, which are, after all, of different classes.

As an interesting aside, we will also cite the score that Windows 7, with its built-in benchmark, assigns to each of the reference drives.

We emphasize that for both test systems Windows 7 rated the HD 5770 video accelerator at 7.4 points (for desktop graphics and gaming graphics), while the processor and memory received, respectively, 7.6 and 7.9 for the older system and 6.9 and 7.3 for the younger one. Thus, the disks are the weakest link in these systems (according to Windows 7), so their impact on overall system performance should, in theory, be all the more noticeable.

The last chart in this section is the purely disk-focused PCMark Vantage benchmark, showing the typical standing of the selected drives in traditional hard drive reviews, where reviewers use similar tests to pass their harsh verdicts.

A more than fivefold advantage of the SSD over the HDDs in this particular benchmark (PCMark Vantage, HDD Score) is typical of the current situation (although in a number of other disk benchmarks the gap is smaller). Note, by the way, that the results of the disk tests depend very weakly on the system configuration - they are roughly the same for processors that differ tenfold in price, and are the same within the margin of error for the x64 and x86 cases. We also note that the fastest of the selected hard drives is roughly twice as fast as the slowest in "pure disk" performance. Let's see how this 5-10x gap in disk benchmarks translates into real PC work.

System-wide test results

As well as "predicted" to us Windows index 7, there is no practical difference between systems with the two lowest of the selected reference disks, although these are disks of different classes (7200 and 5400 rpm). It is also interesting that the productive models of SATA 7K form factors are 3.5 and 2.5 inches, and differing from each other by half in capacity (read - on the older one, the heads move about half as much when performing the same system-wide test), almost one and a half times - according to the speed of linear access and noticeably - according to the speed of random access, so these two models in real PCs behave almost the same, that is, you, with all your desire, on your "human" sensations will not feel between such systems no differences in comfort when using typical applications. But after upgrading to one of them from one of our junior benchmark disk subsystems, the increase will average about 15% (recall that they differ by about half in pure disk performance!). This is quite an actual situation both for a laptop (replacing an outdated 5K model with a capacious top 7K model) and for a desktop (upgrade of an old 7K model to a new terabyte model).

But is 15% a lot or a little? The author of these lines believes it is, in fact, very little - almost at the limit of our ability to sense a difference (≈1 dB). As biological beings, we clearly feel a difference in the duration of processes (and perceive differences in other "analog" quantities) when it is at least 30-40 percent; that roughly corresponds to 3 dB on the logarithmic scale of our perception of external stimuli. Anything less and we, frankly, do not care. :) Better still is a twofold difference in duration (6 dB) - then we will definitely say the system or process has clearly sped up. But that, alas, is far from what the SYSmark 2007 diagram above shows. So unless you deliberately sit with a stopwatch in hand or run specialized disk benchmarks after upgrading the HDD, you will hardly notice any increase in the comfort of your work!
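
To spell out the decibel figures used above (a minimal sketch; treating the percentages as ratios on a 20·log10 scale is my reading of the author's numbers, not something stated explicitly in the text):

import math

# Express a relative speed-up as decibels: 20 * log10(ratio).
def speedup_db(ratio):
    return 20 * math.log10(ratio)

for ratio in (1.15, 1.4, 2.0):
    print(f"{ratio:.2f}x faster -> {speedup_db(ratio):.1f} dB")
# 1.15x -> ~1.2 dB (barely perceptible), 1.4x -> ~2.9 dB, 2.0x -> ~6.0 dB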

The upgrade from HDD to SSD is a slightly different case. Here, within the top-end laptop configuration, for example, the gain in average system-wide performance is about 30%. Yes, we can feel that. But we can hardly say the system has started to "fly". Even in a top-end desktop PC, using an SSD instead of a single HDD gives only a 20-40% reduction in the average PC response time to user actions (and that is with a 5-10x difference in the speed of the disks themselves!). I am not saying that on certain specific tasks involving intensive disk use you will not exclaim "wow!". But on the whole the picture is not as rosy as hard drive testers sometimes paint it. Moreover, using an SSD in weak PCs is hardly worthwhile, as the diagram shows - the average gain in comfort stays around the threshold of what a person can distinguish. The greatest effect from an SSD is felt in powerful PCs.

Still, not everything is so bleak! Analyzing the individual SYSmark 2007 patterns, one can draw the following conclusions. For tasks of a certain profile (here, 3D work and the E-Learning scenario) it really makes almost no difference which disk you use - the 5-15% difference between our fastest and slowest reference drives is indistinguishable to us, and there is simply no point in spending money on a new fast drive. On the other hand, in a number of tasks (in particular the VideoCreation scenario, with its heavy video and audio editing) you can still feel the "wind in your ears": for a powerful desktop, the reduction in average PC response time from using an SSD can reach the cherished factor of two (see the diagram below), and for a less powerful desktop, as well as a top-end laptop, the benefit of an SSD in the VideoCreation and Productivity scenarios is quite obvious (in VideoCreation, by the way, top-end hard drives also behave very decently). So we once again arrive at the well-worn postulate: there are no universal solutions, and the configuration of your PC must be chosen according to the specific tasks you intend to solve with it.


But man does not live by SYSmark alone! We also ran a fairly large number of traditional tests and benchmarks on our ten reference systems, trying to reveal at least some disk dependence. Unfortunately, most of these tests are designed precisely to neutralize the influence of the disk subsystem on their results. So neither in numerous games, nor in the complex 3DMark Vantage, nor in SPECviewperf, nor in a number of other tasks, including video encoding in the x264 HD Benchmark 3.0 and Intel HDxPRT 2010 tests (let alone various CPU and memory tests), did we notice any "disk dependence". That is, we merely confirmed what we honestly expected. Incidentally, this is why we did not use the site's traditional processor testing methodology here, which relies mainly on benchmarks of individual applications. We naturally omit the results of these numerous tasks, useless for the topic of this article. A different matter is another comprehensive test of overall PC system performance - PCMark Vantage. Let's look at its results for our reference systems, for both 32-bit and 64-bit application cases.




It would be foolish to deny the obvious: by the PCMark Vantage methodology, the advantage of systems with an SSD is undeniable and sometimes more than double that of the slowest of our reference HDDs (but still not tenfold). And the difference between fast desktop and laptop hard drives is not so obvious here either. Yet all of this is indistinguishable in the "reality given to us", as we know, "in sensations". The best thing to focus on in these diagrams is the top "PCMark" block, which shows the benchmark's main overall system performance index.

Yes, one can argue that this is in a certain sense "synthetics", much less realistic than the imitation of user work in tests like SYSmark. However, the PCMark Vantage patterns account for many nuances that SYSmark does not yet cover, so they too have a right to exist. And the truth, as we know, is "out there" (or "somewhere nearby", as the famously inaccurate translation has it). :)

Conclusion

Our first study of the disk dependence of overall system performance in modern top-end and mid-range PCs, using a dozen reference configurations as an example, showed that in most traditional tasks an ordinary user is unlikely to feel much difference between a faster and a slower disk among those currently on the market or sold not so long ago. In most tasks not directly tied to constant, intensive disk work (copying, writing and reading large volumes of files at maximum speed), the disk dependence of system performance is either absent altogether or too small for us to really feel it as a reduction in the system's average response time to our actions. On the other hand, there are of course quite a few tasks (video processing, professional photo work and the like) in which the disk dependence is noticeable, and there a high-performance disk - an SSD in particular - can genuinely improve the experience of using a PC. But a fast drive, even an SSD, is not a panacea. If your computer is not fast enough, it makes sense to approach an upgrade strictly according to the tasks the PC is supposed to solve - so as not to end up frustrated at money spent to no real benefit.


This is a translation of a superuser.com answer to a question about the impact of free disk space on performance. - translator's note

Does freeing up disk space speed up your computer?

Freeing up disk space does not speed up your computer, at least not by itself. This is a really common myth, and it is so common because filling up your hard drive often happens at the same time as other processes that traditionally can slow down* your computer. SSD performance can degrade as the drive fills up, but this is a comparatively new problem specific to SSDs and, in practice, barely noticeable to ordinary users. In general, a lack of free space is just a red herring (it distracts attention - translator's note).

Author's note: * "Slowdown" is a very broad term. Here I use it for processes that are I/O-bound (i.e., if your computer is doing pure computation, the contents of the disk have no effect), or that are CPU-bound and competing with processes consuming a lot of CPU resources (e.g., an antivirus scanning a large number of files).

For example, such phenomena as:

  • File fragmentation. File fragmentation is a problem**, but a lack of free space, although one of many contributing factors, is not the only cause of fragmentation. Key points:
    Author's note: ** Fragmentation does affect SSDs, because sequential reads are usually noticeably faster than random access, although SSDs do not have the same limitations as mechanical devices (and even then, the absence of fragmentation does not guarantee sequential access, because of wear leveling and similar processes). However, in almost any typical use case this is not a problem. Differences in SSD performance due to fragmentation are usually invisible when launching applications, booting the computer and so on.
    • The likelihood of file fragmentation is not related to the amount of free disk space. It depends on the size of the largest contiguous block of free space on the disk (the "gaps" of free space), which is bounded from above by the total amount of free space. Another factor is the method the file system uses when placing files (more on that below).
      For example: if 95% of the disk is occupied and all the free space forms one contiguous block, a new file will be fragmented with 0% probability (unless, of course, the file system fragments files on purpose - author's note); likewise, the chance that a file being extended will be fragmented does not depend on the amount of free space. On the other hand, a disk that is only 5% full but with the data spread evenly across it has a very high chance of causing fragmentation (see the toy sketch after this list).
    • Note that file fragmentation affects performance only when the fragmented files are accessed. For example: you have a nice, defragmented disk with plenty of free "gaps" on it - a typical situation, and everything works well. At some point, however, you reach a state where no large free blocks remain. You download a large movie and the file ends up heavily fragmented. This will not slow down your computer. Your application files and everything else that was in perfect order do not suddenly become fragmented. The movie may of course take longer to load (though typical movie bitrates are so much lower than hard drive read speeds that it will probably go unnoticed), and it may affect I/O performance while the movie is being read, but nothing else will change.
    • Although fragmentation is a problem, it is often compensated for by caching and buffering in the operating system and hardware. Lazy writes, read-ahead and the like help mitigate the problems fragmentation causes. In general you notice nothing until the level of fragmentation gets very high (I would even venture to say that as long as your swap file is not fragmented, you will notice nothing at all).
  • Another example is search indexing. Suppose automatic indexing is enabled and the operating system does not implement it very well. As you save more and more indexable content (documents and the like), indexing takes longer and longer and can begin to have a noticeable effect on the perceived performance of the computer, eating up both I/O and CPU time. This is not related to free space; it is related to the amount of content being indexed. However, disk space runs out at the same time as more content is stored, so many people draw the wrong connection.
  • Antivirus software. This is very similar to the search indexing example. Suppose an antivirus scans your disk in the background. As there are more and more files to scan, the scanning consumes more and more I/O and CPU resources, possibly interfering with your work. Again, the issue is the amount of content being scanned. More content means less free space, but the lack of free space is not the cause of the problem.
  • Installed programs. Suppose many installed programs start when the computer boots, lengthening the boot time. The slowdown happens because many programs are being loaded; at the same time, the installed programs take up disk space. So the amount of free space shrinks at the same time as the slowdown appears, which can lead to wrong conclusions.
  • Many other similar examples could be given; together they create the illusion of a connection between running out of disk space and degraded performance.
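
To make the "largest contiguous free block" point concrete, here is a toy sketch of my own (not part of the original answer; real file systems are far more sophisticated and the numbers are invented). It shows that whether a new file fragments depends on the largest free extent, not on the total amount of free space.

# Free space modelled as a list of contiguous free extents, in MB.
def would_fragment(free_extents, file_size_mb):
    # A file fragments only if no single extent is big enough to hold it.
    return max(free_extents) < file_size_mb

# Disk A: 95% full, but the remaining space is one contiguous 5 GB hole.
disk_a = [5000]
# Disk B: only 5% used, yet the free space is chopped into many small gaps.
disk_b = [400] * 50   # 20 GB free in total, largest hole only 400 MB

for name, extents in (("A", disk_a), ("B", disk_b)):
    print(f"Disk {name}: {sum(extents)} MB free, largest hole {max(extents)} MB, "
          f"a 700 MB file fragments: {would_fragment(extents, 700)}")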

The above illustrates another reason this myth is so widespread: although running out of free space is not directly responsible for slowdowns, uninstalling applications, deleting indexed or scanned content and so on sometimes (but not always - such cases are outside the scope of this text) does increase performance, for reasons that have nothing to do with the amount of free space. Doing so naturally frees up disk space as a side effect, hence the false link between "more free space" and "a faster computer".

Look at it this way: if your computer is slow because of a large number of installed programs and so on, and you make an exact clone of your hard drive onto a larger one and then expand the partitions to get more free space, the computer will not magically get faster. The same programs load, the same files are fragmented in the same way, the same indexing service runs; nothing changes despite the extra free space.

Does it have something to do with finding a place to put the files?

No, it does not. There are two important points here:

  1. Your hard drive does not search for a place to put files. A hard drive is dumb - it is nothing but a large block of addressable storage that blindly obeys the operating system about where to put things. Modern drives have sophisticated caching and buffering mechanisms designed to predict the operating system's requests based on accumulated experience (some drives are even aware of file systems), but in essence the disk should be thought of as a big, dumb brick of data storage, occasionally with performance-enhancing features.
  2. Your operating system does not search for a place either. There is no "searching". Great effort has gone into solving this problem, because it is critical to file system performance. Data is placed on your disk as determined by the file system - for example FAT32 (old DOS and Windows computers), NTFS (newer Windows systems), HFS+ (Mac), ext4 (some Linux systems) and many others. Even the concept of a "file" or a "directory" ("folder" - translator's note) is just an invention of the file system: hard drives know nothing of such beasts as "files". The details are outside the scope of this text, but in essence all common file systems keep track of free space on the disk, so "searching" for free space is, under normal circumstances (i.e., with the file system in a healthy state), unnecessary. Examples (a small code sketch follows this list):
    • NTFS contains a master file table, which includes special files (such as $Bitmap) and plenty of metadata describing the disk. In essence it keeps track of the next free blocks, so files can be written to disk without scanning the whole disk every time.
    • Another example: ext4 has what is called the "bitmap allocator", an improvement over ext2 and ext3 that helps determine the location of free blocks directly instead of scanning a free list. ext4 also supports "delayed allocation", essentially the operating system buffering data in RAM before writing it to disk in order to make a better placement decision and reduce fragmentation.
    • Lots of other examples.
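
As a trivial illustration that free space is tracked as metadata rather than found by searching, the query below returns instantly no matter how many files the volume holds (the path is just an example):

import shutil

# The figures come from the file system's own accounting structures
# (e.g. NTFS's $Bitmap); nothing scans the disk to produce them.
usage = shutil.disk_usage("/")   # on Windows, use e.g. "C:\\"
gib = 2 ** 30
print(f"total {usage.total // gib} GiB, used {usage.used // gib} GiB, "
      f"free {usage.free // gib} GiB")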
Could it be about moving files back and forth to allocate long enough contiguous space when saving?

No, this does not happen - at least not in any file system I am aware of. Files simply become fragmented.

The process of "moving files back and forth to select a long, contiguous block" is called defragmentation... This does not happen when writing files. This happens when you run the disk defragmentation program. at least in new Windows systems this happens automatically on a schedule, but writing a file is never a reason to start this process.

Being able to avoid moving files around like this is key to file system performance, and it is the reason fragmentation happens at all and defragmentation remains a separate step.

How much free space should you leave on disk?

This is a more difficult question, and I have already written a lot.

Basic rules to follow:

  • For all types of disks:
    • Most importantly, leave enough free space to be able to use the computer effectively. If you are running out of space, you may need a bigger disk.
    • Many disk defragmentation utilities require a certain minimum of free space to work (I believe the one bundled with Windows requires 15% free space in the worst case). They use this space to hold fragmented files temporarily while other objects are rearranged.
    • Leave room for other operating system functions. For example, if your computer does not have much physical RAM and virtual memory is enabled with a dynamically sized paging file, leave enough free space to accommodate the paging file at its maximum size. If you have a laptop that you hibernate, you will need enough free space for the hibernation file. Things like that.
  • For SSDs:
    • For optimal reliability (and, to a lesser extent, performance) an SSD should have some free space, which, without going into details, is used to spread data evenly across the drive so that the same cells are not written over and over (which wears them out). The concept of reserving free space is called over-provisioning. It matters, but many SSDs already have the required spare space set aside: such drives often have tens of gigabytes more capacity than they report to the operating system. Cheaper drives often require you to leave part of the space unallocated yourself, but on drives with built-in over-provisioning this is not required. Note also that the extra space is often taken only from unallocated areas, so simply leaving free space on a partition that occupies the whole disk will not always work; manual over-provisioning requires making the partition smaller than the disk. Check your SSD's user manual. TRIM, garbage collection and similar things also have an effect, but they are beyond the scope of this text.

Personally, I usually buy a new, larger disk when about 20-25% of free space remains. This has nothing to do with performance; it simply means the space will soon run out, so it is time for a new disk.

More important than watching free space is making sure scheduled defragmentation is enabled where appropriate (not on SSDs), so fragmentation never reaches a level with a noticeable impact.

Afterword

There is one more thing worth mentioning. One of the other answers to this question notes that SATA is half-duplex and cannot read and write at the same time. While true, this is a gross oversimplification and largely unrelated to the performance issues discussed here. In reality it simply means that data cannot travel over the wire in both directions at the same time. However