Does a full hard drive affect performance? HDD spindle speed

A hard drive is a device whose speed is low but sufficient for everyday needs. Due to certain factors, however, it can become much lower still, so that launching programs and reading and writing files slow down and work in general becomes uncomfortable. By taking a number of steps to increase the speed of the hard drive, you can achieve a noticeable improvement in the responsiveness of the operating system. Let's look at how to speed up a hard drive in Windows 10 and other versions of this operating system.

The speed of a hard drive is affected by several factors, ranging from how full it is to BIOS settings. Some hard drives simply have a low operating speed by design, which depends on the spindle speed (revolutions per minute). Old or cheap PCs usually have an HDD running at 5400 rpm, while more modern and expensive ones use 7200 rpm.

Objectively, these are weak figures compared to other components and to the capabilities of modern operating systems. The HDD is a very old format and is slowly being replaced. We have previously compared HDDs and SSDs and discussed how long SSDs last.

When one or more of these factors affects the hard drive, it starts to work even slower, which becomes clearly noticeable to the user. To increase speed, you can use anything from the simplest methods of organizing files to changing the disk's operating mode by choosing a different interface.

Method 1: Cleaning your hard drive from unnecessary files and junk

This seemingly simple step can speed up the disk. The reason it is important to keep your HDD clean is simple: an overfilled drive indirectly affects its speed.

There may be a lot more junk on your computer than you think: old Windows restore points, temporary data from browsers, programs and the operating system itself, leftover installers, duplicates of the same files, and so on.

Cleaning all of this up by hand is time-consuming, so you can use one of the many programs that tidy up the operating system. You can get acquainted with them in our other article.

If you do not want to install additional software, you can use the built-in Windows tool entitled "Disk Cleanup". Of course, this is not as effective, but it can also be useful. In this case, you will need to clean out temporary browser files yourself, of which there are also a lot.
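If you prefer to script this step, here is a minimal sketch in Python that simply launches the built-in Disk Cleanup tool (cleanmgr.exe) for a chosen drive. The /d switch and the tool's behavior should be verified on your specific Windows version; this is an illustration, not an official recipe.

```python
import subprocess

# Minimal sketch: launch the built-in Disk Cleanup tool (cleanmgr.exe) for a
# given drive. The "/d" switch selects the drive; verify it against the
# cleanmgr documentation for your Windows version before relying on this.
def run_disk_cleanup(drive: str = "C:") -> None:
    subprocess.run(["cleanmgr.exe", "/d", drive], check=False)

if __name__ == "__main__":
    run_disk_cleanup("C:")
```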

You can also create an additional volume (partition) and move files you rarely need onto it. The main disk will then be less cluttered and will start working faster.

Method 2: Smart Use of a File Defragmenter

One of the favorite tips for speeding up the disk (and the entire computer) is defragmenting files. This is really relevant for HDDs, so it makes sense to use it.

What is defragmentation? We have already given a detailed answer to this question in another article.

It is very important not to overdo this process, since running it too often does more harm than good. Once every 1-2 months (depending on how actively you use the PC) is enough to keep the files in an optimal state.
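As a rough illustration, the sketch below (Python, assuming Windows 7 or later and an elevated prompt) first asks the built-in defrag.exe to analyze the volume and only then, on confirmation, runs the actual defragmentation. Check the flags against the defrag documentation on your system.

```python
import subprocess

# Minimal sketch: analyze first, defragment only if you decide it is needed.
# /A = analyze only, /U = print progress, /V = verbose output (flags as
# documented for defrag.exe on Windows 7+; verify on your system).
def analyze_then_defragment(volume: str = "C:") -> None:
    result = subprocess.run(["defrag", volume, "/A", "/U"],
                            capture_output=True, text=True)
    print(result.stdout)
    answer = input("Run full defragmentation now? [y/N] ").strip().lower()
    if answer == "y":
        subprocess.run(["defrag", volume, "/U", "/V"])

if __name__ == "__main__":
    analyze_then_defragment("C:")
```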

Method 3: Cleaning startup

This method does not directly affect the speed of the hard drive. If the PC boots slowly, programs take a long time to launch, and you blame the slow disk, that is not entirely accurate. Because the system is forced to launch both necessary and unnecessary programs at startup, and the HDD can only process Windows requests at a limited rate, everything is perceived as slower.

You can learn how to manage startup from our other article, written using Windows 8 as an example.

Method 4: Change device settings

Slow operation of a disk may also depend on its operating parameters. To change them you need to use "Device Manager".

Method 5: Correcting errors and bad sectors

The operating speed of a hard drive also depends on its condition. If it has file system errors or bad sectors, even simple tasks may be processed more slowly. There are two ways to correct existing problems: use specialized software from various manufacturers, or the disk-checking tool built into Windows.
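For the built-in option, a minimal sketch might look like this (Python calling the standard chkdsk utility; run it from an administrator prompt, and note that a check of the system volume is usually scheduled for the next reboot).

```python
import subprocess

# Minimal sketch of the built-in check: chkdsk with /F fixes file system
# errors, /R additionally locates bad sectors and recovers readable data
# (and implies /F). Requires an elevated (administrator) prompt.
def check_disk(volume: str = "C:", scan_bad_sectors: bool = True) -> int:
    args = ["chkdsk", volume, "/F"]
    if scan_bad_sectors:
        args.append("/R")
    completed = subprocess.run(args)
    return completed.returncode

if __name__ == "__main__":
    check_disk("C:")
```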

We have already talked about how to resolve HDD errors in another article.

Method 6: Changing the hard drive connection mode

Even motherboards that are not particularly modern support two standards: IDE mode, which mainly suits older systems, and AHCI mode, which is newer and optimized for modern use.

Attention! This method is intended for experienced users. Be prepared for possible problems with OS loading and other unforeseen consequences. Although the chance of them occurring is extremely small, it is still present.

Many users who could switch from IDE to AHCI do not even know about the option and put up with the low speed of their hard drive. Meanwhile, this is quite an effective way to accelerate an HDD.

First you need to check what mode you have, and you can do this through "Device Manager".

  1. On Windows 7, click "Start" and start typing "Device Manager".

    On Windows 8/10, right-click "Start" and select "Device Manager".

  2. Find the "IDE ATA/ATAPI Controllers" section and expand it.

  3. Look at the names of the connected controllers. You will often see names such as "Standard SATA AHCI Controller" or "Standard PCI IDE Controller", but others are possible - it all depends on your configuration. If the name contains the words "Serial ATA", "SATA" or "AHCI", the connection uses the SATA protocol; the same logic applies to IDE. In the screenshot below you can see that an AHCI connection is in use - the keywords are highlighted in yellow.

  4. If you cannot determine the connection type this way, you can look in the BIOS/UEFI: whichever mode is specified in the BIOS menu is the one currently in use (screenshots showing where to find this setting are given a little further down).

    If IDE mode is currently in use, switching to AHCI should begin in the Registry Editor.
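As an illustration of that registry step, the sketch below (Python, standard winreg module) only reads the current Start value of the AHCI driver service. The service names storahci (Windows 8/10) and msahci (Windows 7) and the meaning of the values are assumptions you should verify for your system; back up the registry before changing anything, then switch the mode in BIOS/UEFI and reboot.

```python
import winreg

# Minimal sketch (assumption: on Windows 8/10 the AHCI driver service is
# "storahci", on Windows 7 it is "msahci"). Start = 0 usually means the
# driver loads at boot (AHCI usable); Start = 3 means on-demand only.
def read_ahci_start_value(service: str = "storahci") -> int:
    path = rf"SYSTEM\CurrentControlSet\Services\{service}"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        value, _type = winreg.QueryValueEx(key, "Start")
    return value

if __name__ == "__main__":
    for svc in ("storahci", "msahci"):
        try:
            print(svc, "Start =", read_ahci_start_value(svc))
        except FileNotFoundError:
            print(svc, "service key not found on this system")
```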


    If this method does not help, check out other ways of enabling AHCI in Windows using the link below.

    We have covered common ways of solving the problem of low hard drive speed. They can increase HDD performance and make working with the operating system more responsive and enjoyable.

I am often asked what determines the speed of a computer. I decided to write an article on this topic in order to examine in detail the factors that affect system performance. After all, understanding this topic makes it possible to speed up your PC.

Hardware

The speed of the computer directly depends on your PC configuration. The quality of components affects the performance of the PC, namely...

HDD

The operating system accesses the hard drive thousands of times. Moreover, the OS itself is located on the hard drive. The larger its volume, spindle rotation speed and cache memory, the faster the system will work. The amount of free space on drive C (where Windows is usually installed) also matters: if less than 10% of the total volume is free, the OS slows down. We wrote an article about this... Check whether there are unnecessary files and programs on the drive. Once a month it is advisable to defragment the hard drive. Also note that an SSD is much more productive than an HDD.
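A quick way to see how full the system drive is (and whether it is below the ~10% rule of thumb mentioned above) is a few lines of Python; the threshold here is only a heuristic, not a hard limit.

```python
import shutil

# Minimal sketch: report free space on a drive and warn when it drops below
# the ~10% rule of thumb discussed above.
def free_space_report(path: str = "C:\\", warn_below: float = 0.10) -> None:
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    print(f"{path}: {usage.free / 2**30:.1f} GiB free "
          f"of {usage.total / 2**30:.1f} GiB ({free_fraction:.0%})")
    if free_fraction < warn_below:
        print("Warning: less than 10% free - consider cleaning up or moving files.")

if __name__ == "__main__":
    free_space_report("C:\\")
```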

RAM

The amount of RAM is the most important factor influencing computer speed. This temporary memory stores intermediate data, machine code and instructions. In short, the more, the better. To check the RAM for errors, you can use the memtest86 utility.

CPU

The processor, the computer's brain, is as important as RAM. Pay attention to the clock speed, cache memory and number of cores. Overall, the speed of your computer depends on the clock speed and cache, while the number of cores provides multitasking.

Cooling system

High temperature of components has a negative impact on computer speed. Overheating can damage your PC. Therefore, the cooling system plays a big role.

Video memory

Motherboard

Software

The speed of your computer also depends on the installed software and OS. For example, if you have a weak PC, Windows XP or another lightweight system will suit you better, since they are less demanding on resources. For medium and high configurations, any OS is suitable.

Many startup programs load the system, resulting in freezes and slowdowns. To achieve high performance, close unnecessary applications. Keep your PC clean and tidy, update it on time.

Malware slows down Windows. Use antivirus software and run deep scans regularly. You can read how in another article. Good luck.


This is a translation of an answer to a question about the effect of free disk space on performance from superuser.com - translator's note.

Does freeing up disk space speed up your computer?

Freeing up disk space does not speed up your computer, at least not by itself. This is a really common myth. It is so widespread because filling up your hard drive often happens at the same time as other processes that traditionally can slow down* your computer. SSD performance can degrade as the drive fills up, but that is a comparatively new problem specific to SSDs and, in practice, hardly noticeable to ordinary users. In general, a lack of free space is just a red rag for a bull (i.e. it merely distracts attention - translator's note).

Author's note: * "Slowdown" is a term with a very broad interpretation. Here I use it for processes that are I/O bound (i.e. if your computer is doing pure computation, the disk contents have no effect), or CPU bound and competing with processes that consume a lot of CPU resources (for example, an antivirus scanning a large number of files).

For example, such phenomena as:

  • File fragmentation. File fragmentation is a problem**, but a lack of free space, although it is one of many contributing factors, is not the only cause of fragmentation. Key points:
    Author's note: ** Fragmentation affects SSDs too, because sequential reads are usually much faster than random access, even though SSDs do not have the same limitations as mechanical devices (and even then, the absence of fragmentation does not guarantee sequential access, because of wear leveling and similar processes). However, in almost any typical use case this is not a problem: SSD performance differences related to fragmentation are usually invisible when launching applications, booting the computer and so on.
    • The likelihood of file fragmentation is not related to the amount of free disk space. It depends on the size of the largest contiguous block of free space on the disk (i.e. the "holes" of free space), which is bounded from above by the amount of free space, and on the method the file system uses to allocate files (more on this later). A small simulation after this list illustrates the point.
      For example: if 95% of the disk is occupied and all of the remaining free space is one contiguous block, a new file will be fragmented with 0% probability (unless, of course, a normal file system deliberately fragments files - author's note); likewise, the probability of an existing file becoming fragmented as it grows does not depend on the amount of free space. On the other hand, a disk only 5% full, with the data spread evenly across it, has a very high probability of causing fragmentation.
    • Please note that file fragmentation affects performance only when the fragmented files are accessed. For example: you have a good, defragmented disk with plenty of large free "holes" on it. A typical situation; everything works well. At some point, however, no large free blocks remain. You download a large movie, and the file ends up highly fragmented. This will not slow down your computer. Your application files and other files that were in perfect order do not suddenly become fragmented. The movie may, of course, take longer to load (although typical movie bitrates are so much lower than hard drive read speeds that this will probably go unnoticed), and it may affect I/O performance while the movie is loading, but nothing else changes.
    • Although fragmentation is a problem, the problem is often compensated for by caching and buffering on the part of the operating system and hardware. Lazy write, read ahead, etc. help solve problems caused by fragmentation. In general you don't notice anything until the fragmentation level gets too high (I'd even go so far as to say that as long as your pagefile isn't fragmented, you won't notice anything)
  • Another example is search indexing. Suppose you have automatic indexing enabled and the operating system does not implement it very well. As you save more and more indexable files (documents and the like), indexing takes longer and starts to have a noticeable impact on the observed performance of the computer, eating up both I/O and CPU time. This has nothing to do with free space, only with the amount of data being indexed. However, running out of disk space happens at the same time as more content is stored, so many people draw the wrong connection.
  • Antiviruses. The situation is very similar to the search-index example. Suppose an antivirus is scanning your disk in the background. You accumulate more and more files to scan, the scanning consumes more and more I/O and CPU resources, and it may interfere with performance. Again, the problem is related to the amount of content being scanned. More content means less free space, but the lack of free space is not the cause of the problem.
  • Installed programs. Let's say you have a lot of programs installed that start when your computer boots, which increases boot time. This slowdown occurs because a lot of programs are loading. At the same time, installed programs take up disk space. Consequently, the amount of free space decreases simultaneously with the deceleration, which can lead to incorrect conclusions.
  • Many other similar examples can be given that create the illusion of a connection between running out of disk space and degraded performance.
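To make the fragmentation point above concrete, here is a deliberately toy Python model (not any real file system): a new file avoids fragmentation only if it fits into the largest contiguous run of free blocks, regardless of how much free space there is in total.

```python
# Toy model: a disk is a list of blocks; a new file fits contiguously only if
# the single largest run of free blocks is long enough. Total free space
# alone tells you nothing about this.
def largest_free_run(free_blocks):
    best = run = 0
    for is_free in free_blocks:
        run = run + 1 if is_free else 0
        best = max(best, run)
    return best

disk_size, file_size = 10_000, 200

# Case 1: 95% full, but all the free space is one contiguous 500-block hole.
dense = [False] * disk_size
dense[4000:4500] = [True] * 500

# Case 2: only 5% full, but the used blocks are spread evenly, chopping the
# free space into short runs of 19 blocks.
sparse = [True] * disk_size
for i in range(0, disk_size, 20):
    sparse[i] = False

for name, disk in (("95% full, one big hole", dense),
                   ("5% full, evenly scattered", sparse)):
    run = largest_free_run(disk)
    verdict = "fits contiguously" if run >= file_size else "will be fragmented"
    print(f"{name}: largest free run = {run} blocks -> new file {verdict}")
```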

The above illustrates another reason this myth is so common: although running out of free space is not in itself the cause of slowdowns, uninstalling applications, deleting indexed and scanned content, and so on sometimes (but not always; such cases are beyond the scope of this text) leads to increased performance for reasons unrelated to the amount of free space. Doing so frees up disk space as a side effect, and so here too a false connection appears between "more free space" and a "faster computer".

Consider this: if your computer is running slowly because of a large number of installed programs and the like, and you clone your hard drive exactly onto a larger one and then expand the partitions to get more free space, the computer will not suddenly become faster. The same programs load, the same files are fragmented in the same way, the same indexing service runs; nothing changes despite the increase in free space.

Does this have something to do with finding a place to put the files?

No, not related. There are two important points here:

  1. Your hard drive does not look for a place to put files. A hard drive is dumb - it does not look for anything. It is a large block of addressable storage that blindly obeys the operating system's placement decisions. Modern drives are equipped with sophisticated caching and buffering mechanisms designed to predict the operating system's requests based on accumulated experience (some drives are even aware of file systems), but essentially you should think of a disk as a big, dumb brick of data storage, sometimes with performance-enhancing features.
  2. Your operating system does not look for a place either - there is no "searching". Great effort has gone into solving this problem, because it is critical to file system performance. Data is placed on your disk as determined by the file system, for example FAT32 (older DOS and Windows computers), NTFS (newer Windows systems), HFS+ (Mac), ext4 (some Linux systems) and many others. Even the concept of a "file" or a "directory" ("folder" - translator's note) is an invention of the file system: hard disks know nothing about such animals as "files". The details are beyond the scope of this text. In essence, though, all common file systems include a way of tracking free space on the disk, so "searching" for free space, under normal circumstances (i.e. when the file system is healthy), is not necessary. Examples (a toy sketch after this list illustrates the idea):
    • NTFS contains a master file table, which includes special files (such as $Bitmap) and a lot of metadata describing the disk. Essentially, it keeps track of the free blocks, so files can be written to disk without scanning it every time.
    • Another example: ext4 has what is called a "bitmap allocator", an improvement over ext2 and ext3 that helps determine the location of free blocks directly instead of scanning a list of free blocks. Ext4 also supports delayed ("lazy") allocation, which essentially means the operating system buffers data in RAM before writing it to disk, so that it can make a better placement decision and reduce fragmentation.
    • Many other examples.
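As a purely illustrative toy (loosely inspired by the bitmap idea behind NTFS's $Bitmap and ext block bitmaps, and vastly simpler than any real implementation), the sketch below shows why no "searching" of the disk is needed: the allocator already knows which blocks are free.

```python
# Toy free-space bitmap, vastly simplified. The point: the file system already
# knows which blocks are free, so placing a file is a cheap lookup in this
# structure, not a scan of the whole disk surface.
class BitmapAllocator:
    def __init__(self, total_blocks):
        self.free = [True] * total_blocks   # True = block is free

    def allocate(self, n_blocks):
        """Return block numbers for a new file, preferring one contiguous run."""
        run_start, run_len = None, 0
        for i, is_free in enumerate(self.free):
            run_len = run_len + 1 if is_free else 0
            if run_len == n_blocks:
                run_start = i - n_blocks + 1
                break
        if run_start is not None:
            chosen = list(range(run_start, run_start + n_blocks))
        else:
            # No contiguous run is long enough: the file simply gets fragmented.
            chosen = [i for i, is_free in enumerate(self.free) if is_free][:n_blocks]
            if len(chosen) < n_blocks:
                raise OSError("out of space")
        for i in chosen:
            self.free[i] = False
        return chosen

alloc = BitmapAllocator(16)
print(alloc.allocate(4))   # [0, 1, 2, 3] - one contiguous extent
print(alloc.allocate(4))   # [4, 5, 6, 7] - the next free extent
```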
Maybe it's a matter of moving files back and forth to allocate a long enough contiguous space when saving?

No, that's not happening. At least not in any of the file systems I'm familiar with. The files are simply fragmented.

The process of "moving files back and forth to allocate a long contiguous block" is called defragmentation. This does not happen when writing files. This happens when you run a disk defragmenter program. at least on newer Windows systems this happens automatically on a schedule, but writing a file is never the reason to start this process.

Being able to avoid moving files around in this way is key to file system performance, and it is why fragmentation happens at all and why defragmentation exists as a separate step.

How much free space should you leave on your disk?

This is a more complex question, and I have already written quite a lot.

Basic rules to follow:

  • For all disk types:
    • The most important thing is to leave enough space to use the computer effectively. If you're running out of space, you may need a larger disk.
    • Many disk defragmentation utilities require a certain minimum of free space to work (the one bundled with Windows, it seems, requires about 15% in the worst case). They use this space to temporarily hold fragmented files while other objects are rearranged.
    • Leave room for other operating system functions. For example, if your computer does not have much physical RAM and virtual memory is enabled with a dynamically sized page file, you should leave enough free space to accommodate the page file at its maximum size. If you have a laptop that you put into hibernation, you will need enough free space for the hibernation state file. Things like that (a rough estimate of such a minimum is sketched after this list).
  • Regarding SSD:
    • For optimal reliability (and, to a lesser extent, performance), an SSD should have some free space which, without going into detail, is used to spread data evenly across the drive so that the same cells are not written to constantly (which exhausts their resource). This reserving of free space is called over-provisioning. Importantly, many SSDs already come with the required reserve allocated: such drives often have several tens of gigabytes more capacity than they report to the operating system. Cheaper drives often require you to leave some space unallocated yourself, while drives with built-in over-provisioning do not. Note, too, that the reserve is often taken only from unallocated areas, so simply leaving free space on a partition that occupies the whole disk will not always work; manual over-provisioning requires making the partition smaller than the disk. Check your SSD's user manual. TRIM, garbage collection and similar mechanisms also play a role, but they are beyond the scope of this text.
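Putting the rules of thumb above into one rough, purely illustrative calculation (the percentages and sizes are the ones mentioned in this answer, not exact requirements for any particular system):

```python
# Rough, illustrative estimate only: the 15% figure and the "hibernation file
# is roughly the size of RAM" rule are rules of thumb from this answer.
def minimum_free_space_gib(disk_gib, ram_gib, is_hdd=True, hibernates=False):
    needed = 0.15 * disk_gib if is_hdd else 0.0   # room for the defragmenter
    needed += ram_gib                             # dynamically sized page file
    if hibernates:
        needed += ram_gib                         # hibernation state file
    return needed

# Example: a 500 GiB laptop HDD with 8 GiB of RAM and hibernation enabled.
print(f"{minimum_free_space_gib(500, 8, is_hdd=True, hibernates=True):.0f} GiB")
# -> 91 GiB
```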

Personally, I usually buy a new, larger disk when I have about 20-25% free space left. This has nothing to do with performance; it simply means the space will soon run out, and therefore it is time to buy a new drive.

More important than watching free space is making sure scheduled defragmentation is enabled where it is appropriate (not on an SSD), so that fragmentation never reaches a level large enough to have a noticeable impact.

Afterword

There is one more thing worth mentioning. One of the other answers to this question notes that SATA's half-duplex mode does not allow reading and writing at the same time. While this is true, it is a gross oversimplification and is largely unrelated to the performance issues discussed here. In reality, it simply means that data cannot travel over the wire in both directions simultaneously. However…

The data transfer rate of the disk interface bus is far from the only parameter that affects the performance of the hard drive as a whole. On the contrary, the performance of hard drives with the same interface type sometimes differs very significantly. What is the reason?

The fact is that a hard drive is a collection of a large number of different electronic and electromechanical components. The performance of the mechanical components is significantly inferior to that of the electronics, which include the interface bus. Overall disk performance, unfortunately, is determined by the speed of the slowest component. The bottleneck when transferring data between the drive and the computer is the internal transfer rate - a parameter determined by the speed of the drive's mechanics. Therefore, even in the fastest exchange modes, PIO 4 and UltraDMA, the maximum possible interface throughput is almost never reached during real work with the drive. To assess the performance of the mechanical components, and of the whole drive, you need to know the following parameters.

Disk rotation speed is the number of revolutions the platters (the individual disks) of the hard drive make per minute. The higher the rotation speed, the faster data is written or read. The typical value for most modern EIDE drives is 5400 rpm; some newer drives have platters spinning at 7200 rpm. The technical limit reached today, 10,000 rpm, is implemented in the Seagate Cheetah series of SCSI drives.

Average seek time is the average time required to move the head assembly from an arbitrary position to a given track for reading or writing data. The typical value for new hard drives is 10 to 18 ms, a seek time of 11-13 ms can be considered good, and the fastest SCSI models achieve less than 10 ms.

Average access time is the average period from the moment a disk command is issued until data exchange begins. This is a composite parameter: it includes the average seek time plus half the disk rotation period (since the required data may be in an arbitrary sector of the target track). It determines the delay before a requested block starts to be read, and hence the overall performance when working with a large number of small files.
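A quick worked example of this composite parameter (the 12 ms seek time below is just an assumed figure within the typical range quoted above):

```python
# Average access time ≈ average seek time + half a rotation period.
def average_access_time_ms(seek_ms, rpm):
    rotation_period_ms = 60_000 / rpm            # one full revolution, in ms
    return seek_ms + rotation_period_ms / 2      # on average, half a turn

for rpm in (5400, 7200, 10_000):
    print(f"{rpm} rpm: {average_access_time_ms(12.0, rpm):.1f} ms")
# 5400 rpm -> 17.6 ms, 7200 rpm -> 16.2 ms, 10000 rpm -> 15.0 ms
# (all with an assumed 12 ms average seek time)
```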

Internal transfer rate is the speed at which data is exchanged between the disk interface and the media (the platters). Its value differs noticeably for reading and writing and is determined by the rotation speed, the recording density, the characteristics of the positioning mechanism and other drive parameters. It is this speed that decisively influences drive performance in the steady state (when reading a large contiguous block of data). The external (interface) transfer rate exceeds the internal one only when data is exchanged between the interface and the drive's cache without immediately accessing the platters. Therefore, one more parameter affects drive performance, namely...

...cache memory size. Cache memory is a regular electronic RAM installed on a hard drive. Data, after being read from the hard drive, simultaneously with its transfer to the computer’s memory, also ends up in the cache memory. If this data is needed again, it will not be read from the platters, but from the cache buffer. This allows you to significantly speed up data exchange. To increase the efficiency of cache memory, special algorithms have been developed that identify the most frequently used data and place it in the cache, which increases the likelihood that the next time the data is accessed, it will be requested from the electronic RAM - a so-called “cache hit” will occur. Naturally, the larger the cache memory, the faster the disk usually runs.
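A back-of-the-envelope illustration of why cache hits matter (all numbers here are illustrative assumptions, not measurements): the effective access time is a weighted average of the fast cache and the slow platters.

```python
# Effective access time as a weighted average of cache hits and platter
# accesses. The 0.1 ms and 15 ms figures are illustrative assumptions.
def effective_access_ms(hit_rate, cache_ms=0.1, platter_ms=15.0):
    return hit_rate * cache_ms + (1.0 - hit_rate) * platter_ms

for hit_rate in (0.0, 0.3, 0.6, 0.9):
    print(f"hit rate {hit_rate:.0%}: {effective_access_ms(hit_rate):.2f} ms")
# 0% -> 15.00 ms, 30% -> 10.53 ms, 60% -> 6.06 ms, 90% -> 1.59 ms
```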

How much does hard drive speed affect overall PC performance?

Nothing pleases a reviewer of hard drives and flash drives more than running some sophisticated specialized benchmark that shows how many "parrots" of performance a given model scores. All sorts of IOmeters, PCMarks and other "-marks" are, as a rule, specifically designed to show the difference between drives as clearly as possible in operations performed directly on those drives. And they (both the benchmarks and the reviewers :)) do their job excellently, giving us readers rich food for thought about which disk model to prefer in a given case.

But disk benchmarks (and reviewers!) tell the average user little about exactly how much (and whether) the comfort of their daily work with a personal computer will improve or worsen if a particular disk is installed in their system. Yes, we learn that, for example, under ideal conditions a file or directory will be written to or read from the new disk twice as fast, or, say, that "Windows boot" will be 15% faster - or rather, not the boot itself on our specific computer, but a special test pattern recorded earlier on some other, completely unfamiliar and usually already obsolete PC, which may bear only a distant relation to our own machine. So, having read all sorts of "reputable" reviewers, we chase a brand-new, expensive disk model, spend the money, bring it home - and get absolutely nothing except the awareness that we have bought a thing that is cool in somebody's subjective opinion... That is, our PC "ran" before and continues to "run"; it has not started to "fly" at all. :)

But the whole point is that in reality the "return" from disk subsystem speed is, as a rule, noticeably masked by the far-from-instantaneous operation of the computer's other subsystems. As a result, even if we install a hard drive that is three times faster (according to specialized benchmarks), the computer as a whole will not feel three times faster at all; subjectively, at best, we will notice that the graphics editor and our favorite game start a little quicker. Is that what we expected from the upgrade?

In this short article, without in any way claiming to cover this multifaceted question comprehensively, we will try to answer what one can really expect from a disk subsystem of a given "reference" level of performance. We hope this will help the thoughtful reader find their bearings and decide when, and how much, to spend on the next "very hard" drive.

Methodology

What is the best way to assess the contribution of disk subsystem speed to real PC performance? Right - by the example of that PC's "real work". The most suitable and universally recognized tool for this today is the professional benchmark BAPCo SYSmark 2007 Preview (which, incidentally, costs a lot of money). This industrial test simulates real, and very active, user work with a computer by actually launching (often in parallel) popular applications and performing tasks typical of a particular kind of user activity: reading, editing, archiving and so on. The design and operation of SYSmark 2007 have been described many times in the computer press and on the manufacturer's website, so we will not dwell on them here. Let us stress only the main thing: the ideology of this test is that it measures the average reaction time of the computer to user actions - exactly the parameter by which a person judges the comfort of working with a PC, whether their iron friend "crawls", "runs" or "flies".

Unfortunately, SYSmark 2007 Preview was released long ago, and although the manufacturer has patched it regularly (here we use version 1.06 from July 2009), at its core are applications that are far from the newest, dating from around 2005. But do we ourselves always use the latest versions of programs? Many people, for example, still feel perfectly comfortable with Windows XP (and even test new hardware under it!), to say nothing of being uninspired by the multi-hundred-dollar "office arms race" essentially imposed on us by a well-known Redmond company. So we can assume that SYSmark 2007 is still relevant for the "average" PC user, especially since we run it here on the newest OS, Windows 7 Ultimate x64. We can only wish BAPCo a quick recovery from the consequences of the financial crisis in the IT industry and a new version of SYSmark based on applications from 2010-2011.

Based on the results of SYSmark 2007 Preview as a whole and of its E-Learning, VideoCreation, Productivity and 3D subtests, which we ran for two modern PC configurations (based on Intel Core i7 and Core i3 processors) and five "reference" drives of different "disk" performance (ten test systems in total), we will draw conclusions in this article about how much a given disk affects the comfort of working with a PC - that is, how much it changes the computer's average reaction time to the actions of an active user.

But, of course, we will not limit ourselves to SYSmark. In addition to checking the "disk dependence" of some individual applications, tests and complex benchmarks, we will supplement the assessment of the disk's impact on overall system performance with the system tests from the more or less modern Futuremark PCMark Vantage package. Although PCMark's approach is more synthetic than SYSmark's, its various patterns also measure the speed of the "whole" computer in typical user tasks, with the disk subsystem's performance taken into account (plenty has been written about the internals of PCMark Vantage, so we will not go into detail here). We also tried the new (this year) Intel HDxPRT test. In approach it is somewhat reminiscent of SYSmark, but applied to work with multimedia content, and it evaluates not the average user reaction time but the total execution time of a given complex scenario. However, the disk dependence of this test turned out to be minimal (almost absent) and completely unindicative, so we did not run this lengthy benchmark for all configurations and do not show its results in this article.

Test configurations

For our first experiments we chose two basic desktop system configurations. The first is based on one of the most powerful desktop processors, the Intel Core i7-975; the second on the youngest (at the time of writing) desktop processor of the Intel Core i3 line, the i3-530, priced just above $100. Thus we will check the effect of disk subsystem speed both on a top-end PC and on an inexpensive modern desktop. The performance of the latter, incidentally, is quite comparable to that of modern top-end laptops, so at the same time we kill a third bird with the same stone. :) The specific configurations were as follows:

1. Top desktop (or workstation):

  • Intel Core i7-975 processor (HT and Turbo Boost activated);
  • ASUS P6T motherboard (Intel X58 chipset with ICH10R);
  • 6 GB of three-channel DDR3-1333 memory (timings 7-7-7);

2. Cheap desktop (as well as a media center or high-end laptop):

  • Intel Core i3-530 processor (2 cores + HT, 2.93 GHz);
  • Biostar TH55XE motherboard (Intel H55 chipset);
  • 4 GB dual-channel DDR3-1333 memory (timings 7-7-7);
  • AMD Radeon HD 5770 video accelerator.

We selected the reference disk subsystems that served as system drives for these configurations so that their maximum sequential read/write speeds would differ in steps of roughly 50 MB/s:

  1. typical SATA SSD on MLC memory (≈250 MB/s read, ≈200 MB/s write);
  2. typical 3.5-inch SATA seven-thousander 1 TB (≈150 MB/s read/write);
  3. fast 2.5-inch SATA seven-thousander with 500 GB (≈100 MB/s read/write);
  4. a small-capacity SATA "seven-thousander" with read/write speeds around 50 MB/s;
  5. a laptop SATA "five-thousander" with read/write speeds around 50 MB/s.

This gradation allows us, without being tied to particular models, to build a conditional grid of reference points from which we can roughly predict the behavior of a given disk as the system drive in computers of the configurations described above, as well as in intermediate and some older ones. In our experiments, the following specific models represented each of the five points:

  1. Patriot TorqX PFZ128GS25SSD (IDX MLC SSD 128 GB);
  2. Hitachi Deskstar 7K1000.C HDS721010CLA332 (1 TB);
  3. Seagate Momentus 7200.4 ST950042AS (500 GB);
  4. Hitachi Travelstar 7K100 HTS721010G9SA00 (100 GB);
  5. Toshiba MK1246GSX (5400 rpm, 120 GB).

We emphasize that our test configurations are not meant to assess the impact of these specific drive models; rather, they represent the "interests" not only of certain desktops but also (indirectly) of media centers, mini-PCs and powerful laptops. And do not let the video card model we used confuse you: the vast majority of the benchmark results shown here depend only insignificantly (or not at all) on the performance of the video accelerator.

Performance of the drives themselves

Before moving on to the results of our study of the disk dependence of system performance, let us take a brief look at the performance of the drives themselves, which we evaluated in our traditional way - with specialized disk benchmarks. The average random access time of these drives is shown in the following diagram.

It is clear that the SSD is out of reach with its typical 0.09 ms, and the desktop "seven-thousander" moves its heads a little faster than the laptop one, although, for example, the Hitachi 7K100 can compete in average access time with a number of 3.5-inch "seven-thousanders" of previous years that have similar capacity and linear read speed. The latter, for our reference disks, is shown in the following diagram.

Toshiba's "five thousandth" is slightly faster in this parameter than the "seven thousandth" Hitachi 7K100, but is inferior to the latter in terms of random access time. Let's see what is more important for typical desktop operation and whether there is a real difference from using these disks, which are essentially different classes.

As a point of interest, along the way we will give the score with which the benchmark built into Windows 7 rates the usefulness of each reference drive.

We emphasize that on both test systems Windows 7 rated the HD 5770 video accelerator at 7.4 points (for desktop and gaming graphics), while the processor and memory received 7.6 and 7.9 points respectively on the older system and 6.9 and 7.3 on the younger one. Thus, the disks are the weakest link in these systems (according to Windows 7), which should, in theory, make their influence on overall system performance all the more noticeable.

The last item in this section is a diagram with the results of the purely disk tests of PCMark Vantage, showing the typical standing of the selected drives in traditional hard drive reviews, where reviewers use similar tests to deliver their harsh verdicts.

The more-than-fivefold advantage of the SSD over the HDDs in this particular benchmark (PCMark Vantage, HDD Score) is typical at the moment (although in a number of other desktop benchmarks the gap is smaller). Note, by the way, that the disk test results depend very little on the system configuration: they are roughly the same for processors differing tenfold in price, and identical within the margin of error for the x64 and x86 cases. We also note that the older of the HDDs we selected is roughly twice as fast as the youngest in "pure disk" performance. Let us see how this 5-10x gap in disk benchmarks translates into actual PC performance.

System-wide test results

Just as the Windows 7 index "predicted", there is practically no difference between systems with the two youngest of our reference disks, even though they belong to different classes (7200 and 5400 rpm). It is also interesting that the fast SATA "seven-thousanders" in the 3.5-inch and 2.5-inch form factors, which differ by a factor of two in capacity (meaning that on the larger one the heads travel roughly half as far during the same system-wide test), by roughly one and a half times in linear speed, and noticeably in random access time, behave almost identically in real PCs. In other words, even if you tried, you would not feel any difference in comfort between such systems during typical work with applications. Upgrading to either of them from one of our junior reference disks, however, gives an average gain of about 15% (recall that in pure disk performance they differ by roughly a factor of two!). This is a perfectly realistic scenario both for a laptop (replacing an outdated "five-thousander" with a capacious top-end "seven-thousander") and for a desktop (upgrading an old "seven-thousander" to a new terabyte drive).

But is 15% a lot or a little? The author of these lines thinks it is actually very little - practically at the limit of what we can distinguish (≈1 dB). As biological beings, we clearly feel a difference in the duration of processes (and perceive differences in other "analog" quantities) when that difference is at least 30-40 percent, which corresponds to roughly 3 dB on the logarithmic scale of our perception of external stimuli. Otherwise, we do not really care. :) Better still is a twofold difference (6 dB) - then we can say for sure that the system or process has clearly sped up. But that, alas, is far from what the SYSmark 2007 diagram above shows. So unless you deliberately sit with a stopwatch or run specialized disk benchmarks after upgrading your HDD, you are unlikely to notice any increase in the comfort of your work!
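For the curious, here are the author's decibel figures reproduced numerically (assuming the 20·log10 "amplitude" convention, since that is what matches the numbers in the text):

```python
import math

# Convert a speedup ratio to decibels using the 20*log10 convention assumed
# to underlie the figures in the text above.
def ratio_to_db(speedup):
    return 20 * math.log10(speedup)

for speedup in (1.15, 1.4, 2.0):
    print(f"{speedup:.2f}x faster ≈ {ratio_to_db(speedup):.1f} dB")
# 1.15x ≈ 1.2 dB (barely at the threshold), 1.40x ≈ 2.9 dB, 2.00x ≈ 6.0 dB
```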

Upgrading from an HDD to an SSD is a slightly different case. Here, even within a top-end laptop configuration, for example, the gain in average system-wide performance is about 30%. Yes, we can feel that. But we can hardly say the system has started to "fly". Even on a top-end desktop PC, using an SSD instead of a single HDD reduces the average PC response time to user actions by only 20-40% (and that with a 5-10x difference in the speed of the disks themselves!). I am not saying that on certain tasks involving heavy disk use you will never say "wow!" But in general the picture is not as rosy as hard drive testers sometimes paint it. Moreover, using SSDs in weak PCs, as this diagram shows, is hardly worthwhile: the average gain in comfort is around the threshold of what we can distinguish at all. The greatest effect from an SSD is felt in powerful PCs.

However, not everything is so gloomy! For example, by analyzing the standings in the different SYSmark 2007 scenarios, one can reach the following conclusions. For tasks of a certain profile (here, 3D work and the E-Learning scenario) it really makes almost no difference which disk you use (the 5-15% difference between our fastest and slowest reference drives is indistinguishable to us), and there is no point at all in spending money on a new fast disk. On the other hand, in a number of tasks (in particular the VideoCreation scenario, with its active video and audio editing) you can still feel the "wind in your ears": for a powerful desktop, the reduction in average PC response time from using an SSD can reach the coveted factor of two (see the diagram below), and even for a less powerful desktop or a top-end laptop the benefit of an SSD in the VideoCreation and Productivity scenarios is quite obvious (in VideoCreation, by the way, top-end HDDs also acquit themselves very well). Thus we once again arrive at the well-worn postulate: there are no universal solutions, and your PC configuration should be chosen according to the specific tasks you intend to solve with it.


But not only SYSmark!.. We also ran a fairly large number of traditional tests and benchmarks on our ten reference systems, trying to find at least some disk dependence. Unfortunately, most of these tests are designed precisely to neutralize the influence of the disk system on the result. So neither in numerous games, nor in the complex 3DMark Vantage, nor in SPECviewperf and a number of other tasks, including video encoding in the x264 HD Benchmark 3.0 and Intel HDxPRT 2010 tests (and all the more so in various processor and memory tests), did we notice any "disk dependence". In other words, we simply confirmed what we had expected. Incidentally, this is why we did not use our site's traditional processor-testing methodology here, which relies mainly on benchmarks of individual applications. We naturally omit the results of these numerous tasks, "useless" for the topic of this article. A different matter is another comprehensive test of overall system performance - PCMark Vantage. Let us look at its results for our reference systems in the 32-bit and 64-bit cases.




It is foolish to deny the obvious: by the PCMark Vantage evaluation methodology, the advantage of systems with an SSD is undeniable and sometimes more than twofold compared with the youngest of our reference HDDs (but still not tenfold). The difference between fast desktop and laptop hard drives is not so obvious here either. And yet everything remains indistinguishable in "the reality given to us in sensations", as the saying goes. In this case it is best to focus on the top "PCMark" block in these diagrams, which shows the benchmark's main overall system performance index.

Yes, one can argue that this is in a certain sense "synthetic" and much less realistic than the simulation of user work in tests like SYSmark. However, the PCMark Vantage patterns take into account many nuances that SYSmark does not yet cover, so they too have a right to exist. And the truth, as we know, is "out there" (a translation which, as we also know, is inaccurate). :)

Conclusion

Our first study of the disk dependence of the overall performance of modern high-end and mid-range PCs, on the example of a dozen reference configurations, showed that in most traditional tasks an ordinary user is unlikely to feel much difference between a faster and a slower disk among those currently on the market or sold until recently. In most tasks not directly tied to constant, active disk work (copying, writing and reading large volumes of files at maximum speed), the disk dependence of system performance is either absent altogether or not large enough for us to genuinely perceive it as a reduction in the system's average response time to our actions. On the other hand, there are of course many tasks (video processing, professional photo work and the like) in which the disk dependence is noticeable, and there the use of high-performance disks, SSDs in particular, can have a positive effect on our perception of the PC. But a fast disk or an SSD is not a panacea. If your computer is not fast enough, it makes sense to approach an upgrade strictly according to the tasks you intend to solve with that PC - so that you do not suddenly find yourself disappointed by money spent without real benefit.