Facts | How does caching work?
We already know what cache memory is; now it is time to look at the principle behind caching technology. Many users believe that when they view a web page, it lives somewhere "out there" in the distant and mysterious online world. In fact, every page you open is first saved to your computer's hard drive. But how can a small cache speed up a memory that is much larger? Because even a huge application consists of individual bytes, and only a few of them are in use at any given moment.
How does caching work in the browser?
An Internet connection is the slowest source of information your computer has. The web browser therefore stores web pages in a special folder on your hard disk, so you are really browsing "from your computer" rather than "from the Internet", as people say.
When you request a web page for the first time, the browser finds it at the given address and saves a copy to your hard drive. On the next visit, the browser checks whether the page on the remote server is newer than its cached copy. If the data has not changed, the browser opens the requested page from the hard drive instead of downloading it again from the Internet. In this situation your computer's hard drive acts as a smaller but relatively fast memory. Of course, the hard drive is fast only relative to the endless Internet; compared with the other memory modules in a local computer, a hard disk is very slow, much slower than, for example, random access memory (RAM).
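The check-the-copy-first logic can be sketched in a few lines of Python. This is a toy model, not a real HTTP client: the `remote_last_modified()` helper and the "download" line are hypothetical stand-ins for real network calls.

```python
# Minimal sketch of browser-style caching, assuming a hypothetical
# remote_last_modified() helper instead of a real HTTP request.
cache = {}  # url -> (content, last-modified stamp)

def remote_last_modified(url):
    """Stand-in for asking the server when the page last changed."""
    return "2013-01-01"

def get_page(url):
    # Serve the local copy if the remote page has not changed since we saved it.
    if url in cache:
        content, stamp = cache[url]
        if stamp == remote_last_modified(url):
            return content          # cache hit: no download needed
    # First visit (or stale copy): "download" the page and keep a copy.
    content = "<html>body of %s</html>" % url
    cache[url] = (content, remote_last_modified(url))
    return content
```

The second call for the same address never reaches the "download" step, which is exactly the saving the browser makes.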
A cache is also built into peripheral devices. Modern hard drives are equipped with extra fast memory of their own: it is small, typically 8 or 16 megabytes, whereas only a few years ago a hard drive cache was just 512 kilobytes. The computer cannot use this memory directly; only the hard disk controller knows how to work with it. From the computer's point of view, those memory chips are part of the disk.
When the computer requests data from the hard drive, the drive's controller first checks its cache and only then turns to the mechanical components. If the requested data is found in the cache, the controller forwards it to the computer without using the disk itself, which saves a lot of time.
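A drive controller's cache behaves like a small read-through cache in front of the slow mechanics. The class below is a toy model under that assumption; the class name, the tiny cache size, and the `data-N` contents are all invented for illustration.

```python
class DiskWithCache:
    """Toy model of a hard-disk controller with a small built-in cache."""

    def __init__(self, cache_size=4):
        self.cache = {}          # block number -> data
        self.cache_size = cache_size
        self.platter_reads = 0   # how often the slow mechanics were used

    def read_block(self, block):
        if block in self.cache:
            return self.cache[block]      # served from fast cache memory
        self.platter_reads += 1           # slow mechanical access
        data = "data-%d" % block          # stand-in for the real sector
        if len(self.cache) >= self.cache_size:
            # Evict the oldest entry to make room (a very crude policy).
            self.cache.pop(next(iter(self.cache)))
        self.cache[block] = data
        return data
```

Reading the same block twice touches the platters only once; the `platter_reads` counter makes the saving visible.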
If a floppy drive has survived in your computer, you can see caching at work for yourself. Open a relatively large file, say a 300-kilobyte text file. First the floppy drive's indicator light flashes; this historic drive is extremely slow, and you have to wait some twenty seconds for the first load. Now close the text file and open it again, without waiting too long in between: it opens at once.
The operating system checks the cache memory it has reserved for the floppy drive and finds the file you want there, so you do not have to wait the whole twenty seconds; data found in the memory subsystem loads much faster. A single access to a floppy disk takes about 120 thousandths of a second (120 milliseconds), while an access to RAM takes about 60 billionths of a second (60 nanoseconds). Even without complex mathematical calculations you can see at once that system memory is much faster than a floppy drive. The same is true of the hard disk, although there the loading delay is less noticeable, since a hard drive operates much faster than a floppy drive.
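The gap between those two figures is easy to spell out; the short calculation below just checks the numbers quoted in the text.

```python
# Checking the figures from the text: one floppy access vs one RAM access.
floppy_access = 120e-3  # 120 milliseconds, expressed in seconds
ram_access = 60e-9      # 60 nanoseconds, expressed in seconds

speedup = floppy_access / ram_access
print("RAM is %.0f times faster than the floppy" % speedup)  # 2000000
```

So a cache hit is two million times faster than a trip to the floppy disk.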
Although we already know the hierarchy of computer memory, it makes sense to revisit the levels that are directly relevant to our story. The first-level cache (L1) is smaller but faster than the second-level cache (L2), which in turn is smaller but faster than system RAM. RAM, for its part, holds less than a hard disk drive but surpasses it in speed. And a hard drive, even the largest, is able to store far less data than the giant global network.
Many readers will immediately ask: "Why not equip the computer entirely with the fastest memory, such as that used in the first-level cache? Then caching would no longer be needed!" From a technical point of view this is possible, but it would be incredibly expensive. The idea of caching is precisely to use a small module of comparatively expensive memory to accelerate a cheaper but slower data store.
As examples we will use outdated computers, whose specifications make the arithmetic simpler. The challenge facing computer designers is to keep the processor running at its maximum speed while minimizing cost. A 500-MHz processor executes 500 million cycles per second, so one cycle of this (highly outdated) processor takes only two nanoseconds. Without first- and second-level caches, an access to RAM takes 60 nanoseconds, which means about 30 cycles are lost on every memory access.
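Those two numbers, 2 ns per cycle and 30 lost cycles, follow directly from the clock rate; here is the arithmetic spelled out.

```python
# The cycle arithmetic from the text, spelled out.
clock_hz = 500e6            # a 500-MHz processor
cycle_time = 1 / clock_hz   # seconds per cycle: 2e-9, i.e. 2 ns
ram_latency = 60e-9         # a 60-ns RAM access

cycles_lost = ram_latency / cycle_time
print("One cycle takes %.0f ns" % (cycle_time * 1e9))      # 2 ns
print("Each RAM access costs %.0f cycles" % cycles_lost)   # 30 cycles
```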
Nevertheless, even a small module of high-speed memory can greatly improve the processor's interaction with RAM. A 256-kilobyte second-level (L2) cache is enough for 64 megabytes of RAM; that is, roughly 256,000 bytes of cache can effectively serve 64,000,000 bytes. How can that work?
Computer science has a theoretical concept called locality of reference. It means that only a small part of a large program is in use at any one time, at least in most cases. The principle of locality of reference applies to the great majority of programs: if an executable file is 10 megabytes, only a few bytes of it are used at a time, and situations where that number grows dramatically are rare. The concept deserves a closer look, because caching in general is built on it.
The principle of locality of reference
Let's see the concept of locality of reference at work in a simple piece of pseudocode:
Output to screen "Enter a number between 1 and 100"
Read input from user
Put value from user in variable X
Put value 100 in variable Y
Put value 1 in variable Z
Loop Y number of times
  Divide Z by X
  If the remainder of the division = 0
    then output "Z is a multiple of X"
  Add 1 to Z
Return to loop
This small program asks the user to enter a number between 1 and 100 and reads the value. The program then divides each number between 1 and 100 by the divisor the user entered, checking whether the remainder of the division is zero; if it is, the program reports that the number is a multiple of the user's input. Once the loop has run a hundred times, the program exits.
Lines 7 through 9 form a loop that executes a hundred times; all the other lines are executed only once. Thanks to caching, lines 7 through 9 run much faster on every pass after the first.
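The pseudocode above translates directly into a runnable sketch; the function name `multiples_of` is my own choice, and the loop body corresponds to the hot lines 7 through 9.

```python
def multiples_of(x, limit=100):
    """List every number from 1 to limit that is a multiple of x."""
    results = []
    z = 1
    while z <= limit:           # the loop that runs 100 times
        if z % x == 0:          # remainder of dividing z by x is zero
            results.append(z)   # "Z is a multiple of X"
        z += 1                  # add 1 to z and return to the loop
    return results

# Example: for x = 25, the multiples between 1 and 100 are:
print(multiples_of(25))  # [25, 50, 75, 100]
```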
This program is so small that it would fit into even the smallest first-level cache, but let's imagine it is a huge program. Most of a program's work happens inside loops: a word processor spends 95% of its time waiting for user input and displaying it on the screen, and that part of the word processor sits in the cache.
This approximate 95-to-5 ratio is what we call locality of reference, and it is thanks to this state of affairs that a cache works efficiently. It is also the answer to the question of how a small cache can optimize a much larger memory, and the reason that building a hugely expensive computer out of ultra-fast memory makes no sense: the ordinary user already gets 95% of the efficiency of that fantastic miracle machine, and at a far more reasonable price.
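That 95% figure can be turned into an average access time with a standard hit-rate calculation. The 2-ns cache access time below is a hypothetical figure chosen for illustration; the 60-ns RAM figure comes from the text.

```python
# The 95%-to-5% ratio turned into an average access time.
hit_rate = 0.95   # fraction of accesses served by the cache
t_fast = 2e-9     # assumed cache access time (2 ns, hypothetical)
t_slow = 60e-9    # RAM access time from the text (60 ns)

average = hit_rate * t_fast + (1 - hit_rate) * t_slow
print("Average access time: %.1f ns" % (average * 1e9))   # 4.9 ns
print("Effective speedup: %.1fx" % (t_slow / average))    # about 12x
```

A small cache thus makes the whole memory system behave roughly twelve times faster, without paying for fast memory everywhere.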
Raw resource capacity does not always translate into a significant gain in efficiency. Our readers already know that adding RAM does not improve performance in every case.
Based on materials from computer.howstuffworks.com
Tags: Browsers, hard drive, memory modules.