Facts | What is cache memory?
Today let’s talk about how caching works. Our readers are already familiar with the computer memory hierarchy and with the fact that cache sits near its very top. It is time to look at the principle of caching itself. Modern computers boast not only a first-level cache (L1) and a second-level cache (L2), but a third level (L3) as well. We will consider the tasks a computer cache performs through the example of a working day of an ordinary librarian in a slightly old-fashioned library, one that stores accumulated human knowledge in the form of paper books.
The main difficulty is that the concept of a “computer cache” is overloaded: one and the same word often means different things. Suffice it to say that besides the processor’s cache memory there are hardware and software disk caches, the page cache, and many other mechanisms united by a common name. Virtual memory, for example, is also a form of caching. So the processor cache is far from the only cache in a computer, and caching plays a huge role in the work of the entire machine.
Caching, through the example of a librarian’s routine
Caching technology is built into a computer’s memory subsystem. Its main objective is to make the computer run faster, even if it is not the most powerful (and therefore most expensive) machine. Caching increases the speed at which your computer solves its tasks.
To understand the basic idea of the technology, let us turn to a simple example from everyday life. Imagine a librarian who hands readers books on request, so that a reader does not have to spend long hours wandering the giant halls of the library looking for the right book.
A visitor comes to the library and asks for a textbook on algebra. The librarian walks into the hall, takes the book off the shelf, returns to the desk, and gives the book to the reader. Time passes, and the reader comes back to the librarian’s desk to return the book. The librarian takes it, puts it back on its shelf, and once again sits at the desk waiting for the next reader.
A new visitor might need the very same algebra textbook. The librarian would have to walk into the hall again, take the book off the shelf, return to the desk, and hand it to the person who needs it.
As this example shows, to give a reader a book the librarian has to repeat the same sequence of actions every time, even for a book that is in high demand.
Is there a way to ease the librarian’s labor? Yes! Let us create a “library cache” and see how it works.
To do this, give the librarian a bag that fits, say, ten books. In computer terminology, the librarian now has a “10-book cache.” Into this bag the librarian will put the books readers return, but no more than ten. This means that for the most popular books the librarian no longer has to go to the stacks every time: they are always at hand.
At the beginning of the working day the librarian’s “cache bag” is empty. The first reader arrives and asks for the algebra textbook. The librarian goes into the hall and returns with the requested book; so far nothing new, everything is as in the previous example. After a while the reader returns the textbook, but the librarian does not put it back on its shelf in the stacks; it goes into the bag instead. The cache is no longer empty: it has content.
Another reader arrives who needs the algebra textbook. Instead of once again making the trip from the desk to the shelf where the book lives, the librarian checks the bag and finds the book there. It is enough to take it out of the bag and hand it over. The tedious journey into the stacks is canceled, no time is wasted, and the reader gets the book much faster than in the previous example.
But it may happen that a visitor needs a book that is not in the bag. In that case the cache actually increases the search time: the librarian must first check the bag and only then (after making sure the needed book is not there) walk to the appropriate shelf in the stacks. One of the most challenging engineering problems is minimizing the delay caused by checking the cache. Even in our example, the time spent checking the bag (the latency) is very small compared to the long journey to the book storage and back, because the cache is small (only 10 books). Latency, by the way, is one of the major limitations of computer memory, which we have written about previously.
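The librarian’s bag can be sketched as a tiny cache in code. This is a minimal illustrative model, not a description of real processor hardware: the class name, the capacity of 10, and the “recently used” eviction rule are all assumptions made for the sketch.

```python
from collections import OrderedDict

class LibrarianCache:
    """Toy model of the librarian's bag: a small cache in front of slow storage."""

    def __init__(self, capacity=10):
        self.capacity = capacity          # the bag holds at most ten books
        self.bag = OrderedDict()          # title -> book, ordered by last use
        self.hits = 0
        self.misses = 0

    def fetch(self, title):
        if title in self.bag:             # cache hit: the book is in the bag
            self.hits += 1
            self.bag.move_to_end(title)   # remember it was used recently
            return self.bag[title]
        self.misses += 1                  # cache miss: walk to the stacks
        book = f"book:{title}"            # stands in for the slow trip to storage
        self.bag[title] = book            # keep the returned book in the bag
        if len(self.bag) > self.capacity: # bag full: put the oldest book back
            self.bag.popitem(last=False)
        return book

cache = LibrarianCache(capacity=10)
cache.fetch("algebra")                    # first request: a miss, go to the stacks
cache.fetch("algebra")                    # second request: a hit, served from the bag
print(cache.hits, cache.misses)           # -> 1 1
```

The second request for the same textbook is served from the bag, which is exactly the speedup the example describes.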
These simple and clear examples reveal several facts everyone should know about caching technology:
The technology involves a fast cache memory of relatively small volume, working in conjunction with a capacious but slower memory
Using a cache involves checking whether it holds the required data. If the data is found, we speak of a “cache hit”; if not, of a “cache miss.” In the latter case the computer has to turn to the larger, slower memory
The maximum cache size is much smaller than that of more capacious storage media (e.g., RAM or, especially, the hard drive)
There can be multiple levels of cache. In the librarian example, the bag is the small but fast memory, while the library stacks act as the capacious but relatively slow memory. This is a single-level cache. Extra layers of cache can be added, for example a shelf of a hundred books right next to the librarian’s desk. The librarian first checks the bag (the first-level cache, L1), then the shelf at hand. Only if the desired book is found in neither place does the librarian go to the stacks. This arrangement is called a two-level cache
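The two-level lookup order described above can be sketched as a short function. This is an illustrative sketch under the article’s analogy; the function name and the choice to fill both levels on a miss are assumptions, not a statement about how real hardware caches are wired.

```python
def lookup(title, l1, l2, storage):
    """Check the bag (L1), then the desk shelf (L2), then walk to the stacks."""
    if title in l1:
        return l1[title], "L1 hit"
    if title in l2:
        l1[title] = l2[title]             # promote into the faster level
        return l2[title], "L2 hit"
    book = storage[title]                 # slowest path: the book depository
    l2[title] = book                      # fill both cache levels on the way back
    l1[title] = book
    return book, "miss"

storage = {"algebra": "Algebra, 3rd ed."}
l1, l2 = {}, {}
print(lookup("algebra", l1, l2, storage)[1])  # -> miss
print(lookup("algebra", l1, l2, storage)[1])  # -> L1 hit
```

Only when both levels come up empty does the slow trip to storage happen, which is the whole point of layering the caches.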
A modern computer runs at incredibly high speed. When the processor accesses main memory (RAM), the access takes a few nanoseconds, i.e., billionths of a second. Suppose one memory access takes 60 nanoseconds. That is very fast, but the processor is even faster: even a fairly slow, outdated processor needs only about 2 nanoseconds per cycle. We will build on these figures below; they are purely illustrative and only help make the story concrete. By the way, our readers already know that adding RAM does not always increase computer performance.
What happens if we embed on the motherboard a special store of information, small but relatively fast, say one that takes only 30 nanoseconds per access? That is twice as fast as an access to RAM. Such a cache is called the second-level cache (L2).
And what if we integrate an even smaller but faster memory directly into the processor chip? We get the first-level cache, which the processor can access at its own speed. As an example, consider the outdated Pentium processor with a clock speed of 233 megahertz. Its first-level cache (L1) was 3.5 times faster than its second-level cache (L2), which in turn was twice as fast as an access to RAM.
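The payoff of these layers can be made concrete with a little arithmetic. The sketch below uses the article’s illustrative timings (2 ns L1, 30 ns L2, 60 ns RAM); the hit rates are assumptions added for the example, not figures from the article.

```python
# Average memory access time with the article's illustrative numbers.
L1_TIME, L2_TIME, RAM_TIME = 2, 30, 60       # nanoseconds (from the article)
l1_hit, l2_hit = 0.90, 0.95                  # assumed hit rates, chosen for the sketch

# Every access pays the L1 check; misses fall through to L2, then to RAM.
amat = L1_TIME + (1 - l1_hit) * (L2_TIME + (1 - l2_hit) * RAM_TIME)
print(f"{amat:.1f} ns")                      # -> 5.3 ns, far below the 60 ns of RAM alone
```

Even with most accesses paying only the 2 ns L1 cost, the occasional trips to L2 and RAM raise the average only slightly, which is why caching makes the whole machine feel faster.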
In modern processors both cache levels are often built directly into the chip. In such cases a third-level cache (L3) appears on the motherboard, acting as a buffer between the microprocessor and the system memory modules.
A computer is a complex device comprising many subsystems, and building a cache between a pair of them increases performance. Consider a concrete example. We have the processor, the fastest component of the computer. It is followed by the first-level cache, then the second-level cache, which caches data from RAM. RAM, in turn, plays the role of a cache for slower devices such as hard disks and optical drives.
The hard drive, too, sometimes takes on the functions of a cache (temporary storage) for your Internet connection; after all, the Internet can also be seen as a huge but not very fast “memory.” In general, a data store located one step higher in the computer memory hierarchy can act as a cache for a slower store below it.
Now that we’ve covered how caching works, let us take a brief break; we will return to this subject to learn more about caching technologies and subsystems.
To be continued
Based on materials from computer.howstuffworks.com
Tags: Memory modules, Memory, Processors.