16.2.6 MMU Improvements | Summary and Q&A

TL;DR
Implementing a hierarchical page map in the MMU improves efficiency by reducing the number of physical pages required to hold the page table.
Key Insights
- 📟 Hierarchical page maps reduce the physical memory required to hold page tables.
- ❓ Supporting multiple contexts can place significant demands on physical memory.
- 🥳 Context switches lower the TLB hit ratio and increase the average memory access time.
- ®️ Including a context-number register in the MMU reduces the impact of context switches.
- ❓ Caching physical addresses instead of virtual addresses avoids cache invalidation on context switches.
- ⌛ Performing the MMU translation and the cache lookup in parallel minimizes the impact on memory access time.
- 🫥 Increasing cache associativity grows cache capacity without increasing the number of address bits used to select a cache line.
Transcript
There are a few MMU implementation details we can tweak for more efficiency or functionality. In our simple page-map implementation, the full page map occupies some number of physical pages. Using the numbers shown here, if each page map entry occupies one word of main memory, we'd need 2^20 words (or 2^10 pages) to hold the page table. If we have ...
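The transcript's arithmetic is easy to verify. Here is a quick sketch, assuming the lecture's numbers: a 32-bit virtual address, 4 KB pages, and one-word (4-byte) page-map entries.

```python
# Single-level page-table cost, using the numbers from the transcript:
# 32-bit virtual addresses, 4 KB pages, 4-byte (one-word) page-map entries.
VA_BITS = 32
PAGE_SIZE = 4096                # bytes -> 12-bit page offset
ENTRY_SIZE = 4                  # bytes per page-map entry

offset_bits = (PAGE_SIZE - 1).bit_length()          # 12
vpn_bits = VA_BITS - offset_bits                    # 20
entries = 1 << vpn_bits                             # 2^20 page-map entries
table_pages = (entries * ENTRY_SIZE) // PAGE_SIZE   # 2^10 physical pages

print(f"2^{vpn_bits} words = {table_pages} pages to hold the page table")
```

That works out to a 4 MB table per context, which is why the hierarchical scheme discussed below only materializes the page-map segments actually in use.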
Questions & Answers
Q: How does the hierarchical page map work in MMU implementation?
The hierarchical page map uses the top 10 bits of the virtual address to index a "page directory" that indicates the physical page holding the page-map segment for that region of the virtual address space; the remaining virtual-page-number bits then index within that segment. Memory is used efficiently because only the page-map segments actually in use need to be resident.
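A minimal sketch of that two-level walk, assuming the 10/10/12 address split implied by the numbers above (10 bits of directory index, 10 bits of page-map index, 12 bits of page offset). The dictionaries standing in for the page directory and page-map segments are illustrative, not the lecture's hardware:

```python
# Two-level translation of a 32-bit virtual address, split 10/10/12.
def translate(va: int, page_directory: dict) -> int:
    dir_index = (va >> 22) & 0x3FF   # top 10 bits index the page directory
    map_index = (va >> 12) & 0x3FF   # next 10 bits index one page-map segment
    offset    = va & 0xFFF           # low 12 bits: byte offset within the page

    segment = page_directory[dir_index]  # physical page holding that segment
    ppn = segment[map_index]             # physical page number for this VPN
    return (ppn << 12) | offset

# One directory entry plus one 1024-entry segment suffices to map a small
# program -- no need for the full 2^20-entry flat table.
page_directory = {0: {3: 0x42}}                      # maps VPN 3 -> PPN 0x42
print(hex(translate(0x0000_3ABC, page_directory)))   # -> 0x42abc
```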
Q: What impact do context switches have on the TLB hit ratio?
Context switches require reloading the page-table pointer and invalidating all entries in the TLB. This sharply lowers the TLB hit ratio and increases the average memory access time until the TLB refills with translations for the new context.
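In pseudocode terms, the policy described here amounts to flushing the whole TLB whenever the page-table pointer changes. A toy sketch (class and method names are hypothetical):

```python
class SimpleTLB:
    """Toy TLB whose entries carry no context tag."""

    def __init__(self):
        self.page_table_ptr = 0
        self.entries = {}                  # VPN -> PPN

    def insert(self, vpn, ppn):
        self.entries[vpn] = ppn

    def lookup(self, vpn):
        return self.entries.get(vpn)       # None signals a TLB miss

    def context_switch(self, new_page_table_ptr):
        self.page_table_ptr = new_page_table_ptr
        self.entries.clear()               # every cached translation is stale
```

Every access after the switch misses until the TLB refills, which is exactly the hit-ratio penalty described above.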
Q: How can MMUs reduce the impact of context switches?
Some MMUs include a context-number register that, when concatenated with the virtual page number, forms the query to the TLB. This removes the need to invalidate the TLB entries during a context switch, reducing the impact on average memory access time.
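A sketch of the tagged variant, extending the toy TLB above: each entry is keyed by (context number, VPN), so a switch just changes the context register and entries from other contexts simply stop matching (names again hypothetical):

```python
class TaggedTLB:
    """Toy TLB whose entries are tagged with a context number."""

    def __init__(self):
        self.context = 0
        self.entries = {}                  # (context, VPN) -> PPN

    def insert(self, vpn, ppn):
        self.entries[(self.context, vpn)] = ppn

    def lookup(self, vpn):
        return self.entries.get((self.context, vpn))

    def context_switch(self, new_context):
        self.context = new_context         # no invalidation needed
```

Entries left over from other contexts survive the switch and become live again when their context is rescheduled, so the hit ratio recovers much faster.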
Q: How can a cache be incorporated into the memory system with an MMU?
Placing the cache between the MMU and main memory, so that it caches physical rather than virtual addresses, avoids cache invalidations during context switches. Although translation now sits on the path to the cache, performing the MMU translation and the cache lookup in parallel minimizes the increase in average memory access time.
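A sketch of why the parallel lookup works, under the common assumption that the cache's index bits fall entirely within the 12-bit page offset: those bits are identical in the virtual and physical address, so set selection can begin before translation finishes, and the physical page number is needed only for the final tag compare. The line size and set count below are illustrative:

```python
# Cache index bits drawn entirely from the page offset, so indexing with
# the virtual address gives the same set as the physical address would.
PAGE_OFFSET_BITS = 12
BLOCK_OFFSET_BITS = 4      # 16-byte cache lines (illustrative)
INDEX_BITS = 8             # 256 sets; 4 + 8 <= 12, fits in the page offset

def cache_index(addr: int) -> int:
    return (addr >> BLOCK_OFFSET_BITS) & ((1 << INDEX_BITS) - 1)

va = 0x0000_3ABC
pa = (0x42 << PAGE_OFFSET_BITS) | (va & 0xFFF)   # same page offset
assert cache_index(va) == cache_index(pa)        # set chosen before translation
```

This is also why increasing associativity is the natural way to grow such a cache: capacity multiplies without adding index bits, so the index still fits inside the page offset.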
Summary & Key Takeaways
- It is possible to tweak MMU implementation details for better efficiency and functionality.
- Using a hierarchical page map reduces the number of physical pages needed to hold the page table.
- With multiple contexts, the demands on physical memory resources can become large.