Disabling Disk Cache in Linux: Understanding and Control

Learn how to effectively disable or clear the disk cache in Linux, understand its implications, and when it's necessary for specific use cases like benchmarking or low-memory environments.
Linux systems, like most modern operating systems, extensively use disk caching to improve performance. The kernel intelligently caches frequently accessed data in RAM, reducing the need to read from slower disk storage. While this is generally beneficial, there are specific scenarios where you might need to disable or clear the disk cache. This article will guide you through the methods to achieve this, explain the underlying mechanisms, and discuss the implications of such actions.
Understanding Linux Disk Caching
Before diving into disabling the cache, it's crucial to understand how Linux manages it. The kernel uses available RAM to store disk blocks that have been recently read or written. This cache is often referred to as the 'page cache' or 'buffer cache'. When an application requests data, the kernel first checks whether it's in the cache. If it is (a 'cache hit'), the data is delivered much faster than if it had to be read from the physical disk. The kernel dynamically adjusts the size of the cache based on memory pressure and application demands. Note that tools like free -h report cache memory in the 'buff/cache' column and count it toward 'available' memory, because the kernel can readily reclaim it for applications.
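You can read the same figures directly from /proc/meminfo, which is where free gets its data. A small sketch using standard /proc fields:

```shell
# The Buffers and Cached fields in /proc/meminfo count reclaimable
# cache memory; free(1)'s "buff/cache" column reports the same data.
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo

# Convert the page-cache figure to MiB (values in /proc/meminfo are in kB).
awk '/^Cached:/ {printf "Page cache: %.1f MiB\n", $2/1024}' /proc/meminfo
```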
flowchart LR
    A[Application Request] --> B{Data in Cache?}
    B -->|Yes| C["Serve from RAM (Fast)"]
    B -->|No| D["Read from Disk (Slow)"]
    D --> E[Store in Cache]
    E --> C
Simplified Linux Disk Cache Flow
Why Disable or Clear the Disk Cache?
While caching is a performance booster, there are legitimate reasons to clear or temporarily disable it:
- Benchmarking: To get accurate disk I/O performance metrics, you often need to ensure that data is read directly from the disk, not from RAM. Clearing the cache before each benchmark run provides a consistent 'cold' start.
- Low Memory Environments: In extremely memory-constrained systems, if the cache grows too large, it might starve other critical applications of RAM, leading to excessive swapping.
- Testing Disk Integrity: When testing new disk drives or file systems, you might want to ensure that all read/write operations hit the physical disk.
- Specific Application Requirements: Some specialized applications might manage their own caching and require the OS cache to be minimized or cleared to avoid conflicts or ensure data consistency.
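The benchmarking case above can be sketched as a small script. The file path and size are arbitrary examples; the cache drop needs root, so it is guarded here and only warns if it cannot be performed:

```shell
# Sketch of a cold-cache read benchmark (assumed path /var/tmp/bench.dat).
TESTFILE=/var/tmp/bench.dat

# Create a 32 MiB test file on first run.
[ -f "$TESTFILE" ] || dd if=/dev/zero of="$TESTFILE" bs=1M count=32 2>/dev/null

sync   # flush dirty pages so the drop below can discard everything
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null \
    || echo "warning: need root to drop caches; read may be warm" >&2

# With the cache dropped, this read is served from the physical disk.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
```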
Methods to Clear the Disk Cache
Linux provides mechanisms to clear the page cache without rebooting. There isn't a direct way to 'disable' it permanently in the same sense as turning off a service, as it's a fundamental part of the kernel's memory management. However, you can force the kernel to drop cached data.
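While the cache cannot be switched off globally, individual applications can opt out per file by opening it with O_DIRECT. dd exposes this via iflag=direct; a sketch (the file path is an arbitrary example, and not every filesystem supports direct I/O, hence the fallback message):

```shell
# Create a small test file, then read it while bypassing the page cache.
dd if=/dev/zero of=/var/tmp/direct.dat bs=1M count=8 2>/dev/null

# iflag=direct opens the input with O_DIRECT, skipping the cache entirely.
dd if=/var/tmp/direct.dat of=/dev/null bs=1M iflag=direct 2>/dev/null \
    || echo "O_DIRECT not supported on this filesystem"
```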
# Clear only the page cache
sync
echo 1 > /proc/sys/vm/drop_caches

# Clear dentries and inodes
sync
echo 2 > /proc/sys/vm/drop_caches

# Clear page cache, dentries, and inodes (most common for a full clear)
sync
echo 3 > /proc/sys/vm/drop_caches
The sync command is crucial before dropping caches: it flushes pending buffered writes to disk so that dirty pages become clean and droppable. Dropping caches only discards clean, already-written data, so the operation itself is non-destructive. The /proc/sys/vm/drop_caches file accepts three values:
- 1: Frees the page cache.
- 2: Frees dentries and inodes.
- 3: Frees the page cache, dentries, and inodes.
After executing one of these commands, the memory previously used by the specified caches is marked as free and becomes available to other applications. You can verify the effect by running free -h before and after the command.
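This before-and-after check can be scripted against /proc/meminfo; a sketch, with the drop guarded since it requires root:

```shell
# Snapshot the page-cache size, attempt the drop, and compare.
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
sync
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null \
    || echo "note: dropping caches requires root" >&2
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached before: ${before} kB, after: ${after} kB"
```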
Finally, note that writing to /proc/sys/vm/drop_caches requires root privileges. A plain sudo echo 3 > /proc/sys/vm/drop_caches will not work, because the redirection is performed by your unprivileged shell before sudo takes effect; instead, run the command inside a root shell (sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches') or pipe through sudo tee (echo 3 | sudo tee /proc/sys/vm/drop_caches).