Resolving 'Argument list too long' Errors in Linux Commands

Learn how to overcome the 'Argument list too long' error when using rm, cp, mv, and other commands on Linux and Unix-like systems.
Have you ever tried to delete, copy, or move a large number of files using commands like rm, cp, or mv, only to be met with the cryptic error message "Argument list too long"? This common issue arises when the total length of the arguments passed to a command exceeds the system's ARG_MAX limit. This article explains why this happens and provides several effective strategies to resolve the problem so that your file operations complete successfully.
Understanding the 'Argument list too long' Error
The "Argument list too long" error, often seen as E2BIG
in system calls, occurs because the operating system has a limit on the maximum size of the command line arguments and environment variables that can be passed to a new process. This limit, typically defined by ARG_MAX
, varies between systems but is usually in the range of 128KB to 2MB. When you use wildcard characters (like *
) with commands like rm
, the shell expands these wildcards into a list of all matching filenames. If this list is excessively long, it can easily exceed the ARG_MAX
limit, causing the command to fail.
flowchart TD
    A[User executes command with wildcard] --> B{Shell expands wildcard to file list}
    B --> C{Is file list length > ARG_MAX?}
    C -- Yes --> D["Error: Argument list too long"]
    C -- No --> E[Command executes successfully]
Process leading to 'Argument list too long' error
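To get a feel for the limit on your own system, you can query it with getconf and then watch a wildcard exceed it. The directory name and file count below are hypothetical, and 2097152 is just one common default value:
getconf ARG_MAX
# 2097152
cd /tmp/manyfiles   # hypothetical directory containing several hundred thousand '.txt' files
rm *.txt            # the shell expands *.txt into one huge argument list before rm ever runs
# bash: /bin/rm: Argument list too long
Querying ARG_MAX and reproducing the error: the failure happens before rm even starts, when the shell tries to pass the expanded list to the kernel.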
Common Scenarios and Solutions
This error most frequently appears when dealing with directories containing thousands or tens of thousands of files. While the immediate fix might seem to be reducing the number of files, the practical approach is to process them in smaller batches or to pipe file lists to commands.
You can check your system's ARG_MAX limit by running getconf ARG_MAX. On many modern Linux systems this value is quite large (e.g., 2097152 bytes, or 2 MB), but it can still be hit with enough files.
Solution 1: Using find with -exec
The find command is one of the most robust ways to handle a large number of files. Its -exec option allows you to execute a command on each found file (or a batch of files), bypassing the shell's argument expansion limit. The + at the end of the -exec clause tells find to build up the argument list for the specified command as much as possible, executing it multiple times if necessary, but always staying within the ARG_MAX limit.
find . -type f -name "*.log" -exec rm {} +
Deleting all '.log' files in the current directory and subdirectories using find and rm.
find /path/to/source -type f -name "*.txt" -exec cp {} /path/to/destination \;
Copying all '.txt' files from a source to a destination using find and cp. Note the \;, which runs cp once for each file.
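The same batched approach also works for mv. The paths below are placeholders; GNU mv's -t option names the destination first, which lets find append many source files at the end where {} + requires them:
find /path/to/source -type f -name "*.csv" -exec mv -t /path/to/destination {} +
Moving all '.csv' files to a destination in large batches using find and GNU mv -t.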
When using -exec command {} +, find will pass as many arguments as possible to command at once. When using -exec command {} \;, find will execute command once for each file found. The + variant is generally more efficient for operations like rm, cp, and mv.
Solution 2: Using find with xargs
Another powerful combination is find piped to xargs. The xargs command reads items from standard input, delimited by blanks or newlines, and executes the specified command one or more times with these items as arguments. This is particularly useful because xargs is designed to build argument lists that respect the ARG_MAX limit.
find . -type f -name "*.tmp" -print0 | xargs -0 rm
Deleting all '.tmp' files using find, -print0, and xargs.
find /path/to/source -type f -name "*.bak" -print0 | xargs -0 mv -t /path/to/destination
Moving all '.bak' files to a destination directory using find, -print0, and xargs.
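GNU xargs can additionally cap the batch size with -n or run batches in parallel with -P. The batch size and job count below are arbitrary illustrative values, not tuned recommendations:
find . -type f -name "*.cache" -print0 | xargs -0 -n 500 -P 4 rm
Deleting '.cache' files 500 at a time across 4 parallel rm processes using xargs -n and -P.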
Always use find ... -print0 | xargs -0 ... when dealing with filenames that might contain spaces or special characters. The -print0 option makes find output filenames separated by null characters, and -0 tells xargs to expect null-terminated input, preventing issues with whitespace in filenames.
Solution 3: Iterating with a for loop (for smaller batches)
For situations where the number of files is large but not astronomically so, or when you need more control over each file, a simple for loop in the shell can be effective. This approach processes files one by one or in small, manageable groups: the shell expands the wildcard internally and passes only a single filename to each command invocation, so no argument list ever approaches the ARG_MAX limit. However, it can be slower than find -exec ... + or xargs for very large sets of files.
for f in *.old; do rm "$f"; done
Deleting all '.old' files in the current directory using a for loop.
for file in /path/to/files/*; do
  if [ -f "$file" ]; then
    cp "$file" /path/to/destination/
  fi
done
Copying files from a directory to a destination using a for loop, checking that each item is a regular file.
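If you need per-file logic and recursion into subdirectories, one common bash pattern is to feed find's null-delimited output into a while read loop. The paths and the mv action here are placeholders for whatever per-file handling you need:
find /path/to/files -type f -name "*.old" -print0 |
while IFS= read -r -d '' file; do
  mv "$file" /path/to/archive/
done
Processing files one at a time from a null-delimited find stream, which stays safe for filenames containing spaces or newlines.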
When using for loops, always quote your variables (e.g., "$f") to correctly handle filenames containing spaces or other special characters.
Choosing the Right Solution
The best solution depends on your specific needs:
- find ... -exec ... +: Generally the most efficient and recommended method for rm, cp, and mv when dealing with many files, as it batches arguments intelligently.
- find ... | xargs ...: Very flexible and powerful, especially with -print0 and -0 for handling tricky filenames. Excellent for more complex operations or when the target command doesn't support multiple arguments directly.
- for loop: Suitable for smaller sets of files or when you need per-file logic. Can be slower due to spawning a new process for each file or operation.
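If you are unsure which category you are in, a rough file count can help you decide; the path below is a placeholder:
find /path/to/dir -maxdepth 1 -type f | wc -l
Counting the regular files directly inside a directory (the count is approximate if any filenames contain newlines).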
By understanding the ARG_MAX limit and employing these robust command-line tools, you can effectively manage large numbers of files without encountering the "Argument list too long" error, making your Linux file operations more reliable and efficient.