From a79dbb97bfb2fff7905221f2437da19472b9ba23 Mon Sep 17 00:00:00 2001
From: Pádraig Brady
Date: Sun, 19 Mar 2017 22:36:23 -0700
Subject: split: ensure input is processed when filters exit early

commit v8.25-4-g62e7af0 introduced the issue as it broke out of the
processing loop irrespective of the value of new_file_flag, which was
used to indicate whether or not there was a finite number of filters.

For example, this ran forever (as it should):

  $ yes | split --filter="head -c1 >/dev/null" -b 1000

However this exited immediately, due to EPIPE being propagated back
through cwrite and the loop not considering new filters:

  $ yes | split --filter="head -c1 >/dev/null" -b 100000

Similarly, processing would exit early for a bounded number of output
files, resulting in empty data sent to all but the first:

  $ truncate -s10T big.in
  $ split --filter='head -c1 >$FILE' -n 2 big.in
  $ echo $(stat -c%s x??)
  1 0

I was alerted to this code by clang-analyzer, which indicated dead
assignments, which is often an indication of code that hasn't
considered all cases.

* src/split.c (bytes_split): Change the last condition in the
processing loop to also consider the number of files before breaking
out of the processing loop.
* tests/split/filter.sh: Add a test case.
* NEWS: Mention the bug fix.
---
 src/split.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'src/split.c')

diff --git a/src/split.c b/src/split.c
index 966233691..85bc052a8 100644
--- a/src/split.c
+++ b/src/split.c
@@ -654,6 +654,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize, size_t initial_read,
         {
           /* If filter no longer accepting input, stop reading.  */
           n_read = to_read = 0;
+          eof = true;
           break;
         }
       bp_out += w;
@@ -666,7 +667,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize, size_t initial_read,
           opened += new_file_flag;
           to_write -= to_read;
           new_file_flag = false;
-          if (!cwrite_ok)
+          if (!cwrite_ok && opened == max_files)
             {
               /* If filter no longer accepting input, stop reading.  */
               n_read = 0;