I have one input file and run a command on it, but I want the output to be saved under the same name as the input file.
I tried the command below, but it leaves the output file blank:
    cat file1 | grep "YISHA" > file1

On a GNU system, you could use sed (the GNU implementation):
    sed -i -n '/YISHA/p' file1

The FreeBSD or OS/X equivalent:
    sed -i '' -n '/YISHA/p' file1

Or using sponge from moreutils:
grep "YISHA" file1 | sponge file1 When removing data, you can write the file over itself and truncate it afterwards:
    {
      grep YISHA                                # never writes more than it has read
      perl -e 'truncate STDOUT, tell STDOUT'    # trim the leftover tail
    } < file 1<> file

Of course here, you can do everything in perl:
    perl -ne 'print if /YISHA/; END{truncate STDOUT, tell STDOUT}' < file 1<> file

perl also has a -i option for in-place editing (the one that GNU sed copied):
    perl -ni -e 'print if /YISHA/' file

But note that like sed, it creates a new file with the same name rather than really rewriting the file in place, which means inode numbers and other attributes of the file could be affected in the process. It will also break symlinks.
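A quick way to see that (a minimal sketch; file1 is a throwaway file here):

    echo YISHA > file1
    ls -i file1                              # note the inode number
    perl -ni -e 'print if /YISHA/' file1
    ls -i file1                              # a different inode: a new file replaced the old one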
Your shell will likely get you a secure temp file on request:
grep "YISHA" <<IN > file $(cat file) IN That will drop blank lines from tail of file though (which shouldn't be relevant unless you're grepping for blank lines). If that should matter, then just echo . after cat in the command substitution and drop the last line.
Another option available to you is dd. For example:
    seq 5000000 >/tmp/temp
    ls -l /tmp/temp
    -rw-r--r-- 1 mikeserv mikeserv 38888896 Mar 11 04:20 /tmp/temp

Just a dummy file large enough to outweigh any pipe buffer.
    </tmp/temp grep 5\$ | dd bs=4k of=/tmp/temp conv=notrunc,sync

You can see that I exceeded the size of any possible pipe buffer:
    949+1 records in
    950+0 records out
    3891200 bytes (3.9 MB) copied, 0.164471 s, 23.7 MB/s

When the notrunc conversion is specified, dd doesn't touch the output file except to write over what it reads in. With seek= you could even put that input data at some other offset in the file if you liked. But... the file still needs truncating. You can see that dd flushed its last input buffer as well: 949+1 records were read in, but 950 were written out - dd synced its last input block to the full 4k size with nulls (which is a generally reasonable block size to choose when accepting piped input from tools that use stdio - such as grep).
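As a sketch of that seek= variant (the offset is made up for illustration, and /tmp/other is a separate output file to keep the example simple):

    # start writing 8KiB into the output file, leaving the first two 4k blocks alone
    </tmp/temp grep 5\$ | dd bs=4k seek=2 of=/tmp/other conv=notrunc,sync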
So...
    ls -l /tmp/temp; tail /tmp/temp
    -rw-r--r-- 1 mikeserv mikeserv 38888896 Mar 11 04:22 /tmp/temp
    4999991
    4999992
    4999993
    4999994
    4999995
    4999996
    4999997
    4999998
    4999999
    5000000

It's still the same file for everything beyond what dd wrote.
But...
    dd if=/dev/null bs=4k seek=950 of=/tmp/temp

...we can truncate it to the point that dd wrote to it, and...
    0+0 records in
    0+0 records out
    0 bytes (0 B) copied, 0.000153783 s, 0.0 kB/s

...it looks like nothing happened, except that...
    ls -l /tmp/temp; tail /tmp/temp
    -rw-r--r-- 1 mikeserv mikeserv 3891200 Mar 11 04:25 /tmp/temp
    4999915
    4999925
    4999935
    4999945
    4999955
    4999965
    4999975
    4999985
    4999995

dd cuts it short that time. In truth, though, there is that last synced partial block at the tail of the file, so...
    tail /tmp/temp | wc -c
    2383

...there are a bunch of nulls at the end.
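To see the padding itself, or to trim it if you know the real byte count (truncate is GNU coreutils; the size shown is hypothetical):

    tail -c 64 /tmp/temp | od -c       # the sync padding shows up as runs of \0
    # truncate -s 3889583 /tmp/temp    # hypothetical exact size, if you know it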
With zsh, that won't work with binary files (files containing NUL bytes). Like sponge, it stores the whole file in memory (and in a temp file with most shells). Some shells give you /dev/fd/num links for here-documents, though; I sometimes do cmd 3<<! with the document body generated by $(cmd >/dev/fd/3), but different shells handle that differently.
grep "YISHA" file1 > file1.tmp ; mv file1{tmp,}catandgrep: it needs to connect the output ofcatto the input ofgrepand the output ofgrepto the output file before the pipeline is run, sofile1is already empty and ready for writing into, whencatopens it.