The reason people say "never do X" isn't necessarily that it absolutely, positively cannot be done correctly. It may well be possible, but the result may be more complicated and less efficient in space, time, or both. For example, it would be perfectly fine to say "Never build a large e-commerce backend in x86 assembly".
So now to the issue at hand: as you've demonstrated, you can create a solution that parses ls and gives the right result, so correctness isn't an issue.
Is it more complicated? Yes, but we can hide that behind a helper function.
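To illustrate what "hiding it behind a helper function" might look like, here is a hypothetical sketch: the function name `parse_ls` and its much-simplified pipeline are made up for illustration and are not the solution under discussion.

```shell
# Hypothetical helper: callers see one tidy name instead of the plumbing.
# This is a heavily simplified stand-in for the full pipeline discussed below.
parse_ls() {
    ls -1q | tr ' ' '?' | uniq
}

# Usage: iterate over the glob patterns it emits, one per line.
parse_ls | while IFS= read -r pattern; do
    printf 'pattern: %s\n' "$pattern"
done
```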
So now to efficiency:
Space-efficiency: your solution relies on uniq to filter out duplicates, so we cannot generate the results lazily. Depending on the alternative, that's O(1) vs. O(n) extra space, or at best both are O(n).
Time-efficiency: in the best case uniq uses a hash-based approach, which keeps the algorithm O(n) in the number of elements produced; more likely, though, it's O(n log n).
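For comparison, here are the two usual dedup strategies in a pipeline (a generic illustration, not part of the solution being reviewed). Note that uniq by itself only drops adjacent duplicates, so a full dedup either sorts first, O(n log n), or hashes lines already seen, expected O(n):

```shell
printf '%s\n' b a b a | sort | uniq        # sort, then drop adjacent dups: O(n log n)
printf '%s\n' b a b a | awk '!seen[$0]++'  # keep first occurrence via hash: expected O(n)
```

The awk variant also preserves the original order of first occurrences, which sort obviously does not.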
Now the real problem: while your algorithm still isn't looking too bad, I was really careful to say elements produced rather than elements for n, because that makes a big difference. Say you have a file named \n\n: it becomes the glob ??, which matches every two-character file name in the listing. Funnily enough, a file named \n\r also becomes ?? and also returns every two-character file... see where this is going? Exponential instead of linear behavior certainly qualifies as "worse runtime behavior"; it's the difference between a practical algorithm and one you write papers about in theoretical CS journals.
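A quick way to see the collision in isolation (a throwaway sketch; the directory name glob_demo is made up): three distinct three-character names all render as the same a?b pattern under ls -q, so every pattern produced re-matches all three files.

```shell
mkdir -p glob_demo && cd glob_demo || exit 1
touch "$(printf 'a\nb')" "$(printf 'a\tb')" aXb   # three distinct 3-char names

ls -1q          # all three lines print as: a?b

# Each a?b glob expands to all three names, so 3 patterns -> 9 iterations:
for f in $(ls -1q); do stat --format='%n' "./$f"; done | wc -l   # prints 9
```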
Everybody loves examples, right? Here we go: make a folder called "test" and run this Python script from the directory that contains it.
```python
#!/usr/bin/env python3
import itertools

dir = "test/"
filename_length = 3
options = "\a\b\t\n\v\f\r"

for filename in itertools.product(options, repeat=filename_length):
    open(dir + ''.join(filename), "a").close()
```
All this does is generate every string of length 3 over those 7 characters. High school math tells us that ought to be 7^3 = 343 files. Well, that ought to be really quick to print, so let's see:
```shell
$ time for f in *; do stat --format='%n' "./$f" >/dev/null; done

real    0m0.508s
user    0m0.051s
sys     0m0.480s
```
Now let's try your first solution - because I really can't get this thing to work on Linux Mint (which I think speaks volumes about the usability of this method):

```shell
eval set -- $(ls -1qrR ././ | tr ' ' '?' | sed -e '\|^\(\.\{,1\}\)/\.\(/.*\):|{' -e \
's//\1\2/;\|/$|!s|.*|&/|;h;s/.*//;b}' -e \
'/..*/!d;G;s/\(.*\)\n\(.*\)/\2\1/' -e \
"s/'/'\\\''/g;s/.*/'&'/;s/?/'[\"?\$IFS\"]'/g" | uniq)
```
Anyhow, since the above pretty much only filters the result after it gets it, the earlier, simpler solution should be at least as fast (no inode tricks - which are unreliable anyhow, as far as I can see).
So now how long does
```shell
time for f in $(ls -1q | tr " " "?"); do stat --format='%n' "./$f" >/dev/null; done
```
take? Well, I really don't know - checking 343^343 file names takes a while. I'll tell you after the heat death of the universe.