How can I check if a file can be created or truncated/overwritten in bash?
The user calls my script with a file path that will either be created or overwritten at some point in the script, like foo.sh file.txt or foo.sh dir/file.txt.
The create-or-overwrite behavior is much like the requirement for putting the file on the right side of the > output redirection operator, or passing it as an argument to tee (in fact, passing it as an argument to tee is exactly what I'm doing).
Before I get into the guts of the script, I want to make a reasonable check if the file can be created/overwritten, but not actually create it. This check doesn't have to be perfect, and yes I realize that the situation can change between the check and the point where the file is actually written - but here I'm OK with a best effort type solution so I can bail out early in the case that the file path is invalid.
Examples of reasons the file couldn't be created:
- the path contains a directory component, like dir/file.txt, but the directory dir doesn't exist
- the user doesn't have write permission in the specified directory (or the CWD, if no directory was specified)
Yes, I realize that checking permissions "up front" is not The UNIX Way™; rather, I should just try the operation and ask forgiveness later. In my particular script, however, this leads to a bad user experience and I can't change the responsible component.
bash files error-handling
Will the argument always be based on the current directory or could the user specify a full path?
– Jesse_b
yesterday
@Jesse_b - I suppose the user could specify an absolute path like /foo/bar/file.txt. Basically I pass the path to tee like tee $OUT_FILE where OUT_FILE is passed on the command line. That should "just work" with both absolute and relative paths, right?
– BeeOnRope
yesterday
@BeeOnRope, no you'd need tee -- "$OUT_FILE" at least. What if the file already exists or exists but is not a regular file (directory, symlink, fifo)?
– Stéphane Chazelas
yesterday
@StéphaneChazelas - well I am using tee "$OUT_FILE.tmp". If the file already exists, tee overwrites, which is the desired behavior in this case. If it's a directory, tee will fail (I think). symlink I'm not 100% sure?
– BeeOnRope
yesterday
asked yesterday by BeeOnRope (edited 6 hours ago)
7 Answers
The obvious test would be:
if touch /path/to/file; then
: it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want.
We do, however, have a way around this:
if mkdir /path/to/file; then
rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file and do whatever it pleases with it.
answered yesterday by DopeGhoti
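Wrapped up as a reusable helper, the probe might look like this (a sketch; the function name is mine, and the extra branch for an already-existing file, which mkdir alone would report as a failure, is my addition):

```shell
#!/bin/sh
# Probe whether a path could be created, without leaving anything behind:
# try to make a directory there, and remove it immediately on success.
can_create() {
    if mkdir -- "$1" 2>/dev/null; then
        rmdir -- "$1"       # clean up the probe directory
        return 0            # the path can be created
    elif [ -e "$1" ] && [ -w "$1" ] && [ ! -d "$1" ]; then
        return 0            # file already exists and is overwritable
    else
        return 1            # missing parent, no permission, or a directory
    fi
}

can_create /tmp/probe_$$.txt && echo "would succeed" || echo "would fail"
```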
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
From what I'm gathering, you want to check that when using tee -- "$OUT_FILE" (note the --, or it wouldn't work for file names that start with -), tee would succeed in opening the file for writing.
That is that:
- the length of the file path doesn't exceed the PATH_MAX limit
- the file exists (after symlink resolution) and is not of type directory and you have write permission to it.
- if the file doesn't exist, the dirname of the file exists (after symlink resolution) as a directory and you have write and search permission to it and the filename length doesn't exceed the NAME_MAX limit of the filesystem that directory resides in.
- or the file is a symlink that points to a file that doesn't exist and is not a symlink loop but meets the criteria just above
We'll ignore for now filesystems like vfat, ntfs or hfsplus that have limitations on what byte values file names may contain, as well as disk quotas, process limits, selinux, apparmor or other security mechanisms in the way, a full filesystem, no inodes left, device files that can't be opened that way for one reason or another, and files that are executables currently mapped in some process's address space, all of which could also affect the ability to open or create the file.
With zsh:
zmodload zsh/system
tee_would_likely_succeed() {
  local file=$1 ERRNO=0 LC_ALL=C
  if [ -d "$file" ]; then
    return 1 # directory
  elif [ -w "$file" ]; then
    return 0 # writable non-directory
  elif [ -e "$file" ]; then
    return 1 # exists, non-writable
  elif [ "$errnos[ERRNO]" != ENOENT ]; then
    return 1 # only ENOENT error can be recovered
  else
    local dir=$file:P:h base=$file:t
    [ -d "$dir" ] &&    # directory
      [ -w "$dir" ] &&  # writable
      [ -x "$dir" ] &&  # and searchable
      (($#base <= $(getconf -- NAME_MAX "$dir")))
    return
  fi
}
In bash or any Bourne-like shell, just replace the
zmodload zsh/system
tee_would_likely_succeed() {
  <zsh-code>
}
with:
tee_would_likely_succeed() {
  zsh -s -- "$@" << 'EOF'
zmodload zsh/system
<zsh-code>
EOF
}
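For a rough bash-only approximation of the same checks (a sketch of mine: it lacks zsh's errno reporting, so different failure causes are conflated, and it does not resolve symlinks; getconf is assumed available for the NAME_MAX check):

```shell
#!/usr/bin/env bash
# Rough bash approximation: no errno access, so lookup failures other
# than "doesn't exist" are simply reported as failure.
tee_would_likely_succeed() {
    local file=$1 dir base
    if [ -d "$file" ]; then
        return 1                       # a directory: tee can't open it
    elif [ -e "$file" ]; then
        [ -w "$file" ]                 # exists: writable?
    else
        dir=$(dirname -- "$file") base=$(basename -- "$file")
        [ -d "$dir" ] && [ -w "$dir" ] && [ -x "$dir" ] &&
            (( ${#base} <= $(getconf NAME_MAX "$dir") ))
    fi
}

tee_would_likely_succeed "/tmp/new_file_$$" && echo likely || echo unlikely
```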
Yes, you understood my intent correctly. In fact, the original question was unclear: I originally spoke about creating the file, but actually I am using it with tee, so the requirement is really that the file can be created if it doesn't exist, or can be truncated to zero and overwritten if it does (or however else tee handles that).
– BeeOnRope
yesterday
Question's tagged as bash, though, which doesn't have an $errnos equivalent AFAIK.
– bishop
5 hours ago
Looks like your last edit wiped the formatting
– D. Ben Knoble
2 hours ago
Thanks, @bishop, @D.BenKnoble, see edit.
– Stéphane Chazelas
1 hour ago
Your Bash example requires that zsh is installed. If so, just use zsh directly.
– Dennis Williamson
36 mins ago
One option you might want to consider is creating the file early on but only populating it later in your script. You can use the exec
command to open the file in a file descriptor (such as 3, 4, etc.) and then later use a redirection to a file descriptor (>&3
, etc.) to write contents to that file.
Something like:
#!/bin/bash

# Open the file for read/write, so it doesn't get
# truncated just yet (to preserve the contents in
# case the initial checks fail.)
exec 3<>dir/file.txt || {
    echo "Error creating dir/file.txt" >&2
    exit 1
}

# Long checks here...
check_ok || {
    echo "Failed checks" >&2
    # cleanup file before bailing out
    rm -f dir/file.txt
    exit 1
}

# We're ready to write, first truncate the file.
# Use "truncate(1)" from coreutils, and pass it
# /dev/fd/3 so the file doesn't need to be reopened.
truncate -s 0 /dev/fd/3

# Now populate the file, use a redirection to write
# to the previously opened file descriptor.
populate_contents >&3
You can also use a trap
to clean up the file on error, that's a common practice.
This way, you get a real check for permissions that you'll be able to create the file, while at the same time being able to perform it early enough that if that fails you haven't spent time waiting for the long checks.
UPDATED: In order to avoid clobbering the file in case the checks fail, use bash's fd<>file
redirection which does not truncate the file right away. (We don't care about reading from the file, this is just a workaround so we don't truncate it. Appending with >>
would probably just work too, but I tend to find this one a bit more elegant, keeping the O_APPEND flag out of the picture.)
When we're ready to replace the contents, we need to truncate the file first (otherwise if we're writing fewer bytes than there were in the file before, the trailing bytes would stay there.) We can use the truncate(1) command from coreutils for that purpose, and we can pass it the open file descriptor we have (using the /dev/fd/3
pseudo-file) so it doesn't need to reopen the file. (Again, technically something simpler like : >dir/file.txt
would probably work, but not having to reopen the file is a more elegant solution.)
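The trap-based cleanup mentioned above could be sketched like this (names and paths are illustrative):

```shell
#!/usr/bin/env bash
# Open the target early; a trap removes it if we exit before the
# results were written, so a failed run doesn't leave a stray file.
outfile=/tmp/out_$$.txt

exec 3<>"$outfile" || { echo "cannot open $outfile" >&2; exit 1; }

done_ok=
cleanup() { [ -n "$done_ok" ] || rm -f -- "$outfile"; }
trap cleanup EXIT

# ... long-running checks and work would go here ...

truncate -s 0 /dev/fd/3        # now replace the contents
printf 'results\n' >&3
done_ok=1                      # success: the trap keeps the file
```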
I had considered this, but the problem is that if another innocent error occurs somewhere between when I create the file and the point some time later where I would write to it, the user will probably be upset to find out that the specified file was overwritten.
– BeeOnRope
yesterday
Hmmm, interesting... I had this idea of using set -o noclobber, in which case it would fail if the file already existed... But then I don't think it's possible to differentiate failure due to permissions (abort script) from failure due to the file already existing (in which case you'd like to proceed and later overwrite the file.) If this was C or Python, etc., it's just a matter of opening the file with O_EXCL and telling error EEXIST (permissions are fine, but file already exists) from EACCES (permission denied)... But not sure that can be done in bash...
– Filipe Brandenburger
yesterday
@BeeOnRope another option that won't clobber the file is to try to append nothing to it: echo -n >> file or true >> file. Of course, ext4 has append-only files, but you could live with that false positive.
– mosvy
12 hours ago
@BeeOnRope If you need to preserve either (a) the existing file or (b) the (complete) new file, that's a different question than you have asked. A good answer to it is to move the existing file to a new name (e.g. my-file.txt.backup) before creating the new one. Another solution is to write the new file to a temporary file in the same folder, and then copy it over the old file after the rest of the script succeeds --- if that last operation failed, the user could manually fix the issue without losing his or her progress.
– jpaugh
5 hours ago
@BeeOnRope If you go for the "atomic replace" route (which is a good one!) then typically you'll want to write the temporary file in the same directory as the final file (they need to be in the same filesystem for rename(2) to succeed) and use a unique name (so if there are two instances of the script running, they won't clobber each other's temporary files.) Opening a temporary file for writing in the same directory is usually a good indication you'll be able to rename it later (well, except if the final target exists and is a directory), so to some extent it also addresses your question.
– Filipe Brandenburger
4 hours ago
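That atomic-replace route might be sketched as follows (paths are illustrative; mktemp generates the unique name in the target's directory, and mv performs the rename(2)):

```shell
#!/bin/sh
# Write into a unique temporary file in the target's directory, then
# rename it over the target: rename(2) is atomic within one filesystem.
target=/tmp/result_$$.txt
tmp=$(mktemp "$(dirname -- "$target")/.tmp.XXXXXX") || exit 1

printf 'new contents\n' > "$tmp"    # the long-running work writes here
mv -- "$tmp" "$target"              # only now does the target change
```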
I think DopeGhoti's solution is better but this should also work:
file=$1
if [[ "${file:0:1}" == '/' ]]; then
    dir=${file%/*}
elif [[ "$file" =~ .*/.* ]]; then
    dir="$PWD/${file%/*}"
else
    dir=$PWD
fi

if [[ -w "$dir" ]]; then
    echo "writable"
    #do stuff with writable file
else
    echo "not writable"
    #do stuff without writable file
fi
The first if construct checks if the argument is a full path (starts with /
) and sets the dir
variable to the directory path up to the last /
. Otherwise if the argument does not start with a /
but does contain a /
(specifying a sub directory) it will set dir
to the present working directory + the sub directory path. Otherwise it assumes the present working directory. It then checks if that directory is writable.
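For instance, exercising the three branches of that computation (a sketch; the wrapper function is mine):

```shell
#!/usr/bin/env bash
# The same directory computation as above, wrapped in a function so
# each branch can be exercised.
compute_dir() {
    local file=$1 dir
    if [[ "${file:0:1}" == '/' ]]; then
        dir=${file%/*}                  # absolute path: strip last component
    elif [[ "$file" =~ .*/.* ]]; then
        dir="$PWD/${file%/*}"           # relative path with a subdirectory
    else
        dir=$PWD                        # bare file name: use the CWD
    fi
    printf '%s\n' "$dir"
}

compute_dir /foo/bar/file.txt    # -> /foo/bar
compute_dir sub/file.txt         # -> $PWD/sub
compute_dir file.txt             # -> $PWD
```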
You mentioned user experience was driving your question. I'll answer from a UX angle, since you've got good answers on the technical side.
Rather than performing the check up-front, how about writing the results into a temporary file then at the very end, placing the results into the user's desired file? Like:
userfile=${1:?Where would you like the file written?}
tmpfile=$(mktemp)
# ... all the complicated stuff, writing into "$tmpfile"
# fill user's file, keeping existing permissions or creating anew
# while respecting umask
cat "$tmpfile" > "$userfile"
if [ 0 -eq $? ]; then
rm "$tmpfile"
else
echo "Couldn't write results into $userfile." >&2
echo "Results available in $tmpfile." >&2
exit 1
fi
The good with this approach: it produces the desired operation in the normal happy path scenario, side-steps the test-and-set atomicity issue, preserves permissions of the target file while creating if necessary, and is dead simple to implement.
Note: had we used mv
, we'd be keeping the permissions of the temporary file -- we don't want that, I think: we want to keep the permissions as set on the target file.
Now, the bad: it requires twice the space (cat .. >
construct), forces the user to do some manual work if the target file wasn't writable at the time it needed to be, and leaves the temporary file laying around (which might have security or maintenance issues).
In fact, this is more or less what I'm doing now. I write most of the results to a temporary file and then at the end do the final processing step and write the results to the final file. The problem is I want to bail out early (at the start of the script) if that final step is likely to fail. The script may run unattended for minutes or hours, so you really want to know up front that it is doomed to fail!
– BeeOnRope
9 hours ago
Sure, but there are so many ways this could fail: disk could fill, upstream directory could be removed, permissions could change, target file might be used for some other important stuff and the user forgot he assigned that same file to be destroyed by this operation. If we talk about this from a pure UX perspective, then perhaps the right thing to do is treat it like a job submission: at the end, when you know it worked correctly to completion, just tell the user where the resulting content resides and offer a suggested command for them to move it themselves.
– bishop
9 hours ago
In theory, yes there are infinite ways this could fail. In practice, the overwhelming majority of the time this fails is because the path provided is not valid. I can't reasonably prevent some rogue user from concurrently modifying the FS to break the job in the middle of the operation, but I certainly can check the #1 failure cause of an invalid or not writable path.
– BeeOnRope
8 hours ago
What about using the normal test command, as outlined below?
FILE=$1
DIR=$(dirname -- "$FILE") # $DIR now contains '.' for file names only, 'foo' for 'foo/bar'

if [ -d "$DIR" ] ; then
    echo "base directory $DIR for file exists"
    if [ -e "$FILE" ] ; then
        if [ -w "$FILE" ] ; then
            echo "file exists, is writeable"
        else
            echo "file exists, NOT writeable"
        fi
    elif [ -w "$DIR" ] ; then
        echo "directory is writeable"
    else
        echo "directory is NOT writeable"
    fi
else
    echo "can NOT create file in non-existent directory $DIR"
fi
TL;DR:
: >> "$userfile"
From the OP:
I want to make a reasonable check if the file can be created/overwritten, but not actually create it.
And from your comment to my answer from a UX perspective:
The overwhelming majority of the time this fails is because the path provided is not valid. I can't reasonably prevent some rogue user from concurrently modifying the FS to break the job in the middle of the operation, but I certainly can check the #1 failure cause of an invalid or not-writable path.
The only reliable test is to open(2)
the file, because only that resolves every question about the writeability: path, ownership, filesystem, network, security context, etc. Any other test will address some part of writeability, but not others. If you want a subset of tests, you'll ultimately have to choose what's important to you.
But here's another thought. From what I understand:
- the content creation process is long-running, and
- the target file should be left in a consistent state.
You're wanting to do this pre-check because of #1, and you don't want to overwrite an existing file because of #2. So why don't you just ask the shell to open the file for appending, but don't actually append anything?
$ tree -ps
.
├── [dr-x------ 4096] dir_r
├── [drwx------ 4096] dir_w
├── [-r-------- 0] file_r
└── [-rw------- 0] file_w
$ for p in file_r dir_r/foo file_w dir_w/foo; do : >> $p; done
-bash: file_r: Permission denied
-bash: dir_r/foo: Permission denied
$ tree -ps
.
├── [dr-x------ 4096] dir_r
├── [drwx------ 4096] dir_w
│ └── [-rw-rw-r-- 0] foo
├── [-r-------- 0] file_r
└── [-rw------- 0] file_w
Under the hood, this resolves the writeability question exactly as wanted:
open("dir_w/foo", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3
but without modifying the file's contents. Now, yes, this approach:
- adjusts the file's modification time: you could mitigate that by storing the current value (from stat) and then re-applying it (via touch).
- doesn't tell you if the file is append-only, which might be a problem when you go about updating it at the end of your content creation. You can detect this, to a degree, with lsattr and react accordingly.
- creates a file that didn't previously exist, if such is the case: mitigate this with a selective rm.
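The first mitigation might look like this sketch, using touch -r with a stamp file rather than parsing stat output (file names are illustrative):

```shell
#!/bin/sh
# Probe writability with a no-op append, then restore the file's
# mtime from a stamp file saved beforehand.
f=/tmp/probe_$$.txt
stamp=/tmp/stamp_$$

printf 'old\n' > "$f"     # pre-existing file for the demonstration

touch -r "$f" "$stamp"    # save the target's current mtime
: >> "$f" || { echo "not writable" >&2; exit 1; }   # the probe itself
touch -r "$stamp" "$f"    # re-apply the saved mtime
rm -f "$stamp"
```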
While I contend (in my other answer) that the most user-friendly approach is to create a temporary file the user has to move, I think this is the least user-hostile approach to fully vet their input.
add a comment |
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
7
down vote
The obvious test would be:
if touch /path/to/file; then
: it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want.
We do, however, have a way around this:
if mkdir /path/to/file; then
rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file
and do whatever it pleases with it.
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
add a comment |
up vote
7
down vote
The obvious test would be:
if touch /path/to/file; then
: it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want.
We do, however, have a way around this:
if mkdir /path/to/file; then
rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file
and do whatever it pleases with it.
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
add a comment |
up vote
7
down vote
up vote
7
down vote
The obvious test would be:
if touch /path/to/file; then
: it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want.
We do, however, have a way around this:
if mkdir /path/to/file; then
rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file
and do whatever it pleases with it.
The obvious test would be:
if touch /path/to/file; then
: it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want.
We do, however, have a way around this:
if mkdir /path/to/file; then
rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file
and do whatever it pleases with it.
answered yesterday
DopeGhoti
42.2k55180
42.2k55180
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
add a comment |
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
That very neatly handles so many issues! A code comment explaining & justifying that unusual solution might well exceed the length of your answer. :-)
– jpaugh
4 hours ago
add a comment |
up vote
7
down vote
From what I'm gathering, you want to check that when using
tee -- "$OUT_FILE"
(note the --
or it wouldn't work for file names that start with -), tee
would succeed to open the file for writing.
That is that:
- the length of the file path doesn't exceed the PATH_MAX limit
- the file exists (after symlink resolution) and is not of type directory and you have write permission to it.
- if the file doesn't exist, the dirname of the file exists (after symlink resolution) as a directory and you have write and search permission to it and the filename length doesn't exceed the NAME_MAX limit of the filesystem that directory resides in.
- or the file is a symlink that points to a file that doesn't exist and is not a symlink loop but meets the criteria just above
We'll ignore for now filesystems like vfat, ntfs or hfsplus that have limitations on what byte values file names may contain, disk quota, process limit, selinux, apparmor or other security mechanism in the way, full filesystem, no inode left, device files that can't be opened that way for a reason or another, files that are executables currently mapped in some process address space all of which could also affect the ability to open or create the file.
With zsh
:
zmodload zsh/system
tee_would_likely_succeed()
local file=$1 ERRNO=0 LC_ALL=C
if [ -d "$file" ]; then
return 1 # directory
elif [ -w "$file" ]; then
return 0 # writable non-directory
elif [ -e "$file" ]; then
return 1 # exists, non-writable
elif [ "$errnos[ERRNO]" != ENOENT ]; then
return 1 # only ENOENT error can be recovered
else
local dir=$file:P:h base=$file:t
[ -d "$dir" ] && # directory
[ -w "$dir" ] && # writable
[ -x "$dir" ] && # and searchable
(($#base <= $(getconf -- NAME_MAX "$dir")))
return
fi
In bash
or any Bourne-like shell, just replace the
zmodload zsh/system
tee_would_likely_succeed()
<zsh-code>
with:
tee_would_likely_succeed()
zsh -s -- "$@" << 'EOF'
zmodload zsh/system
<zsh-code>
EOF
Yes, you understood my intent correctly. In fact, the original question was unclear: I originally spoke about creating the file, but actually I am using it withtee
so the requirement is really that the file can be created if it doesn't exist, or can be truncated to zero and overwritten if it does (or however elsetee
handles that).
– BeeOnRope
yesterday
Question's tagged asbash
, though, which doesn't have$errnos
equivalent AFAIK.
– bishop
5 hours ago
Looks like your last edit wiped the formatting
– D. Ben Knoble
2 hours ago
Thanks, @bishop, @D.BenKnoble, see edit.
– Stéphane Chazelas
1 hour ago
Your Bash example requires that zsh is installed. If so, just use zsh directly.
– Dennis Williamson
36 mins ago
add a comment |
up vote
7
down vote
From what I'm gathering, you want to check that when using
tee -- "$OUT_FILE"
(note the --
or it wouldn't work for file names that start with -), tee
would succeed in opening the file for writing.
That is:
- the length of the file path doesn't exceed the PATH_MAX limit
- the file exists (after symlink resolution), is not of type directory, and you have write permission to it
- if the file doesn't exist: the dirname of the file exists (after symlink resolution) as a directory, you have write and search permission to it, and the file name length doesn't exceed the NAME_MAX limit of the filesystem that directory resides in
- or the file is a symlink that points to a file that doesn't exist and is not a symlink loop, but meets the criteria just above
We'll ignore for now filesystems like vfat, ntfs or hfsplus that have limitations on which byte values file names may contain; disk quotas; process limits; selinux, apparmor or other security mechanisms in the way; a full filesystem; no inodes left; device files that can't be opened that way for one reason or another; and files that are executables currently mapped in some process address space, all of which could also affect the ability to open or create the file.
With zsh
:
zmodload zsh/system
tee_would_likely_succeed() {
  local file=$1 ERRNO=0 LC_ALL=C
  if [ -d "$file" ]; then
    return 1 # directory
  elif [ -w "$file" ]; then
    return 0 # writable non-directory
  elif [ -e "$file" ]; then
    return 1 # exists, non-writable
  elif [ "$errnos[ERRNO]" != ENOENT ]; then
    return 1 # only ENOENT error can be recovered
  else
    local dir=$file:P:h base=$file:t
    [ -d "$dir" ] &&   # directory
      [ -w "$dir" ] && # writable
      [ -x "$dir" ] && # and searchable
      (($#base <= $(getconf -- NAME_MAX "$dir")))
    return
  fi
}
In bash
or any Bourne-like shell, just replace the
zmodload zsh/system
tee_would_likely_succeed() {
  <zsh-code>
}
with:
tee_would_likely_succeed() {
  zsh -s -- "$@" << 'EOF'
  zmodload zsh/system
  <zsh-code>
EOF
}
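If pulling in zsh at all is undesirable, a weaker, pure-bash best-effort approximation of the same checks might look like the sketch below. It cannot inspect errno, so unlike the zsh version it may misclassify unusual failures, and it does not follow dangling symlinks; it also assumes getconf NAME_MAX works on the directory, falling back to 255 otherwise.

```shell
# Hypothetical pure-bash approximation of tee_would_likely_succeed.
# Weaker than the zsh version above: no errno, so ENOENT is only inferred.
tee_would_likely_succeed() {
  local file=$1 dir base
  if [ -d "$file" ]; then
    return 1                       # existing directory: tee would fail
  elif [ -e "$file" ]; then
    [ -w "$file" ]                 # exists: succeed iff writable
  else
    dir=$(dirname -- "$file") base=$(basename -- "$file")
    [ -d "$dir" ] && [ -w "$dir" ] && [ -x "$dir" ] &&
      (( ${#base} <= $(getconf -- NAME_MAX "$dir" 2>/dev/null || echo 255) ))
  fi
}
```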
Yes, you understood my intent correctly. In fact, the original question was unclear: I originally spoke about creating the file, but actually I am using it with tee
, so the requirement is really that the file can be created if it doesn't exist, or can be truncated to zero and overwritten if it does (or however else tee
handles that).
– BeeOnRope
yesterday
Question's tagged as bash
, though, which doesn't have a $errnos
equivalent AFAIK.
– bishop
5 hours ago
Looks like your last edit wiped the formatting
– D. Ben Knoble
2 hours ago
Thanks, @bishop, @D.BenKnoble, see edit.
– Stéphane Chazelas
1 hour ago
Your Bash example requires that zsh is installed. If so, just use zsh directly.
– Dennis Williamson
36 mins ago
edited 1 hour ago
answered yesterday
Stéphane Chazelas
up vote
6
down vote
One option you might want to consider is creating the file early on but only populating it later in your script. You can use the exec
command to open the file on a file descriptor (such as 3, 4, etc.) and then later use a redirection to that file descriptor (>&3
, etc.) to write contents to the file.
Something like:
#!/bin/bash

# Open the file for read/write, so it doesn't get
# truncated just yet (to preserve the contents in
# case the initial checks fail.)
exec 3<>dir/file.txt || {
    echo "Error creating dir/file.txt" >&2
    exit 1
}

# Long checks here...
check_ok || {
    echo "Failed checks" >&2
    # clean up the file before bailing out
    rm -f dir/file.txt
    exit 1
}

# We're ready to write, first truncate the file.
# Use "truncate(1)" from coreutils, and pass it
# /dev/fd/3 so the file doesn't need to be reopened.
truncate -s 0 /dev/fd/3

# Now populate the file, use a redirection to write
# to the previously opened file descriptor.
populate_contents >&3
You can also use a trap
to clean up the file on error, that's a common practice.
This way, you get a real check for permissions that you'll be able to create the file, while at the same time being able to perform it early enough that if that fails you haven't spent time waiting for the long checks.
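The trap-based cleanup mentioned above could be sketched as follows; the mktemp path is a stand-in for the real target file so the sketch is self-contained:

```shell
#!/bin/bash
# Sketch of trap-based cleanup: if any command fails after the file is
# opened, remove the file; on success, disarm the trap and keep it.
outfile=$(mktemp) || exit 1       # stand-in for the real dir/file.txt

# Open early so permission problems surface right away.
exec 3<>"$outfile" || { echo "Cannot open $outfile" >&2; exit 1; }

cleanup() { rm -f -- "$outfile"; }
trap cleanup ERR                  # fires when a command exits non-zero

# ... long checks and the real work would go here ...

trap - ERR                        # success path: keep the file
```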
UPDATED: In order to avoid clobbering the file in case the checks fail, use bash's fd<>file
redirection, which does not truncate the file right away. (We don't care about reading from the file; this is just a workaround so we don't truncate it. Appending with >>
would probably work just as well, but I tend to find this one a bit more elegant, keeping the O_APPEND flag out of the picture.)
When we're ready to replace the contents, we need to truncate the file first (otherwise if we're writing fewer bytes than there were in the file before, the trailing bytes would stay there.) We can use the truncate(1) command from coreutils for that purpose, and we can pass it the open file descriptor we have (using the /dev/fd/3
pseudo-file) so it doesn't need to reopen the file. (Again, technically something simpler like : >dir/file.txt
would probably work, but not having to reopen the file is a more elegant solution.)
I had considered this, but the problem is that if another innocent error occurs somewhere between when I create the file and the point some time later where I would write to it, the user will probably be upset to find out that the specified file was overwritten.
– BeeOnRope
yesterday
Hmmm, interesting... I had this idea of using set -o noclobber
, in which case it would fail if the file already existed... But then I don't think it's possible to differentiate failure due to permissions (abort script) from failure due to file already existing (in which case you'd like to proceed and later overwrite the file.) If this was C or Python, etc., it's just a matter of opening the file with O_EXCL and distinguishing error EEXIST (permissions are fine, but file already exists) from EACCES (permission denied)... But not sure that can be done in bash...
– Filipe Brandenburger
yesterday
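The noclobber idea from the comment above can be demonstrated with a small sketch; as the comment notes, bash only sees the redirection fail and cannot tell EEXIST from EACCES:

```shell
#!/bin/bash
# Demo of set -o noclobber: ">" fails if the file already exists,
# but the failure reason (exists vs. permission) is not distinguishable.
set -o noclobber
try_create() {
  if { : >"$1"; } 2>/dev/null; then
    echo "created"
  else
    echo "exists or not creatable"   # could be either reason
  fi
}
```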
1
@BeeOnRope another option that won't clobber the file is to try to append nothing to it: echo -n >>file
or true >> file
. Of course, ext4 has append-only files, but you could live with that false positive.
– mosvy
12 hours ago
1
@BeeOnRope If you need to preserve either (a) the existing file or (b) the (complete) new file, that's a different question than you have asked. A good answer to it is to move the existing file to a new name (e.g. my-file.txt.backup
) before creating the new one. Another solution is to write the new file to a temporary file in the same folder, and then copy it over the old file after the rest of the script succeeds --- if that last operation failed, the user could manually fix the issue without losing his or her progress.
– jpaugh
5 hours ago
1
@BeeOnRope If you go for the "atomic replace" route (which is a good one!) then typically you'll want to write the temporary file in the same directory as the final file (they need to be in the same filesystem for rename(2) to succeed) and use a unique name (so if there are two instances of the script running, they won't clobber each other's temporary files.) Opening a temporary file for writing in the same directory is usually a good indication you'll be able to rename it later (well, except if the final target exists and is a directory), so to some extent it also addresses your question.
– Filipe Brandenburger
4 hours ago
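The "atomic replace" route from the last comment can be sketched as follows: write a unique temp file in the target's own directory, then mv it over the target, which uses rename(2) within a single filesystem. The target path here is a stand-in, and mktemp with a template argument is assumed (GNU and BSD both support it):

```shell
#!/bin/bash
# Sketch of atomic replace: write a unique temp file next to the target,
# then rename it over the target only once the contents are complete.
target=${TMPDIR:-/tmp}/atomic_demo_$$.txt    # stand-in target path
dir=$(dirname -- "$target")

tmp=$(mktemp "$dir/.tmp.XXXXXX") || exit 1

# Produce the new contents into the temp file (placeholder payload).
printf 'new contents\n' >"$tmp" || { rm -f -- "$tmp"; exit 1; }

# rename(2) is atomic within one filesystem: readers see the old file
# or the new one, never a partially written file.
mv -f -- "$tmp" "$target"
```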
edited 22 hours ago
answered yesterday
Filipe Brandenburger
up vote
3
down vote
I think DopeGhoti's solution is better but this should also work:
file=$1

if [[ "${file:0:1}" == '/' ]]; then
    dir=${file%/*}
elif [[ "$file" =~ .*/.* ]]; then
    dir="$PWD/${file%/*}"
else
    dir=$PWD
fi

if [[ -w "$dir" ]]; then
    echo "writable"
    #do stuff with writable file
else
    echo "not writable"
    #do stuff without writable file
fi
The first if construct checks if the argument is a full path (starts with /
) and sets the dir
variable to the directory path up to the last /
. Otherwise if the argument does not start with a /
but does contain a /
(specifying a sub directory) it will set dir
to the present working directory + the sub directory path. Otherwise it assumes the present working directory. It then checks if that directory is writable.
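The parameter expansions doing the work here can be checked quickly at a prompt (a small demonstration, not part of the answer's script):

```shell
#!/bin/sh
# Demonstration of the expansions used above (POSIX parameter expansion).
file="dir/sub/file.txt"
echo "${file%/*}"            # strips the last /component -> dir/sub

abs="/tmp/file.txt"
echo "${abs%/*}"             # -> /tmp

case $file in
  /*)  echo "absolute" ;;                      # leading "/"
  */*) echo "relative, has directory part" ;;  # contains a "/"
  *)   echo "bare file name" ;;
esac
```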
answered yesterday
Jesse_b
10.6k22661
up vote
3
down vote
You mentioned user experience was driving your question. I'll answer from a UX angle, since you've got good answers on the technical side.
Rather than performing the check up-front, how about writing the results into a temporary file then at the very end, placing the results into the user's desired file? Like:
userfile=${1:?Where would you like the file written?}
tmpfile=$(mktemp)
# ... all the complicated stuff, writing into "$tmpfile"
# fill user's file, keeping existing permissions or creating anew
# while respecting umask
cat "$tmpfile" > "$userfile"
if [ 0 -eq $? ]; then
rm "$tmpfile"
else
echo "Couldn't write results into $userfile." >&2
echo "Results available in $tmpfile." >&2
exit 1
fi
The good with this approach: it produces the desired operation in the normal happy path scenario, side-steps the test-and-set atomicity issue, preserves permissions of the target file while creating if necessary, and is dead simple to implement.
Note: had we used mv, we'd be keeping the permissions of the temporary file -- we don't want that, I think: we want to keep the permissions as set on the target file.
Now, the bad: it requires twice the space (the cat .. > construct), forces the user to do some manual work if the target file wasn't writable at the time it needed to be, and leaves the temporary file lying around (which might have security or maintenance issues).
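The "temporary file lying around" downside can be narrowed with a trap: remove the temp file on interrupt, but deliberately keep it (and say where it is) when the final copy fails. A sketch under the same assumptions as above, with an illustrative ./out.txt standing in for the user's argument:

```shell
#!/bin/sh
# Sketch: clean up the temp file on Ctrl-C, but keep it when the final
# copy into the target fails so the user can recover the results.
userfile="./out.txt"                  # illustrative; the answer reads $1
tmpfile=$(mktemp)
trap 'rm -f -- "$tmpfile"' INT TERM   # don't leave temp files on interrupt

echo "results" > "$tmpfile"           # ...all the complicated stuff...

if cat "$tmpfile" > "$userfile"; then # test the command directly, no $?
    rm -f -- "$tmpfile"
else
    echo "Couldn't write results into $userfile." >&2
    echo "Results available in $tmpfile." >&2
    exit 1
fi
```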
In fact, this is more or less what I'm doing now. I write most of the results to a temporary file and then at the end do the final processing step and write the results to the final file. The problem is I want to bail out early (at the start of the script) if that final step is likely to fail. The script may run unattended for minutes or hours, so you really want to know up front that it is doomed to fail!
– BeeOnRope
9 hours ago
Sure, but there are so many ways this could fail: disk could fill, upstream directory could be removed, permissions could change, target file might be used for some other important stuff and the user forgot he assigned that same file to be destroyed by this operation. If we talk about this from a pure UX perspective, then perhaps the right thing to do is treat it like a job submission: at the end, when you know it worked correctly to completion, just tell the user where the resulting content resides and offer a suggested command for them to move it themselves.
– bishop
9 hours ago
In theory, yes there are infinite ways this could fail. In practice, the overwhelming majority of the time this fails is because the path provided is not valid. I can't reasonably prevent some rogue user from concurrently modifying the FS to break the job in the middle of the operation, but I certainly can check the #1 failure cause of an invalid or not writable path.
– BeeOnRope
8 hours ago
edited 9 hours ago
answered 9 hours ago
bishop
1,8732819
up vote
1
down vote
What about using the normal test command, as outlined below?
FILE=$1
DIR=$(dirname -- "$FILE") # $DIR now contains '.' for bare file names, 'foo' for 'foo/bar'
if [ -d "$DIR" ] ; then
    echo "base directory $DIR for file exists"
    if [ -e "$FILE" ] ; then
        if [ -w "$FILE" ] ; then
            echo "file exists, is writeable"
        else
            echo "file exists, NOT writeable"
        fi
    elif [ -w "$DIR" ] ; then
        echo "directory is writeable"
    else
        echo "directory is NOT writeable"
    fi
else
    echo "can NOT create file in non-existent directory $DIR"
fi
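The same tests can be wrapped in a function that returns a status, so the rest of the script can bail out early (a sketch; can_write_path is an illustrative name, and note that a target which is itself an existing writable directory would still pass the -w test):

```shell
#!/bin/sh
# Sketch: return 0 when the path looks creatable/overwritable, else non-zero.
can_write_path() {
    file=$1
    dir=$(dirname -- "$file")
    [ -d "$dir" ] || return 1        # directory component must exist
    if [ -e "$file" ]; then
        [ -w "$file" ]               # existing file must be writable
    else
        [ -w "$dir" ]                # otherwise the directory must be
    fi
}

can_write_path "/nonexistent-dir-xyz/file" || echo "no"   # missing dir
can_write_path "$(mktemp -d)/file" && echo "yes"          # writable dir
```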
answered yesterday
Jaleks
1,128422
up vote
0
down vote
TL;DR:
: >> "$userfile"
From the OP:
I want to make a reasonable check if the file can be created/overwritten, but not actually create it.
And from your comment to my answer from a UX perspective:
The overwhelming majority of the time this fails is because the path provided is not valid. I can't reasonably prevent some rogue user from concurrently modifying the FS to break the job in the middle of the operation, but I certainly can check the #1 failure cause of an invalid or not writable path.
The only reliable test is to open(2) the file, because only that resolves every question about writeability: path, ownership, filesystem, network, security context, etc. Any other test will address some part of writeability, but not others. If you want a subset of tests, you'll ultimately have to choose what's important to you.
But here's another thought. From what I understand:
- the content creation process is long-running, and
- the target file should be left in a consistent state.
You're wanting to do this pre-check because of #1, and you don't want to overwrite an existing file because of #2. So why don't you just ask the shell to open the file for appending, but don't actually append anything?
$ tree -ps
.
├── [dr-x------ 4096] dir_r
├── [drwx------ 4096] dir_w
├── [-r-------- 0] file_r
└── [-rw------- 0] file_w
$ for p in file_r dir_r/foo file_w dir_w/foo; do : >> $p; done
-bash: file_r: Permission denied
-bash: dir_r/foo: Permission denied
$ tree -ps
.
├── [dr-x------ 4096] dir_r
├── [drwx------ 4096] dir_w
│ └── [-rw-rw-r-- 0] foo
├── [-r-------- 0] file_r
└── [-rw------- 0] file_w
Under the hood, this resolves the writeability question exactly as wanted:
open("dir_w/foo", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3
but without modifying the file's contents. Now, yes, this approach:
- adjusts the file's modification time: you could mitigate that by storing the current value (from stat) then re-applying it (via touch).
- doesn't tell you if the file is append-only, which might be a problem when you go about updating it at the end of your content creation. You can detect this, to a degree, with lsattr and react accordingly.
- creates a file that didn't previously exist, if such is the case: mitigate this with a selective rm.
While I contend (in my other answer) that the most user-friendly approach is to create a temporary file the user has to move, I think this is the least user-hostile approach to fully vet their input.
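Putting the probe and the third mitigation together, one possible shape (probe_writable is an illustrative name; the subshell matters because a redirection error on the : builtin would otherwise abort a POSIX shell):

```shell
#!/bin/sh
# Sketch: probe with an appending no-op, and undo the side effect of
# having created a file that did not previously exist.
probe_writable() {
    f=$1
    if [ -e "$f" ]; then existed=yes; else existed=no; fi
    # Subshell so a failed redirection can't kill the calling shell.
    if ( : >> "$f" ) 2>/dev/null; then
        [ "$existed" = no ] && rm -f -- "$f"   # remove the probe's file
        return 0
    fi
    return 1
}

probe_writable "$(mktemp -d)/new.txt" && echo "writable"
probe_writable "/nonexistent-dir-xyz/file" || echo "not writable"
```

Restoring the modification time of a pre-existing file (the first mitigation above) could be layered on top with stat and touch.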
answered 5 hours ago
bishop
1,8732819
Will the argument always be based on the current directory or could the user specify a full path?
– Jesse_b
yesterday
@Jesse_b - I suppose the user could specify an absolute path like /foo/bar/file.txt. Basically I pass the path to tee like tee $OUT_FILE where OUT_FILE is passed on the command line. That should "just work" with both absolute and relative paths, right?
– BeeOnRope
yesterday
@BeeOnRope, no you'd need tee -- "$OUT_FILE" at least. What if the file already exists or exists but is not a regular file (directory, symlink, fifo)?
– Stéphane Chazelas
yesterday
@StéphaneChazelas - well I am using tee "$OUT_FILE.tmp". If the file already exists, tee overwrites, which is the desired behavior in this case. If it's a directory, tee will fail (I think). Symlink I'm not 100% sure?
– BeeOnRope
yesterday