Is it safe to redirect stdout and stderr to the same file without file descriptor copies?



























I start off in an empty directory.



$ touch aFile
$ ls
aFile


Then I run ls on two arguments, one of which isn't in this directory. I redirect both output streams to a file named output. I use >> so that the two streams don't overwrite each other's output.



$ ls aFile not_exist >>output 2>>output
$ cat output
ls: cannot access 'not_exist': No such file or directory
aFile


This seems to work. Are there any dangers to this approach?
































  • That was a fast downvote. It took like five seconds. Can you tell me how it is that you can assess the worthiness of my question so quickly? And better yet, what is wrong with it so that I can improve it?
    – exit_status, May 19 at 10:35













  • Why don't you use the more standard ls aFile not_exist &>>output here? (Note, I am assuming you are using bash.)
    – FedonKadifeli, May 19 at 10:47






  • Because that doesn't help me understand what I'm asking about. I know how to redirect these streams to the same file, portably even. What I want to know is whether there's anything wrong with what I suggested in the question. @FedonKadifeli
    – exit_status, May 19 at 10:52








  • @FedonKadifeli &>> is NOT standard. It's a DEPRECATED, ambiguous syntax which works differently in different shells. I wonder where you guys get your stuff from.
    – Uncle Billy, May 19 at 11:32






  • Bash is not a standard. The POSIX standard mandates that ls &>>foo ... should be parsed as two commands, ls & and >>foo ..., and this is the way other shells like the /bin/sh from Ubuntu parse it. For it being deprecated, you can look here -- though I don't pretend that's any kind of authority. You may ask the bash maintainers if they consider using that a good idea, though.
    – Uncle Billy, May 19 at 11:50
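To illustrate the parsing difference described in the last comment, here is a minimal sketch. It assumes bash and a POSIX-mode /bin/sh such as dash, and foo is just a scratch file name:

$ rm -f foo
$ bash -c 'echo hi &>>foo; cat foo'        # bash: both streams are appended to foo
hi
$ rm -f foo
$ dash -c 'echo hi &>>foo; wait; cat foo'  # POSIX sh: "echo hi &" runs in the background, ">>foo" just creates foo
hi
$ wc -c foo                                # the "hi" above went to the terminal; foo itself stays empty
0 foo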




















Tags: io-redirection stdout stderr






– exit_status, asked May 19 at 10:34 (edited May 19 at 15:34)











3 Answers
































No, it's not just as safe as the standard >>bar 2>&1.



When you're writing



foo >>bar 2>>bar


you're opening the bar file twice with O_APPEND, creating two completely independent file objects[1], each with its own state (pointer, open modes, etc).



This is very much unlike 2>&1 which is just calling the dup(2) system call, and makes the stderr and stdout interchangeable aliases for the same file object.
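One way to see that difference is to trace the system calls the shell makes for each form. This is only a rough sketch: it assumes Linux with strace installed, bar is just a scratch file, and depending on the shell the duplication may show up as dup2() or fcntl(F_DUPFD) rather than literally dup():

$ strace -f -e trace=openat,dup2,fcntl sh -c ': >>bar 2>>bar'
# expect two separate openat("bar", ... O_APPEND) calls, one per redirection
$ strace -f -e trace=openat,dup2,fcntl sh -c ': >>bar 2>&1'
# expect a single openat("bar", ...) followed by a dup2()/fcntl() that makes fd 2 a copy of fd 1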



Now, there's a problem with that:




O_APPEND may lead to corrupted files on NFS filesystems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition.




You can usually count on the probability of a file like bar in foo >>bar 2>&1 being written to at the same time from two separate places being quite low. But with your >>bar 2>>bar you have just increased it by a dozen orders of magnitude, for no reason.



[1] "Open File Descriptions" in POSIX lingo.






– mosvy, answered May 19 at 16:43 (edited May 19 at 16:48)























  • Formally, for append-mode files, it is safe. The cited issue is a bug in NFS that makes it unsuitable (non-POSIX-conforming) as a filesystem. For the non-append-mode case, though, it's never safe.
    – R.., May 20 at 4:14








  • That's immaterial. The OP's double-append is not safe to use (in addition to being completely pointless). And O_APPEND is kind of a botch anyway -- pretty onerous to implement correctly.
    – mosvy, May 20 at 11:17











  • I believe the NFS race condition is only between different clients. The client OS should coordinate all the writes between its processes.
    – Barmar, May 20 at 17:36













  • @Barmar that would be true if the client OS only cared about its own view of an NFS file. But when writing to an NFS file opened with O_APPEND, the client will first retrieve the "real" size of the file from the server ("revalidate" the inode) and then do the seek+write+cached inode update, and only the last part is done under locks, which means that the first part could still retrieve a stale size from the server and override the correct one from the local/cached inode. Same problem with lseek(SEEK_END).
    – mosvy, May 20 at 23:46











  • I still don't see how that could cause race conditions between two streams on the same client. Both streams should refer to the same local cached inode.
    – Barmar, May 20 at 23:56

































What happens when you do



some_command >>file 2>>file


is that file will be opened for appending twice. This is safe to do on a POSIX filesystem. Any write that happens to the file when it's opened for appending will occur at the end of the file, regardless of whether the data comes over the standard output stream or the standard error stream.
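For instance, starting from a file that does not exist yet, both lines end up intact on a local filesystem (a small illustration of the same double-append the question uses):

$ { echo out; echo err >&2; } >>file 2>>file
$ cat file
out
err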



This relies on support for atomic append write operations in the underlying filesystem. Some filesystems, such as NFS, do not support atomic append. See e.g. the question "Is file append atomic in UNIX?" on StackOverflow.



Using



some_command >>file 2>&1


would work even on NFS though.



However, using



some_command >file 2>file


is not safe, as the shell will truncate the output file (twice) and any writing that happens on either stream will overwrite the data already written by the other stream.



Example:



$ { echo hello; echo abc >&2; } >file 2>file
$ cat file
abc
o


The hello string is written first (with a terminating newline), and then the string abc followed by a newline is written from standard error, overwriting the hell of hello. The result is the string abc with a newline, followed by what's left of the first echo output, an o and a newline.



Swapping the two echo commands around would produce only hello in the output file, as that string is written last and is longer than the abc string. The order in which the redirections occur does not matter.
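That swapped case is easy to verify, again starting from a fresh file and using the same { ...; } grouping as above:

$ { echo abc >&2; echo hello; } >file 2>file
$ cat file
hello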



It would be better and safer to use the more idiomatic



some_command >file 2>&1





– Kusalananda, answered May 19 at 11:17 (edited May 20 at 7:17)























  • While that's true of modern shells, that was not the case in the Bourne or Thomson shell (where >> comes from), where >> would open for writing and seek to the end (I suppose because O_APPEND wasn't invented yet back then). Even on Solaris 10, /bin/sh -c '(echo a; echo b >&2) >> file 2>> file; cat file' outputs b.
    – Stéphane Chazelas, May 19 at 18:47











  • @StéphaneChazelas Is that an issue with Solaris 10's implementation of sh, or with its filesystem?
    – Kusalananda, May 19 at 18:49








  • That's what >> was originally doing: it was not opening with O_APPEND, it was opening without and seeking to the end. It's not that much of an issue; it's what it was doing and was documented to do.
    – Stéphane Chazelas, May 19 at 19:09

































It depends on what you want to achieve. It's up to you to decide whether it is OK to have errors in the same file as the output. This is just saving text in a file using the shell's redirection facilities, which let you redirect as you wish. There is no absolute yes or no. As with everything in Linux, it can be done in several ways; this is my way: ls notExistingFile existingFile >> output 2>&1
To answer the question: in terms of the redirecting itself, yes, it's perfectly safe.






– Angel, answered May 19 at 11:06 (edited May 19 at 11:16)




























  • There's more to it than what you're saying here. The same exercise with > instead of >> will overwrite some characters. So it's not just that the shell allows me to redirect, because when I redirect with >, the result is different. So there are nuances with >; are there any with >>?
    – exit_status, May 19 at 11:12













  • Yes, it will be different. As I said, it depends on your goal: > - overwrite, >> - append.
    – Angel, May 19 at 11:20













