Bash - Looping through Array in Nested [FOR, WHILE, IF] statements
I am trying to process a large set of files, appending specific lines to the file "test_result.txt". I achieved it, not very elegantly, with the following code.
for i in *merged; do
    while read -r lo; do
        if [[ $lo == *"ID"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Instance"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"NOT"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"AI"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Sitting"* ]]; then
            echo $lo >> test_result.txt
        fi
    done < $i
done
However, I am trying to shorten it using an array, which resulted in quite an unsuccessful attempt.
KEYWORDS=("ID" "Instance" "NOT" "AI" "Sitting")
KEY_COUNT=0
for i in *merged; do
    while read -r lo; do
        if [[$lo == ${KEYWORDS[@]} ]]; then
            echo $lo >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
        fi
    done < $i
done
How large is the file set? This sounds like an XY problem that could be better accomplished by a straightforward grep command.
– steeldriver, 17 hours ago
Small side note: Instead of KEY_COUNT="`expr $KEY_COUNT + 1`" you could also write ((KEY_COUNT++)).
– Freddy, 16 hours ago
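For reference, the two increments behave the same; the short snippet below is not part of the original comments and is only meant to contrast the two forms:

KEY_COUNT=0
KEY_COUNT="`expr $KEY_COUNT + 1`"   # runs the external expr program on every iteration
((KEY_COUNT++))                     # bash built-in arithmetic, no extra process is started
echo "$KEY_COUNT"                   # prints 2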
2 Answers
It looks like you want to get all the lines that contain at least one of a set of words, from a set of files.
Assuming that you don't have many thousands of files, you could do that with a single grep command:
grep -wE '(ID|Instance|NOT|AI|Sitting)' ./*merged >outputfile
This would extract the lines matching any of the words listed in the pattern from the files whose names match *merged.
The -w with grep ensures that the given strings are not matched as substrings (e.g. NOT will not be matched in NOTICE). The -E option enables the alternation with | in the pattern.
Add the -h option to the command if you don't want the names of the files containing matching lines in the output.
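As a quick illustration (the sample file name and its contents below are made up for the example, not taken from the original post), -w is what keeps NOTICE from matching the NOT keyword:

printf '%s\n' 'ID 123' 'NOTICE: skipped' 'AI ready' 'nothing here' > sample.merged
grep -wE '(ID|Instance|NOT|AI|Sitting)' sample.merged

This prints only the "ID 123" and "AI ready" lines; "NOTICE: skipped" is left out because NOT occurs there only inside a longer word.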
If you do have many thousands of files, the above command may fail because the expanded command line becomes too long. In that case, you may want to do something like
for file in ./*merged; do
    grep -wE '(ID|Instance|NOT|AI|Sitting)' "$file"
done >outputfile
which would run the grep command once on each file, or,
find . -maxdepth 1 -type f -name '*merged' \
    -exec grep -wE '(ID|Instance|NOT|AI|Sitting)' {} + >outputfile
which would do as few invocations of grep as possible, with as many files as possible at once.
Related:
- Why is using a shell loop to process text considered bad practice?
It is indeed a file set of a few thousand. Originally, I built other processes into the loop, but running grep separately (before the extra tweaks) is a cleaner solution. Just needed to add the "-h" option to suppress the default filename prefixes. Thanks.
– AF.BJ, 10 hours ago
@AF.BJ since this answer solved your problem, consider accepting it: What should I do when someone answers my question?
– muru, 2 hours ago
Adding an array doesn't particularly help: you still would need to loop over the elements of the array (see How do I test if an item is in a bash array?):
while read -r lo; do
    for keyword in "${keywords[@]}"; do
        if [[ $lo == *$keyword* ]]; then
            echo "$lo" >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
        fi
    done
done < "$i"
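A possible variation (a sketch, not from the original answer; it assumes bash with the extglob option available): join the array into a single pattern so each line is tested once, instead of once per keyword:

shopt -s extglob                                      # enable the @( ) extended pattern
keywords=("ID" "Instance" "NOT" "AI" "Sitting")
pattern="$(IFS='|'; printf '%s' "${keywords[*]}")"    # -> ID|Instance|NOT|AI|Sitting

while read -r lo; do
    if [[ $lo == *@($pattern)* ]]; then
        echo "$lo" >> ~/Desktop/test_result.txt && ((KEY_COUNT++))
    fi
done < "$i"

With extglob, *@(...)* matches any line containing at least one of the listed words, so the inner for loop is no longer needed.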
It might be better to use a case statement:
shopt -s extglob    # the @( ) pattern below needs extended globbing
while read -r lo; do
    case $lo in
        *@(ID|Instance|NOT|AI|Sitting)*)
            echo "$lo" >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
            ;;
    esac
done < "$i"
(I assume you do further processing of these lines within the loop. If not, grep or awk could do this more efficiently.)