Find the identical rows in a matrix [duplicate]
This question already has an answer here: How to efficiently find positions of duplicates? (8 answers)
Suppose I have the following matrix:
M =
{{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}};
TableForm[M, TableHeadings -> {{S1, S2, S3, S4, S5, S6, S7, S8}}]
In this case, it turns out that rows (S1, S8), (S2, S3, S4), (S5, S6, S7) have equal element values in identical column positions. I have a 1000 x 1000 matrix to examine and would appreciate any assistance in coding this problem.
list-manipulation matrix
asked Apr 25 at 19:15 by PRG, edited Apr 25 at 20:03 by m_goldberg
marked as duplicate by Michael E2, m_goldberg, happy fish, bbgodfrey, C. E. May 1 at 13:07
Try Values[PositionIndex[M]]
– Coolwater
Apr 25 at 21:39
@Coolwater If there is a unique row, your method will fail. At the least, one needs to delete the sublists of length 1.
– Okkes Dulgerci
Apr 25 at 22:58
@Coolwater IMHO, the best answer is lacking so far. Please consider posting PositionIndex as a possible solution.
– Henrik Schumacher
Apr 26 at 12:08
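Putting the two comments together, a minimal sketch of the suggested PositionIndex approach that also drops any singleton groups (this sketch is an addition, not part of the original comments):
Select[Values[PositionIndex[M]], Length[#] > 1 &]
(* {{1, 8}, {2, 3, 4}, {5, 6, 7}} *)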
4 Answers
I'd use GroupBy.
First, the names of the rows; they can be anything you like, for example:
rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]
{S1, S2, S3, S4, S5, S6, S7, S8}
Next the grouping:
groups = GroupBy[Thread[rownames -> M], Last -> First]
<|{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} -> {S1, S8},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0} -> {S2, S3, S4},
{0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0} -> {S5, S6, S7}|>
If all you need are the names:
Values[groups]
{{S1, S8}, {S2, S3, S4}, {S5, S6, S7}}
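For a large matrix in which most rows are unique, each unique row appears above as its own singleton group; a small added sketch that keeps only the groups actually containing duplicates:
(* for the example matrix all three groups remain *)
Select[groups, Length[#] > 1 &]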
answered Apr 26 at 11:45 by Roman
(* for each row, find all rows at distance 0, i.e. all rows identical to it *)
idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, {∞, 0}]]
{{1, 8}, {2, 3, 4}, {5, 6, 7}}
In order to obtain the labels of the rows, you may use the following:
labels = {S1, S2, S3, S4, S5, S6, S7, S8};
Map[labels[[#]] &, idx, {2}]
{{S1, S8}, {S2, S3, S4}, {S5, S6, S7}}
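If defining the symbols S1, ..., S1000 for a large matrix is inconvenient, string labels can be mapped directly onto the index lists instead (an added sketch, analogous to the tag helper in the last answer below):
Map["S" <> ToString[#] &, idx, {2}]
(* {{"S1", "S8"}, {"S2", "S3", "S4"}, {"S5", "S6", "S7"}} *)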
edited Apr 25 at 20:04, answered Apr 25 at 19:22 by Henrik Schumacher
Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
– PRG
Apr 25 at 19:26
MANY THANKS, HENRIK!
– PRG
Apr 25 at 20:33
YOU'RE WELCOME, PRG! =D
– Henrik Schumacher
Apr 25 at 20:34
The function positionDuplicates from How to efficiently find positions of duplicates? does the job, faster than Nearest.
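For reference, positionDuplicates is defined in the benchmark answer below (attributed there to Szabolcs) as a GatherBy over the row indices:
positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &]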
(* Henrik's method *)
posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, {∞, 0}]]
SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
foo = Nest[RandomInteger[1, {1000, 1000}] # &, 1, 9];
d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
(* {0.017, Null} *)
d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
(* {0.060, Null} *)
d1 == d2
(* True *)
d1
(*
{{25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777},
{173, 516}, {544, 816}, {610, 720}}
*)
answered Apr 26 at 0:28 by Michael E2
Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates.
– Okkes Dulgerci
Apr 26 at 1:41
@OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex solution.
– Michael E2
Apr 26 at 1:44
@OkkesDulgerci It's interesting that PositionIndex outperforms positionDuplicates on a list of lists, because it is still much slower on a list of integers.
– Michael E2
Apr 26 at 1:53
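To check that comparison yourself, a quick timing sketch (an addition; it assumes positionDuplicates as defined above and the matrix foo from the answer, and absolute timings vary by machine and version):
flat = RandomInteger[100, 10^6];                  (* a flat list of integers *)
Values[PositionIndex[foo]]; // RepeatedTiming     (* list of lists *)
positionDuplicates[foo]; // RepeatedTiming
Values[PositionIndex[flat]]; // RepeatedTiming    (* list of integers *)
positionDuplicates[flat]; // RepeatedTiming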
While this question repeats a previous query about finding positions of duplicates, the duplicates here are amongst a list of binary vectors, in contrast to the original duplicates occurring amongst a list of numbers. As illustrated in an answer to the original query, however, the type, depth and distribution of inputs can significantly impact efficiency, so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings of the "superfunction" DuplicatePositions (collected and defined from answers to the original question, in particular those of Szabolcs, Carl Woll and Mr.Wizard), of positionDuplicates (the fastest solution for numbers, from Szabolcs) and a tweak via the "UseGatherByLocalMap" Method option (from Carl Woll), of the accepted groupBy answer (by Roman) and of the nearest answer (by Henrik Schumacher), for various types of binary vectors. I've contributed the "UseOrdering" Method in DuplicatePositions:
duplicatePositionsByOrdering[ls_]:= SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]
which seems to do well for sparse vectors (a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their answers). Note that a random 1000 x 1000 binary matrix is overwhelmingly likely to contain no (row) duplicates at all, so presumably the OP's actual data is not randomly generated and instead includes repeated rows. To the timings (the tag function just puts in the S1, S2, ... tags as originally requested, and the tick indicates identical output):
Obviously timings aren't everything, since short, clear functions can often be preferable (as well as potentially being more efficient for different inputs), but they can also be illuminating: here, for example, they indicate that GroupBy seems to recognize order for ragged vectors, unlike GatherBy.
The code for the above output is below.
SetAttributes[benchmark, HoldAll];
benchmark[functions_, opts : OptionsPattern[]] :=
Function[input, benchmark[functions, input, opts], HoldAll];
benchmark[functions_, input_, OptionsPattern[]] := Module[{
tm = Function[fn,
Function[x, <|ToString[fn] -> RepeatedTiming@fn@x|>]],
timesOutputs, times},
SeedRandom@0;
timesOutputs = Through[(tm /@ functions)@input];
times =
SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
If[OptionValue@"CheckOutputs",
Labeled[times,
Row[{ToString@Unevaluated@input, Spacer@80,
If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
Style["[Checkmark]", Green, 20], Style["x", Red, 20]]}],
Top], times]
];
Options[benchmark] = {"CheckOutputs" -> True};
Options[DuplicatePositions] = {Method -> Automatic};
DuplicatePositions[ls_, OptionsPattern[]] :=
With[{method = OptionValue[Method]},
Switch[method,
"UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
"UsePositionIndex", Values@PositionIndex@ls,
"UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
"UseGatherByLocalMap", Module[{func}, func /: Map[func, _] := ls;
GatherBy[Range@Length@ls, func]],
Automatic, Which[
ArrayQ[ls, 1, NumericQ],
DuplicatePositions[ls, "Method" -> "UseGatherBy" ],
ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
MatchQ[{{_?IntegerQ ..} ..}]@ls,
DuplicatePositionsBy[ls, FromDigits],
True, DuplicatePositions[ls, Method -> "UsePositionIndex" ]
]]];
DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
DuplicatePositions[fn /@ ls, opts];
tag = Map["S" <> ToString@# &, #, {-1}] &;
positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
groupBy[M_] := With[
{rownames = Array["S" <> ToString[#] &, Length[M]]},
Values@GroupBy[Thread[rownames -> M], Last -> First]];
nearest[M_] :=
DeleteDuplicates[
Sort /@ Nearest[M -> Automatic, M, {∞, 0}]];
n = 10^4;
binaryVectors50k =
IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
fns = {
groupBy,
(nearest@# // tag) &,
(DuplicatePositions@# // tag) &,
(DuplicatePositionsBy[#, FromDigits[#, 2] &,
Method -> "UseGatherByLocalMap"] // tag) &,
(positionDuplicates@# // tag) &
};
benchmark[fns]@binaryVectors50k
n = 10^3;
binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
fns = {
groupBy,
(DuplicatePositions@# // tag) &,
(DuplicatePositionsBy[#, FromDigits[#, 2] &,
Method -> "UseGatherByLocalMap"] // tag) &,
(positionDuplicates@# // tag) &
};
benchmark[fns]@binaryVectorsRagged5k
n = 10^4;
binaryVectorsSparse10k := RandomInteger[1, {n, n}];
fns = {
(DuplicatePositions@# // tag) &,
(positionDuplicates@# // tag) &,
(DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
groupBy};
benchmark[fns]@binaryVectorsSparse10k
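As a quick check on the OP's small matrix M, a usage sketch (an addition; it assumes the definitions above have been evaluated, and it returns string labels rather than the symbols S1, S2, ... used in the question):
DuplicatePositions[M] // tag
(* {{"S1", "S8"}, {"S2", "S3", "S4"}, {"S5", "S6", "S7"}} *)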
edited May 8 at 23:08, answered Apr 26 at 13:40 by Ronald Monson