An aggregate function, like mean()
, takes a vector of n
elements and returns a single scalar, with the value of this scalar
depending on all the values of the input vector.
mean(c(1, 2, 3, 4, 5, 6))
## [1] 3.5
Vectorized functions, like arithmetic operators, instead return a vector as long as the longest input, and each element of the output depends only on the input elements directly involved in that operation.
Here, for example, the addition is performed between elements that share the same position.
c(1, 2, 3, 4, 5, 6) +
c(1, 2, 1, 1, 2, 1)
## [1] 2 4 4 5 7 7
Vectorized functions recycle shorter vectors,
c(1, 2, 3, 4, 5, 6) +
c(1, 2, 1)
## [1] 2 4 4 5 7 7
but if the shorter vector's length is not a divisor of the longer one's, a warning is issued.
c(1, 2, 3, 4, 5, 6) +
c(1, 2, 1, 1)
## Warning in c(1, 2, 3, 4, 5, 6) + c(1, 2, 1, 1): longer object length is not a
## multiple of shorter object length
## [1] 2 4 4 5 6 8
Window functions are a mix of the two: they generally return a vector of the same length as the input, but the values of the output can depend on all the values of the input.
row_number(c(1, 2, 3, 4, 5, 6))
## [1] 1 2 3 4 5 6
There are five families of window functions:
ranking and ordering functions
offset functions
cumulative aggregates
rolling aggregates
recycled aggregates
and in the context of dplyr we will discuss the first three.
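As a quick preview of those three families (a minimal sketch with a toy vector, with dplyr loaded as in the rest of the chapter), a ranking function, an offset function and a cumulative aggregate all return a vector as long as their input:
v <- c(10, 20, 15)
min_rank(v)  # ranking: position of each value in the sorted order
## [1] 1 3 2
lag(v)       # offset: each element shifted one position forward
## [1] NA 10 20
cumsum(v)    # cumulative aggregate: running total
## [1] 10 30 45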
The ranking and ordering family consists of row_number(), min_rank(), dense_rank(), cume_dist(), percent_rank() and ntile(). These functions rank the elements of a vector by their values, with smaller values ranked first (i.e. receiving the lower rank numbers).
row_number(c(2, 8, 6, 3, 4))
## [1] 1 5 4 2 3
They differ from one another in how they resolve ties, and this dictates whether ranks are missing or repeated in the output.
row_number()
uses the position to resolve ties by
assigning the higher rank to the element with the lower position, the
one that “comes first”.
(x <- c(1, 2, 2, 3))
## [1] 1 2 2 3
row_number(x)
## [1] 1 2 3 4
With row_number()
there are no missing or repeated
ranks.
min_rank() allows ties and, when one occurs, the rank of the following element is increased by the number of elements in the tie minus 1, similar to what happens with sports results (if two athletes tie for second place, the next one is ranked fourth).
x
## [1] 1 2 2 3
min_rank(x)
## [1] 1 2 2 4
There can therefore be both missing and repeated ranks.
dense_rank() allows ties but, unlike min_rank(), uses the whole scale without skipping ranks (the athlete of the previous example would rank third here).
x
## [1] 1 2 2 3
dense_rank(x)
## [1] 1 2 2 3
There are no missing ranks but some can be repeated.
cume_dist()
and percent_rank()
are
different as they don’t return an integer but a fraction.
The former returns the relative percentiles of a cumulative distribution, calculated with the following formula:
number of elements less than or equal to that element / total number of elements
It can be seen as the proportion of all values less than or equal to a specific one.
x
## [1] 1 2 2 3
cume_dist(x)
## [1] 0.25 0.75 0.75 1.00
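As a sanity check of the formula (just a base R sketch, not how cume_dist() is implemented internally), we can compute the same proportions by hand:
# proportion of elements less than or equal to each element
sapply(x, function(v) sum(x <= v) / length(x))
## [1] 0.25 0.75 0.75 1.00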
The latter rescales the values between 0 and 1 (included) and the ranks are calculated as
number of elements less than that element / number of elements excluding that element
x
## [1] 1 2 2 3
percent_rank(x)
## [1] 0.0000000 0.3333333 0.3333333 1.0000000
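The formula can be verified by hand in the same way (again just a base R sketch):
# elements strictly smaller than each element, over the n - 1 remaining elements
sapply(x, function(v) sum(x < v) / (length(x) - 1))
## [1] 0.0000000 0.3333333 0.3333333 1.0000000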
It can also be computed by rescaling the results of min_rank() to [0, 1].
min_rank(x)
## [1] 1 2 2 4
(min_rank(x) - min(min_rank(x))) /
  (max(min_rank(x)) - min(min_rank(x)))
## [1] 0.0000000 0.3333333 0.3333333 1.0000000
For both functions there can be repeated ranks, while the idea of missing ones loses its meaning here.
One difference between the two is that cume_dist() will never return a value equal to 0, as every element is always less than or equal to itself.
ntile() divides the vector into n different groups. n must be a single positive integer.
x
## [1] 1 2 2 3
ntile(x, n = 2)
## [1] 1 1 2 2
The maximum difference in group sizes is 1 and, when the output requires groups of different sizes, elements are assigned so that the larger groups come first (they receive the lower group numbers).
sort(c(x, x))
## [1] 1 1 2 2 2 2 3 3
ntile(sort(c(x, x)), n = 3)
## [1] 1 1 1 2 2 2 3 3
From these examples we can see that the same value (2 in this case) can be assigned to different groups.
When n (which can also be passed positionally, without naming it) is greater than the number of elements in the vector, ntile() will return groups of size 1.
ntile(x, 10)
## [1] 1 2 3 4
When used on a data frame the default for its first argument is row_number(), so, if we don't supply one, ntile() will use the row indexes to create the groups.
df %>%
  slice(1:8) %>%
  mutate(group = ntile(n = 4))
df %>%
  slice(1:8) %>%
  mutate(group = ntile(row_number(), n = 4))
n
must be explicit in these cases.
df %>%
  slice(1:8) %>%
  mutate(group = ntile(4))
Error in `mutate()`:
ℹ In argument: `group = ntile(4)`.
Caused by error in `ntile()`:
! `n` must be a whole number, not absent.
As just seen, inside a data frame row_number()
can in
fact also be used without an argument. In this case the “column” it will
use to rank is the row indexes (the same implicit column it uses to
resolve ties).
df %>%
  arrange(row_number())
df %>%
  filter(between(row_number(), 1, 10))
As the name suggests, we can use ntile() to filter by quantiles, especially with databases where there's no median function.
x
## [1] 1 2 2 3
median(x)
## [1] 2
ntile(x, 2) == 2
## [1] FALSE FALSE TRUE TRUE
x[ntile(x, 2) == 2]
## [1] 2 3
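For the database case mentioned above, a sketch of what the same filter would generate as SQL (this assumes the dbplyr package is available; lazy_frame() only simulates a backend, so no real connection is needed and no output is reproduced here):
library(dbplyr)
# a simulated lazy table with an illustrative column name,
# used only to inspect the generated query
lazy_frame(Quantity = 1:4) %>%
  filter(ntile(Quantity, 2) == 2) %>%
  show_query()
# the generated query computes something of the form
# NTILE(2) OVER (ORDER BY `Quantity`) in a subquery and filters on it,
# standing in for the median function many databases lack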
With these functions we can rank directly on expressions.
(x / 2 - sqrt(x)) ^ 2
## [1] 0.25000000 0.17157288 0.17157288 0.05384758
row_number((x / 2 - sqrt(x)) ^ 2)
## [1] 4 2 3 1
min_rank((x / 2 - sqrt(x)) ^ 2)
## [1] 4 2 2 1
dense_rank((x / 2 - sqrt(x)) ^ 2)
## [1] 3 2 2 1
cume_dist((x / 2 - sqrt(x)) ^ 2)
## [1] 1.00 0.75 0.75 0.25
percent_rank((x / 2 - sqrt(x)) ^ 2)
## [1] 1.0000000 0.3333333 0.3333333 0.0000000
ntile((x / 2 - sqrt(x)) ^ 2, 2)
## [1] 2 1 2 1
Besides numbers, we can also rank strings
<- c("a", "b", "b", "c")) (y
## [1] "a" "b" "b" "c"
row_number(y)
## [1] 1 2 3 4
min_rank(y)
## [1] 1 2 2 4
dense_rank(y)
## [1] 1 2 2 3
cume_dist(y)
## [1] 0.25 0.75 0.75 1.00
percent_rank(y)
## [1] 0.0000000 0.3333333 0.3333333 1.0000000
ntile(y, 2)
## [1] 1 1 2 2
and dates.
<- as.Date(c("1900-11-24", "2001-10-18", "2001-10-18", "2004-05-28"))) (z
## [1] "1900-11-24" "2001-10-18" "2001-10-18" "2004-05-28"
row_number(z)
## [1] 1 2 3 4
min_rank(z)
## [1] 1 2 2 4
dense_rank(z)
## [1] 1 2 2 3
cume_dist(z)
## [1] 0.25 0.75 0.75 1.00
percent_rank(z)
## [1] 0.0000000 0.3333333 0.3333333 1.0000000
ntile(z, 2)
## [1] 1 1 2 2
With these functions, NAs are ranked as NA.
(xNA <- c(1, 2, NA, 2, 3))
## [1] 1 2 NA 2 3
row_number(xNA)
## [1] 1 2 NA 3 4
min_rank(xNA)
## [1] 1 2 NA 2 4
dense_rank(xNA)
## [1] 1 2 NA 2 3
cume_dist(xNA)
## [1] 0.25 0.75 NA 0.75 1.00
percent_rank(xNA)
## [1] 0.0000000 0.3333333 NA 0.3333333 1.0000000
ntile(xNA, 2)
## [1] 1 1 NA 2 2
The ranks output by these functions can be used to arrange the rows of a data frame by the values of a column.
df %>%
  arrange(row_number(Quantity))
When using row_number(), we may want to call arrange() before ranking if the original row indexes, which are used to resolve ties, don't satisfy us (rows 4 and 5 change places here).
df %>%
  arrange(desc(StockCode)) %>%
  arrange(row_number(Quantity))
Using a ranking function with arrange() is not, however, different from a direct arrange() call.
identical(df %>%
            arrange(Quantity),
          df %>%
            arrange(row_number(Quantity)))
## [1] TRUE
They are therefore more useful if we store the ranks in a column with
mutate()
.
df %>%
  mutate(Quantity_Ranks = row_number(Quantity), .keep = "used")
This column can then be used to filter for a rank of choice.
df %>%
  mutate(Quantity_Ranks = row_number(Quantity)) %>%
  filter(Quantity_Ranks == 5)
An operation that can also be done directly, without an intermediary mutate() call.
df %>%
  filter(row_number(Quantity) == 5)
We need to pay attention when choosing the ranking function to filter with, as they don't all return the same results.
df %>%
  filter(min_rank(Quantity) == 5)
min_rank() in fact returns 0 rows here, as the fifth rank is missing because of the way ties influence the output.
df %>%
  mutate(Ranks = min_rank(Quantity), .keep = "used") %>%
  arrange(Ranks)
And, for the same reason, dense_rank() returns a different row than row_number().
df %>%
  filter(dense_rank(Quantity) == 5)
df %>%
  mutate(Ranks = dense_rank(Quantity), .keep = "used") %>%
  arrange(Ranks)
Another difference between the three functions is the total number of ranks they output, which stems from how the ranks are incremented in case of ties.
bind_cols("Price" = sort(df$Price)[1:1606],
%>%
df count(rank_from_row_number = row_number(Price), name = "row_number_#_of_elements") %>%
slice(1:1606),
"Price_unique_values" = sort(unique(df$Price)),
%>%
df count(rank_from_min_rank = min_rank(Price), name = "min_rank_#_of_elements"),
%>%
df count(rank_from_dense_rank = dense_rank(Price), name = "dense_rank_#_of_elements"))
As row_number()
uses one rank for each element of the
column it evaluates, its total number of ranks will be equal to the
number of elements.
nrow(count(df, row_number(Price)))
## [1] 525461
bind_cols("Price" = sort(df$Price),
%>%
df count(rank_from_row_number = row_number(Price), name = "row_number_#_of_elements"))
For min_rank() and dense_rank(), instead, the total number of ranks will be equal to the number of unique elements, as elements with the same value get the same rank.
nrow(count(df, min_rank(Price)))
## [1] 1606
nrow(count(df, dense_rank(Price)))
## [1] 1606
bind_cols("Price_unique_values" = sort(unique(df$Price)),
%>%
df count(rank_from_min_rank = min_rank(Price), name = "min_rank_#_of_elements"),
%>%
df count(rank_from_dense_rank = dense_rank(Price), name = "dense_rank_#_of_elements"))
The last rank they output is also different: with row_number() or min_rank() it is equal to the total number of non-NA values (barring ties on the maximum), whereas with dense_rank() it is equal to the total number of unique non-NA values, which is usually smaller. Sometimes we might prefer this smaller and more manageable number of ranks.
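On a small vector the difference in the last rank is easy to verify; a quick sketch with the xNA vector from before:
max(min_rank(xNA), na.rm = TRUE)    # equal to the number of non-NA values (no tie on the maximum here)
## [1] 4
max(dense_rank(xNA), na.rm = TRUE)  # equal to the number of unique non-NA values
## [1] 3
The chunk below shows all three ranks side by side on the Price column of df.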
df %>%
  group_by(rank_from_row_number = row_number(Price)) %>%
  group_by(rank_from_min_rank = min_rank(Price)) %>%
  group_by(rank_from_dense_rank = dense_rank(Price)) %>%
  ungroup() %>%
  select(Price, (ncol(df) + 1):ncol(.)) %>%
  arrange(desc(Price), desc(rank_from_row_number))
To break ties we can use several columns, with the same logic we use with arrange().
df %>%
  arrange(Quantity)
df %>%
  arrange(Quantity, desc(InvoiceDate))
We have to use pick()
to select the columns.
df %>%
  mutate(Rank = dense_rank(pick(Quantity, InvoiceDate)), .keep = "used") %>%
  arrange(Rank)
Without the second column we would have had rank 4 two times.
df %>%
  mutate(Rank = dense_rank(Quantity), .keep = "used") %>%
  arrange(Rank)
But when using two columns the ranks from the second one are used to break the ties of the first.
df %>%
  mutate(across(c(Quantity, InvoiceDate), dense_rank, .names = "Rank_{.col}"), .keep = "used") %>%
  arrange(Rank_Quantity, Rank_InvoiceDate)
We can also create a logical column to filter on.
df %>%
  mutate(Good_Rank = row_number(Quantity) == 5) %>%
  filter(Good_Rank)
cume_dist()
and percent_rank()
must be used
differently, as it would be very hard to guess an exact percentage to
filter for, so we usually use them with comparisons.
df %>%
  filter(cume_dist(Quantity) < 0.1)
df %>%
  filter(percent_rank(Quantity) < 0.1)
With ntile()
we decide both the number of groups and the
groups we are looking for.
df %>%
  filter(ntile(Quantity, 400) == 50)
df %>%
  filter(ntile(Quantity, 400) > 50)
As we’ve seen before, we can use ntile()
to filter by
quantiles.
df %>%
  filter(ntile(Quantity, 2) == 2)
We can use desc() or a minus (-) in case we want to invert the default ranking behavior that assigns higher ranks to smaller values.
df %>%
  mutate(row_number(desc(Quantity)), .keep = "used")
df %>%
  mutate(row_number(-(Quantity)), .keep = "used")
It doesn’t work for row_number()
without an argument
though, it doesn’t invert the order of row indexes.
df %>%
  filter(row_number(desc()) == 5)
Error in `filter()`:
ℹ In argument: `row_number(desc()) == 5`.
Caused by error in `desc()`:
! argument "x" is missing, with no default
This can be solved with a workaround.
df %>%
  filter(row_number() == n() - 5)
Or more simply with slice()
.
df %>%
  slice(n() - 5)
With a combination of arrange()
plus
slice()
we can in fact replicate, with some caveats, the
results of filtering using ranking functions.
df %>%
  filter(row_number(Quantity) == 5)
df %>%
  arrange(Quantity) %>%
  slice(5)
We have less control over the ties though, so a ranking function is preferable.
This example shows it well: slice() cuts off at the 8th row while filter() includes all the rows tied at a value of 10000.
df %>%
  arrange(desc(Quantity)) %>%
  slice(1:8)
df %>%
  filter(min_rank(desc(Quantity)) <= 8)
Notice as well how the code with filter()
doesn’t modify
the rows’ order.
slice() has some helpers though, slice_max() and slice_min(), that are better at dealing with ties.
df %>%
  filter(min_rank(desc(Quantity)) <= 8)
df %>%
  slice_max(Quantity, n = 8)
df %>%
  slice_max(Quantity, n = 8, with_ties = FALSE)
With grouped data frames we get the expected result of a selected number of rows per group.
df %>%
  group_by(Country) %>%
  filter(row_number() == 5)
df %>%
  group_by(Country) %>%
  filter(row_number(Quantity) == 5)
df %>%
  group_by(Country) %>%
  filter(min_rank(Quantity) == 5)
df %>%
  group_by(Country) %>%
  filter(dense_rank(Quantity) == 5)
df %>%
  group_by(Country) %>%
  filter(cume_dist(Quantity) < 0.1)
df %>%
  group_by(Country) %>%
  filter(percent_rank(Quantity) < 0.1)
df %>%
  group_by(Country) %>%
  filter(ntile(Quantity, 5) == 5)
This behavior is not one to one with arrange()
plus
slice()
as slice()
sorts the rows by the
grouping columns.
df %>%
  group_by(Country) %>%
  arrange(Quantity) %>%
  slice(5)
The behavior is as expected also with mutate()
.
df %>%
  group_by(Country) %>%
  mutate(Row_Index = row_number(), .keep = "used")
df %>%
  group_by(Country) %>%
  mutate(Quantity_Ranks = row_number(Quantity), .keep = "used")