In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938, though Gustav Fechner had proposed a similar measure in the context of time series in 1897.
Intuitively, the Kendall correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully different for a correlation of −1) rank between the two variables.
Both Kendall's $\tau$ and Spearman's $\rho$ can be formulated as special cases of a more general correlation coefficient. Its notions of concordance and discordance also appear in other areas of statistics, like the Rand index in cluster analysis.
Let $(x_1, y_1), \dots, (x_n, y_n)$ be a set of observations of the joint random variables $X$ and $Y$, such that all the values of $x_i$ and $y_i$ are unique (ties are neglected for simplicity). Any pair of observations $(x_i, y_i)$ and $(x_j, y_j)$, where $i < j$, is said to be concordant if the sort order of $(x_i, x_j)$ and $(y_i, y_j)$ agrees: that is, if either both $x_i > x_j$ and $y_i > y_j$ hold or both $x_i < x_j$ and $y_i < y_j$; otherwise the pair is said to be discordant.
The Kendall τ coefficient is defined as:
$$\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\text{number of pairs}} = 1 - \frac{2 \, (\text{number of discordant pairs})}{\binom{n}{2}},$$

where $\binom{n}{2} = \frac{n(n-1)}{2}$ is the binomial coefficient for the number of ways to choose two items from $n$ items.
The number of discordant pairs is equal to the number of inversions in the permutation that maps the $y$-sequence into the same order as the $x$-sequence.
The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ τ ≤ 1.
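To make the definition concrete, here is a minimal Python sketch (the function name and data are illustrative, not from any particular library) that counts concordant and discordant pairs directly over all $\binom{n}{2}$ pairs, assuming no ties:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau by the definition: (concordant - discordant) / (n choose 2).

    Assumes all values in x and all values in y are distinct (no ties).
    """
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # The pair is concordant when the orderings of x and y agree.
        if (x[i] - x[j]) * (y[i] - y[j]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Example: a perfectly concordant sample gives tau = 1.
print(kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```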
The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of X or Y or the distribution of (X,Y).
Under the null hypothesis of independence of X and Y, the sampling distribution of τ has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance $\frac{2(2n+5)}{9n(n-1)}$.
Theorem. If the samples are independent, then the variance of $\tau$ is $\mathbb{V}[\tau] = \frac{2(2n+5)}{9n(n-1)}$.
Proof. Without loss of generality, reorder the data pairs so that $x_1 < x_2 < \cdots < x_n$. By the assumption of independence, the order of $y_1, \dots, y_n$ is a permutation sampled uniformly at random from $S_n$, the permutation group on $\{1, \dots, n\}$.
For each permutation, its unique inversion code is $l_0 l_1 \cdots l_{n-1}$, where each $l_i$ is in the range $\{0, 1, \dots, i\}$. Sampling a permutation uniformly is equivalent to sampling its inversion code uniformly, which in turn is equivalent to sampling each $l_i$ uniformly and independently.
Since the number of discordant pairs equals the total inversion number $\sum_i l_i$, we have $\tau = 1 - \frac{4 \sum_i l_i}{n(n-1)}$, and therefore

$$\begin{aligned}
\mathbb{E}[\tau^2] &= \mathbb{E}\left[\left(1 - \frac{4\sum_i l_i}{n(n-1)}\right)^2\right] \\
&= 1 - \frac{8}{n(n-1)}\sum_i \mathbb{E}[l_i] + \frac{16}{n^2(n-1)^2}\sum_{i,j} \mathbb{E}[l_i l_j] \\
&= 1 - \frac{8}{n(n-1)}\sum_i \mathbb{E}[l_i] + \frac{16}{n^2(n-1)^2}\left(\sum_{i,j} \mathbb{E}[l_i]\,\mathbb{E}[l_j] + \sum_i \mathbb{V}[l_i]\right) \\
&= \left(1 - \frac{4\sum_i \mathbb{E}[l_i]}{n(n-1)}\right)^2 + \frac{16}{n^2(n-1)^2}\sum_i \mathbb{V}[l_i].
\end{aligned}$$

The first term is just $\mathbb{E}[\tau]^2 = 0$. The second term can be calculated by noting that $l_i$ is a uniform random variable on $\{0, \dots, i\}$, so $\mathbb{E}[l_i] = \frac{i}{2}$ and $\mathbb{E}[l_i^2] = \frac{0^2 + \cdots + i^2}{i+1} = \frac{i(2i+1)}{6}$, giving $\mathbb{V}[l_i] = \frac{i(2i+1)}{6} - \frac{i^2}{4} = \frac{i(i+2)}{12}$. Using the sum-of-squares formula again, $\sum_{i=0}^{n-1} \mathbb{V}[l_i] = \frac{n(n-1)(2n+5)}{72}$, and therefore $\mathbb{V}[\tau] = \frac{2(2n+5)}{9n(n-1)}$. $\blacksquare$
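The theorem can also be checked empirically. The following simulation sketch (the sample size and trial count are arbitrary choices) draws uniform random permutations and compares the empirical variance of $\tau$ with $\frac{2(2n+5)}{9n(n-1)}$:

```python
import random

def tau_of_permutation(perm):
    """Kendall's tau between (0, ..., n-1) and a permutation of it."""
    n = len(perm)
    # Number of discordant pairs = number of inversions of the permutation.
    disc = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return 1 - 4 * disc / (n * (n - 1))

n, trials = 10, 20000
taus = [tau_of_permutation(random.sample(range(n), n)) for _ in range(trials)]
mean = sum(taus) / trials
var = sum((t - mean) ** 2 for t in taus) / trials
print(var)                                 # empirical variance, ~0.0617 for n = 10
print(2 * (2 * n + 5) / (9 * n * (n - 1)))  # theoretical value, ~0.0617
```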
Asymptotic normality. In the $n \to \infty$ limit, $z_A = \frac{\tau_A}{\sqrt{\mathbb{V}[\tau_A]}} = \frac{n_C - n_D}{\sqrt{n(n-1)(2n+5)/18}}$ converges in distribution to the standard normal distribution.
Proof. Use a result from Hoeffding (1948), "A class of statistics with asymptotically normal distribution".
If $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$ are IID samples from the same jointly normal distribution with a known Pearson correlation coefficient $r$, then the expectation of the Kendall rank correlation has a closed-form formula.
Greiner's equality. If $X, Y$ are jointly normal with correlation $r$, then

$$r = \sin\left(\frac{\pi}{2} \mathbb{E}[\tau]\right).$$

The name is credited to Richard Greiner (1909) by P. A. P. Moran.
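This identity lends itself to a quick numerical check. The sketch below assumes NumPy and SciPy; the correlation $r = 0.6$, sample size, and trial count are arbitrary choices:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
r = 0.6
cov = [[1.0, r], [r, 1.0]]

# Estimate E[tau] by averaging the sample tau over many independent draws.
taus = []
for _ in range(200):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=100)
    tau, _ = kendalltau(xy[:, 0], xy[:, 1])
    taus.append(tau)

# Greiner's equality: sin(pi/2 * E[tau]) should be close to r.
print(np.sin(np.pi / 2 * np.mean(taus)))  # approximately 0.6
```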
Proof. Define the following quantities: $\Delta_{i,j} = (x_i - x_j,\; y_i - y_j)$, and $A^+ = \{(u, v) \in \mathbb{R}^2 : uv > 0\}$, the union of the open first and third quadrants.
In this notation, we see that the number of concordant pairs, $n_C$, is equal to the number of $\Delta_{i,j}$ that fall in the subset $A^+$. That is, $n_C = \sum_{1 \leq i < j \leq n} 1_{\Delta_{i,j} \in A^+}$.
Thus,

$$\mathbb{E}[\tau] = \frac{4}{n(n-1)} \mathbb{E}[n_C] - 1 = \frac{4}{n(n-1)} \sum_{1 \leq i < j \leq n} \Pr(\Delta_{i,j} \in A^+) - 1.$$

Since each $(x_i, y_i)$ is an IID sample of the jointly normal distribution, the pairing does not matter, so each term in the summation is exactly the same, and so
$$\mathbb{E}[\tau] = 2 \Pr(\Delta_{1,2} \in A^+) - 1,$$

and it remains to calculate the probability. We do this by repeated affine transforms. First normalize $X, Y$ by subtracting the mean and dividing by the standard deviation. This does not change $\tau_A$. This gives us
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{1/2} \begin{bmatrix} z \\ w \end{bmatrix},$$

where $(Z, W)$ is sampled from the standard normal distribution on $\mathbb{R}^2$. Thus,
$$\Delta_{1,2} = \sqrt{2} \begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{1/2} \begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix},$$

where the vector $\begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix}$ is still distributed as the standard normal distribution on $\mathbb{R}^2$. It remains to perform some tedious but unenlightening matrix square roots and trigonometry, which can be skipped over. Thus, $\Delta_{1,2} \in A^+$ iff
$$\begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix} \in \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{-1/2} A^+ = \frac{1}{2\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{1+r}} + \frac{1}{\sqrt{1-r}} & \frac{1}{\sqrt{1+r}} - \frac{1}{\sqrt{1-r}} \\ \frac{1}{\sqrt{1+r}} - \frac{1}{\sqrt{1-r}} & \frac{1}{\sqrt{1+r}} + \frac{1}{\sqrt{1-r}} \end{bmatrix} A^+,$$

where the subset on the right is a "squashed" version of two quadrants. Since the standard normal distribution is rotationally symmetric, we need only calculate the angle spanned by each squashed quadrant.

The first quadrant is the sector bounded by the two rays $(1, 0)$ and $(0, 1)$. It is transformed to the sector bounded by the two rays $\left(\frac{1}{\sqrt{1+r}} + \frac{1}{\sqrt{1-r}},\; \frac{1}{\sqrt{1+r}} - \frac{1}{\sqrt{1-r}}\right)$ and $\left(\frac{1}{\sqrt{1+r}} - \frac{1}{\sqrt{1-r}},\; \frac{1}{\sqrt{1+r}} + \frac{1}{\sqrt{1-r}}\right)$. They respectively make angle $\theta$ with the horizontal and vertical axes, where
$$\theta = \arctan \frac{\frac{1}{\sqrt{1-r}} - \frac{1}{\sqrt{1+r}}}{\frac{1}{\sqrt{1+r}} + \frac{1}{\sqrt{1-r}}},$$

so that each ray is tilted outward from its axis by $\theta$; a short calculation (substitute $r = \sin 2\alpha$) shows $\theta = \frac{1}{2} \arcsin r$. Together, the two transformed quadrants span an angle of $\pi + 4\theta$, so
$$\Pr(\Delta_{1,2} \in A^+) = \frac{\pi + 4\theta}{2\pi},$$

and therefore

$$\mathbb{E}[\tau] = 2 \Pr(\Delta_{1,2} \in A^+) - 1 = \frac{4\theta}{\pi} = \frac{2}{\pi} \arcsin r,$$

which is equivalent to $r = \sin\left(\frac{\pi}{2} \mathbb{E}[\tau]\right)$. $\blacksquare$

A pair $\{(x_i, y_i), (x_j, y_j)\}$ is said to be tied if and only if $x_i = x_j$ or $y_i = y_j$; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range $[-1, 1]$:
The Tau-a statistic tests the strength of association of cross tabulations. Both variables must be ordinal. Tau-a makes no adjustment for ties. It is defined as:
$$\tau_A = \frac{n_c - n_d}{n_0},$$

where $n_c$, $n_d$ and $n_0$ are defined as in the next section.
The Tau-b statistic, unlike Tau-a, makes adjustments for ties. Values of Tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association.
The Kendall Tau-b coefficient is defined as:
$$\tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}},$$

where
$$\begin{aligned}
n_0 &= n(n-1)/2 \\
n_1 &= \textstyle\sum_i t_i(t_i - 1)/2 \\
n_2 &= \textstyle\sum_j u_j(u_j - 1)/2 \\
n_c &= \text{number of concordant pairs} \\
n_d &= \text{number of discordant pairs} \\
t_i &= \text{number of tied values in the } i\text{th group of ties for the first quantity} \\
u_j &= \text{number of tied values in the } j\text{th group of ties for the second quantity}
\end{aligned}$$

A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula.
Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.
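In practice, library implementations handle the tie corrections. For example, SciPy's `scipy.stats.kendalltau` computes Tau-b by default; a minimal usage sketch, with arbitrary illustrative data containing ties:

```python
from scipy.stats import kendalltau

x = [12, 2, 1, 12, 2]  # note the tied values in both variables
y = [1, 4, 7, 1, 0]

tau_b, p_value = kendalltau(x, y)
print(tau_b, p_value)  # tau_b ≈ -0.471, p ≈ 0.28
```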
Tau-c (also called Stuart-Kendall Tau-c) is more suitable than Tau-b for the analysis of data based on non-square (i.e. rectangular) contingency tables. So use Tau-b if the underlying scale of both variables has the same number of possible values (before ranking) and Tau-c if they differ. For instance, one variable might be scored on a 5-point scale (very good, good, average, bad, very bad), whereas the other might be based on a finer 10-point scale.
The Kendall Tau-c coefficient is defined as:
$$\tau_C = \frac{2(n_c - n_d)}{n^2 \frac{(m-1)}{m}} = \tau_A \, \frac{n-1}{n} \, \frac{m}{m-1},$$

where
$$\begin{aligned}
n_c &= \text{number of concordant pairs} \\
n_d &= \text{number of discordant pairs} \\
r &= \text{number of rows} \\
c &= \text{number of columns} \\
m &= \min(r, c)
\end{aligned}$$

When two quantities are statistically dependent, the distribution of $\tau$ is not easily characterizable in terms of known distributions. However, for $\tau_A$ the following statistic, $z_A$, is approximately distributed as a standard normal when the variables are statistically independent:
$$z_A = \frac{n_c - n_d}{\sqrt{\frac{1}{18} v_0}},$$

where $v_0 = n(n-1)(2n+5)$.
Thus, to test whether two variables are statistically dependent, one computes $z_A$ and finds the cumulative probability for a standard normal distribution at $-|z_A|$. For a two-tailed test, multiply that number by two to obtain the p-value. If the p-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.
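A short Python sketch of this procedure (the helper name and the pair counts are illustrative; `scipy.stats.norm` supplies the standard normal CDF):

```python
from math import sqrt
from scipy.stats import norm

def tau_a_test(n_c, n_d, n):
    """Two-sided normal-approximation test for tau_A from pair counts."""
    v0 = n * (n - 1) * (2 * n + 5)
    z_a = (n_c - n_d) / sqrt(v0 / 18)
    p_value = 2 * norm.cdf(-abs(z_a))  # two-tailed p-value
    return z_a, p_value

# Example: 40 observations with 550 concordant and 230 discordant pairs.
print(tau_a_test(550, 230, 40))  # z ≈ 3.73, p ≈ 0.0002
```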
Numerous adjustments should be added to $z_A$ when accounting for ties. The following statistic, $z_B$, has the same distribution as the $\tau_B$ distribution, and is again approximately distributed as a standard normal when the quantities are statistically independent:
$$z_B = \frac{n_c - n_d}{\sqrt{v}},$$

where
$$\begin{aligned}
v &= \tfrac{1}{18} v_0 - (v_t + v_u)/18 + (v_1 + v_2) \\
v_0 &= n(n-1)(2n+5) \\
v_t &= \textstyle\sum_i t_i(t_i - 1)(2t_i + 5) \\
v_u &= \textstyle\sum_j u_j(u_j - 1)(2u_j + 5) \\
v_1 &= \textstyle\sum_i t_i(t_i - 1) \sum_j u_j(u_j - 1) \,/\, (2n(n-1)) \\
v_2 &= \textstyle\sum_i t_i(t_i - 1)(t_i - 2) \sum_j u_j(u_j - 1)(u_j - 2) \,/\, (9n(n-1)(n-2))
\end{aligned}$$

This is sometimes referred to as the Mann-Kendall test.
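Assembled into code, the tie-adjusted statistic might look like the following sketch (the function name is illustrative; the tie-group sizes `t` and `u` would be obtained by counting runs of equal values in each variable):

```python
from math import sqrt

def z_b(n_c, n_d, n, t, u):
    """Tie-adjusted z statistic; t and u are lists of tie-group sizes."""
    v0 = n * (n - 1) * (2 * n + 5)
    vt = sum(ti * (ti - 1) * (2 * ti + 5) for ti in t)
    vu = sum(uj * (uj - 1) * (2 * uj + 5) for uj in u)
    v1 = (sum(ti * (ti - 1) for ti in t) * sum(uj * (uj - 1) for uj in u)
          / (2 * n * (n - 1)))
    v2 = (sum(ti * (ti - 1) * (ti - 2) for ti in t)
          * sum(uj * (uj - 1) * (uj - 2) for uj in u)
          / (9 * n * (n - 1) * (n - 2)))
    v = (v0 - vt - vu) / 18 + v1 + v2
    return (n_c - n_d) / sqrt(v)
```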
The direct computation of the numerator $n_c - n_d$ involves two nested iterations, as characterized by the following pseudocode:
```
numer := 0
for i := 2..N do
    for j := 1..(i − 1) do
        numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
return numer
```

Although quick to implement, this algorithm is $O(n^2)$ in complexity and becomes very slow on large samples. A more sophisticated algorithm built upon the Merge Sort algorithm can be used to compute the numerator in $O(n \log n)$ time.
Begin by ordering your data points by the first quantity, $x$, and secondarily (among ties in $x$) by the second quantity, $y$. With this initial ordering, $y$ is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial $y$. An enhanced Merge Sort algorithm, with $O(n \log n)$ complexity, can be applied to compute the number of swaps, $S(y)$, that would be required by a Bubble Sort to sort $y$. Then the numerator for $\tau$ is computed as:
$$n_c - n_d = n_0 - n_1 - n_2 + n_3 - 2S(y),$$

where $n_3$ is computed like $n_1$ and $n_2$, but with respect to the joint ties in $x$ and $y$.
A Merge Sort partitions the data to be sorted, $y$, into two roughly equal halves, $y_\mathrm{left}$ and $y_\mathrm{right}$, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:
$$S(y) = S(y_\mathrm{left}) + S(y_\mathrm{right}) + M(Y_\mathrm{left}, Y_\mathrm{right}),$$

where $Y_\mathrm{left}$ and $Y_\mathrm{right}$ are the sorted versions of $y_\mathrm{left}$ and $y_\mathrm{right}$, and $M(\cdot,\cdot)$ characterizes the Bubble Sort swap-equivalent for a merge operation. $M(\cdot,\cdot)$ is computed as depicted in the following pseudocode:
```
function M(L[1..n], R[1..m]) is
    i := 1
    j := 1
    nSwaps := 0
    while i ≤ n and j ≤ m do
        if R[j] < L[i] then
            nSwaps := nSwaps + n − i + 1
            j := j + 1
        else
            i := i + 1
    return nSwaps
```

A side effect of the above steps is that you end up with both a sorted version of $x$ and a sorted version of $y$. With these, the factors $t_i$ and $u_j$ used to compute $\tau_B$ are easily obtained in a single linear-time pass through the sorted arrays.
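A compact Python sketch of this merge-based count (the helper name is illustrative; it returns the sorted list together with $S(y)$, from which the numerator follows by the formula above):

```python
def count_swaps(y):
    """Return (sorted y, S(y)), where S(y) is the Bubble Sort swap count."""
    if len(y) <= 1:
        return y, 0
    mid = len(y) // 2
    left, s_left = count_swaps(y[:mid])
    right, s_right = count_swaps(y[mid:])
    merged, i, j, swaps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            # right[j] jumps over all remaining left items; equal values
            # (ties) are not counted as swaps.
            swaps += len(left) - i
            merged.append(right[j])
            j += 1
        else:
            merged.append(left[i])
            i += 1
    merged += left[i:] + right[j:]
    return merged, s_left + s_right + swaps

# y as obtained after sorting the points by x; here a small example.
print(count_swaps([3, 1, 2, 5, 4]))  # ([1, 2, 3, 4, 5], 3)
```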