5x2cv paired t test
5x2cv paired t test procedure for comparing the performance of two models
from mlxtend.evaluate import paired_ttest_5x2cv
Overview
The 5x2cv paired t test is a procedure for comparing the performance of two models (classifiers or regressors) that was proposed by Dietterich [1] to address shortcomings of other methods, such as the resampled paired t test (see paired_ttest_resampled) and the k-fold cross-validated paired t test (see paired_ttest_kfold_cv).
To explain how this method works, let's consider two estimators (e.g., classifiers) A and B. Further, we have a labeled dataset D. In the common hold-out method, we typically split the dataset into two parts: a training and a test set. In the 5x2cv paired t test, we repeat this splitting (50% training and 50% test data) 5 times.
In each of the 5 iterations, we fit A and B to the training split and evaluate their performance ($p_A$ and $p_B$) on the test split. Then, we rotate the training and test sets (the training set becomes the test set and vice versa) and compute the performance again, which results in two performance difference measures:

$$p^{(1)} = p^{(1)}_A - p^{(1)}_B$$

and

$$p^{(2)} = p^{(2)}_A - p^{(2)}_B.$$
Then, we estimate the mean and variance of the differences:

$$\overline{p} = \frac{p^{(1)} + p^{(2)}}{2}$$

and

$$s^2 = (p^{(1)} - \overline{p})^2 + (p^{(2)} - \overline{p})^2.$$
The variance of the difference is computed for each of the 5 iterations and then used to compute the t statistic as follows:

$$t = \frac{p_1^{(1)}}{\sqrt{(1/5) \sum_{i=1}^{5} s_i^2}},$$
where $p_1^{(1)}$ is the $p^{(1)}$ from the very first iteration. Under the null hypothesis that models A and B have equal performance, the t statistic is assumed to approximately follow a t distribution with 5 degrees of freedom. Using the t statistic, the p value can be computed and compared with a previously chosen significance level, e.g., $\alpha = 0.05$. If the p value is smaller than $\alpha$, we reject the null hypothesis and accept that there is a significant difference between the two models.
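To make the procedure above concrete, here is a minimal sketch that computes the 5x2cv t statistic and its two-tailed p value by hand. It is not mlxtend's implementation; it assumes scikit-learn's train_test_split for the 50/50 splits, scipy.stats for the t distribution, and accuracy (via each estimator's score method) as the performance measure, and the helper name paired_ttest_5x2cv_manual is made up for illustration:

import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split

def paired_ttest_5x2cv_manual(model_a, model_b, X, y, random_seed=1):
    rng = np.random.RandomState(random_seed)
    variances = []
    p1_first_iteration = None
    for i in range(5):
        # one 50/50 split per iteration
        X_1, X_2, y_1, y_2 = train_test_split(
            X, y, test_size=0.5,
            random_state=rng.randint(np.iinfo(np.int32).max))
        diffs = []
        for X_train, y_train, X_test, y_test in (
                (X_1, y_1, X_2, y_2),    # first fold
                (X_2, y_2, X_1, y_1)):   # second fold: train/test roles swapped
            p_a = model_a.fit(X_train, y_train).score(X_test, y_test)
            p_b = model_b.fit(X_train, y_train).score(X_test, y_test)
            diffs.append(p_a - p_b)      # p^{(j)} = p_A^{(j)} - p_B^{(j)}
        p1, p2 = diffs
        p_bar = (p1 + p2) / 2.0
        s2 = (p1 - p_bar) ** 2 + (p2 - p_bar) ** 2   # variance of the differences
        variances.append(s2)
        if p1_first_iteration is None:
            p1_first_iteration = p1      # p_1^{(1)} from the very first iteration
    t_stat = p1_first_iteration / np.sqrt(np.mean(variances))
    p_value = 2.0 * stats.t.sf(np.abs(t_stat), df=5)  # two-tailed, 5 degrees of freedom
    return t_stat, p_value

The exact numbers will differ from mlxtend's paired_ttest_5x2cv because the random splits are generated differently, but the t statistic and p value follow the formulas above.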
References
- [1] Dietterich TG (1998) Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Comput 10:1895–1923.
Example 1 - 5x2cv paired t test
Assume we want to compare two classification algorithms, logistic regression and a decision tree algorithm:
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from mlxtend.data import iris_data
from sklearn.model_selection import train_test_split
X, y = iris_data()
clf1 = LogisticRegression(random_state=1)
clf2 = DecisionTreeClassifier(random_state=1)
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.25,
                     random_state=123)
score1 = clf1.fit(X_train, y_train).score(X_test, y_test)
score2 = clf2.fit(X_train, y_train).score(X_test, y_test)
print('Logistic regression accuracy: %.2f%%' % (score1*100))
print('Decision tree accuracy: %.2f%%' % (score2*100))
Logistic regression accuracy: 97.37%
Decision tree accuracy: 94.74%
Note that these accuracy values are not used in the paired t test procedure, since new test/train splits are generated during the resampling procedure; the values above are only serving the purpose of intuition.
Now, let's assume a significance threshold of $\alpha = 0.05$ for rejecting the null hypothesis that both algorithms perform equally well on the dataset and conduct the 5x2cv t test:
from mlxtend.evaluate import paired_ttest_5x2cv
t, p = paired_ttest_5x2cv(estimator1=clf1,
                          estimator2=clf2,
                          X=X, y=y,
                          random_seed=1)
print('t statistic: %.3f' % t)
print('p value: %.3f' % p)
t statistic: -1.539
p value: 0.184
Since $p > \alpha$, we cannot reject the null hypothesis and may conclude that the performance of the two algorithms is not significantly different.
Although it is generally not recommended to apply statistical tests multiple times without correction for multiple hypothesis testing, let's take a look at an example where the decision tree algorithm is limited to producing a very simple decision boundary, which would result in a relatively poor performance:
clf2 = DecisionTreeClassifier(random_state=1, max_depth=1)
score2 = clf2.fit(X_train, y_train).score(X_test, y_test)
print('Decision tree accuracy: %.2f%%' % (score2*100))
t, p = paired_ttest_5x2cv(estimator1=clf1,
                          estimator2=clf2,
                          X=X, y=y,
                          random_seed=1)
print('t statistic: %.3f' % t)
print('p value: %.3f' % p)
Decision tree accuracy: 63.16%
t statistic: 5.386
p value: 0.003
Assuming that we conducted this test with a significance level of $\alpha = 0.05$, we can reject the null hypothesis that both models perform equally well on this dataset, since the p value ($p \approx 0.003$) is smaller than $\alpha$.
API
paired_ttest_5x2cv(estimator1, estimator2, X, y, scoring=None, random_seed=None)
Implements the 5x2cv paired t test proposed by Dietterich (1998) to compare the performance of two models.
Parameters
-  estimator1: scikit-learn classifier or regressor
-  estimator2: scikit-learn classifier or regressor
-  X: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.
-  y: array-like, shape = [n_samples]. Target values.
-  scoring: str, callable, or None (default: None). If None (default), uses 'accuracy' for sklearn classifiers and 'r2' for sklearn regressors. If str, uses a sklearn scoring metric string identifier, for example {accuracy, f1, precision, recall, roc_auc} for classifiers, {'mean_absolute_error', 'mean_squared_error'/'neg_mean_squared_error', 'median_absolute_error', 'r2'} for regressors. If a callable object or function is provided, it has to conform with sklearn's signature scorer(estimator, X, y); see http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html for more information (a brief illustration follows the Examples section below).
-  random_seed: int or None (default: None). Random seed for creating the test/train splits.
Returns
-  t: float. The t-statistic.
-  pvalue: float. Two-tailed p-value. If the chosen significance level is larger than the p-value, we reject the null hypothesis and accept that there are significant differences between the two compared models.
Examples
For usage examples, please see http://rasbt.github.io/mlxtend/user_guide/evaluate/paired_ttest_5x2cv/
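As a brief illustration of the scoring parameter described in the parameter list above, the following hypothetical snippet (reusing clf1, clf2, X, and y from Example 1 and assuming sklearn's 'f1_macro' metric is a suitable choice for the multiclass iris data) passes either a string identifier or a callable built with sklearn's make_scorer:

from sklearn.metrics import f1_score, make_scorer
from mlxtend.evaluate import paired_ttest_5x2cv

# scoring as a sklearn metric string identifier
t, p = paired_ttest_5x2cv(estimator1=clf1,
                          estimator2=clf2,
                          X=X, y=y,
                          scoring='f1_macro',
                          random_seed=1)

# scoring as a callable with the scorer(estimator, X, y) signature
macro_f1 = make_scorer(f1_score, average='macro')
t, p = paired_ttest_5x2cv(estimator1=clf1,
                          estimator2=clf2,
                          X=X, y=y,
                          scoring=macro_f1,
                          random_seed=1)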
Reference
@online{Raschka2021Sep,
 author = {Raschka, S.},
 title = {{5x2cv paired t test - mlxtend}},
 year = {2021},
 month = {9},
 date = {2021-09-03},
 urldate = {2022-03-10},
 language = {english},
 hyphenation = {english},
 note = {[Online; accessed 10. Mar. 2022]},
 url = {http://rasbt.github.io/mlxtend/user_guide/evaluate/paired_ttest_5x2cv},
 abstract = {{A library consisting of useful tools and extensions for the day-to-day data science tasks.}}
 }