Recently, numerous nonparametric methods have been proposed for CI testing. Many test statistics have been constructed by embedding distributions in Reproducing Kernel Hilbert Spaces (RKHSs). Fukumizu et al. proposed a measure of CI based on cross-covariance operators. Zhang et al. proposed the KCIT, based on the partial association of functions in some universal RKHS; a major advantage of the KCIT is its known asymptotic distribution, which can be efficiently approximated using Monte Carlo simulations. For CI testing on large-scale datasets, Strobl et al. proposed the RCIT and RCoT, which use random Fourier features to approximate the KCIT efficiently. Doran et al. turned the CI test into a two-sample test by finding a permutation matrix and measuring the Maximum Mean Discrepancy (MMD) between the two distributions; however, its asymptotic distribution under the null hypothesis is unknown, and the bin-based permutation degrades as the dimension of the conditioning variable Z grows. Huang et al. proposed the Kernel Partial Correlation (KPC), a generalization of partial correlation that measures conditional dependence. Beyond kernel-based methods, Runge used a Conditional Mutual Information (CMI) estimator as the test statistic and proposed a k-nearest-neighbor-based permutation scheme to generate samples from the null distribution. Sen et al. reduced the CI test to a binary classification problem, and Shah and Peters proposed the Generalized Covariance Measure (GCM), a regression-based test statistic.
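To make the regression-based approach concrete: the univariate GCM statistic is the normalized sample covariance of the residuals obtained from regressing X on Z and Y on Z, and it is approximately standard normal under the null hypothesis. Below is a minimal sketch, assuming a simple linear regressor for illustration (Shah and Peters allow any sufficiently accurate regression method); the variable names and simulated data are my own assumptions, not from the original papers:

```python
import math
import numpy as np

def linear_regress(z, t):
    # Fit t ~ [1, z] by least squares and return fitted values.
    # Illustrative choice; the GCM works with any consistent regressor.
    Z = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(Z, t, rcond=None)
    return Z @ beta

def gcm_statistic(x, y, z, regress=linear_regress):
    # Univariate GCM: normalized sample covariance of the residuals
    # from regressing X on Z and Y on Z; roughly N(0, 1) under H0.
    r = (x - regress(z, x)) * (y - regress(z, y))
    return math.sqrt(len(r)) * r.mean() / r.std()

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)
x = z + rng.normal(size=n)

y_null = z + rng.normal(size=n)      # X independent of Y given Z
y_alt = z + x + rng.normal(size=n)   # X and Y dependent even given Z

t_null = gcm_statistic(x, y_null, z)  # small in magnitude
t_alt = gcm_statistic(x, y_alt, z)    # large in magnitude
p_null = math.erfc(abs(t_null) / math.sqrt(2))  # two-sided normal p-value
```

Comparing the statistic against standard normal quantiles gives the test decision: here `t_null` stays near zero while `t_alt` is far in the tail, so only the dependent case is rejected.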