http://www.jerrydallal.com/LHSP/dof.htm
http://stats.stackexchange.com/questions/6350/anova-assumption-normality-normal-distribution-of-residuals
https://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics

# Friedman rank-sum test (non-parametric counterpart to two-way ANOVA tests)

(from http://www.real-statistics.com/anova-repeated-measures/friedman-test/)

The Friedman test is a non-parametric alternative to ANOVA with repeated measures; no normality assumption is required. The test is similar to the Kruskal-Wallis test. The null hypothesis is that the sums of the ranks of the groups are the same. For n subjects (blocks) and k groups, the test statistic is

Q = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1),

where R_j is the sum of the ranks in group j; under the null hypothesis Q is approximately chi-square distributed with k - 1 degrees of freedom.

Example 1: A winery wanted to find out whether people preferred red, white or rosé wines. They invited 12 people to taste one red, one white and one rosé wine, with the order of tasting chosen at random and a suitable interval between tastings. Each person was asked to evaluate each wine, with the scores tabulated in the table on the left side of Figure 1. The ranks of the scores for each person were then calculated, and the Friedman statistic Q was found to be 1.79 using the formula above. Since p-value = CHITEST(1.79, 2) = 0.408 > .05 = α, we conclude there is no significant difference between the three types of wines. (See the R sketch below.)

## Friedman test vs others

Friedman's test would be used when you have 33 participants who all experienced all three conditions; in that case the sample sizes are all the same, namely 33. If you are instead looking at three independent groups, where the participants in each group experience one and only one condition, then you want a fixed-factor ANOVA, Kruskal-Wallis, or some other similar test.

TITLE: Chapter 20

# t-Test Design

(refer to https://en.wikipedia.org/wiki/Statistical_power#Example)

From R in a Nutshell: power.t.test calculates whichever of n, delta, sig.level, sd, or power is left unspecified, so you supply four of these five parameters and the function solves for the remaining one (see the sketch below).

Now suppose that the alternative hypothesis is true and \mathrm{E}[D]=\tau. Then the power is

\begin{array}{ccl}
\pi(\tau) &=& P\left(\sqrt{n}\,\bar{D}/\hat{\sigma}_D > 1.64 \mid \mathrm{E}[D]=\tau\right) \\
&=& P\left(\sqrt{n}\,(\bar{D}-\tau+\tau)/\hat{\sigma}_D > 1.64 \mid \mathrm{E}[D]=\tau\right) \\
&=& P\left(\sqrt{n}\,(\bar{D}-\tau)/\hat{\sigma}_D > 1.64 - \sqrt{n}\,\tau/\hat{\sigma}_D \mid \mathrm{E}[D]=\tau\right).
\end{array}

Indeed, the formula involves five quantities: n, the true mean difference \tau (estimated by \bar{D}), \hat{\sigma}_D, the critical value 1.64 (set by the significance level), and the resulting power \pi(\tau). These correspond to the five parameters n, delta, sd, sig.level, and power of power.t.test.

# Revision from first file

prop.test part: see the following file for an excellent explanation: http://math.arizona.edu/~ghystad/chapter13.pdf

## Matched pairs t-Test

(http://math.arizona.edu/~ghystad/chapter11.pdf)

For example, X could be the old treatment given to a group of patients and Y could be the new treatment given to the same group of patients. Another example is Student's seed experiment: normal seeds vs. dried seeds grown as matched pairs.

## Matched pairs vs Two-Sample t Procedures (approximated by Welch)

Note: the matched-pairs test provides a more powerful test when its conditions are satisfied, because pairing removes the between-subject variability (see the comparison sketch below).
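Here is a minimal R sketch of the Friedman test for the wine-tasting example above. The scores are hypothetical stand-ins (the actual values from Figure 1 are not reproduced in these notes), so the statistic and p-value will not match Q = 1.79 and 0.408; with the real scores the same call should reproduce them, up to the tie correction that friedman.test applies.

```r
# Hypothetical scores: 12 tasters (rows = blocks) each rate red, white and
# rosé wine (columns = groups). These values are made up for illustration.
scores <- matrix(c(7, 6, 5,
                   8, 7, 7,
                   6, 5, 6,
                   9, 8, 7,
                   5, 6, 4,
                   7, 7, 6,
                   8, 6, 7,
                   6, 7, 5,
                   7, 5, 6,
                   8, 8, 6,
                   6, 6, 7,
                   7, 6, 5),
                 nrow = 12, byrow = TRUE,
                 dimnames = list(paste0("taster", 1:12),
                                 c("red", "white", "rose")))

# friedman.test ranks the wines within each taster, compares the rank sums,
# and refers the statistic to a chi-square distribution with k - 1 = 2 df.
friedman.test(scores)
```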
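A minimal sketch of power.t.test for the t-test design discussion above, assuming a paired one-sided design; delta = 0.5, sd = 1, and the 80% power target are illustrative values, not figures from the text. Whichever of the five parameters is left as NULL is the one the function solves for.

```r
# Solve for n: how many pairs are needed to detect a mean difference of 0.5
# (sd of the differences = 1) with 80% power at the one-sided 5% level?
# n is NULL by default, so it is the parameter being computed.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "paired", alternative = "one.sided")

# Solve for power instead: fix n = 30 pairs and leave power unspecified.
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "paired", alternative = "one.sided")
```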
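The last line of the power derivation can be coded directly: given \mathrm{E}[D]=\tau, the statistic \sqrt{n}(\bar{D}-\tau)/\hat{\sigma}_D is approximately standard normal, so \pi(\tau) \approx \Phi(\sqrt{n}\,\tau/\hat{\sigma}_D - 1.64). The helper below (approx_power is just an illustrative name) uses the same hypothetical \tau = 0.5 and \sigma = 1 as the sketch above.

```r
# Normal-approximation power for the one-sided test at critical value 1.64:
#   pi(tau) = P(Z > 1.64 - sqrt(n) * tau / sigma)
approx_power <- function(n, tau, sigma) {
  pnorm(sqrt(n) * tau / sigma - 1.64)
}

approx_power(n = 30, tau = 0.5, sigma = 1)

# Cross-check against the exact t-based calculation; the t answer is a bit
# lower because the t distribution has heavier tails than the normal.
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "paired", alternative = "one.sided")$power
```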
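Finally, a sketch contrasting the matched-pairs t-test with the Welch two-sample procedure, using simulated before/after data on the same patients (the old/new treatment labels follow the example above; all numbers are made up). Because the pairing removes the shared patient effect, the paired test typically gives a narrower confidence interval than the Welch test on the same data.

```r
set.seed(1)

# Simulated paired data: the same 20 patients measured under the old and the
# new treatment. A common patient effect makes the two measurements correlated.
patient_effect <- rnorm(20, mean = 0, sd = 2)
old <- 10.0 + patient_effect + rnorm(20, sd = 1)
new <- 10.8 + patient_effect + rnorm(20, sd = 1)

# Matched-pairs t-test: works on the within-patient differences new - old,
# so the patient-to-patient variability cancels out.
t.test(new, old, paired = TRUE)

# Welch two-sample t-test: ignores the pairing and must work against the full
# between-patient variability, so it is less powerful here.
t.test(new, old, var.equal = FALSE)
```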