
Conducting Exploratory Factor Analysis: A Concise and Practical Method

Johnny T. Amora1,2

1De La Salle-College of Saint Benilde

2Philippine Association of Researchers and Statistical Software Users (PARSSU)

To uncover the underlying factor structure of the scale, the collected data were subjected to exploratory factor analysis. To test the factorability of the scale, the following were examined: inter-item correlations, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, Bartlett’s test of sphericity, and communalities. The inter-item correlation coefficients were examined to ensure that most of them were greater than 0.3 (SPSS, 2000). Subsequently, the KMO values for the individual variables/items and for the set of items as a whole were examined. KMO values vary between 0 and 1, with values closer to 1 being better; this study used the criterion of KMO greater than 0.5 (Field, 2000). Finally, Bartlett’s test of sphericity was examined to ensure that the correlation matrix is not an identity matrix, that is, a matrix in which all diagonal elements are 1 and all off-diagonal elements are 0. Factor analysis of the data is therefore appropriate when Bartlett’s test of sphericity is significant (p < .05).
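To illustrate, the factorability checks above can be scripted. The following is a minimal sketch in Python, assuming the item responses sit in a pandas DataFrame and that the third-party factor_analyzer package is installed; the file name scale_items.csv is a hypothetical placeholder.

# Factorability checks: inter-item correlations, KMO, and Bartlett's test.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

df = pd.read_csv("scale_items.csv")  # hypothetical file; rows = respondents, columns = items

# Inter-item correlations: most coefficients should exceed 0.3.
print(df.corr().round(2))

# KMO for the individual items and for the set as a whole; overall KMO > 0.5 desired.
kmo_per_item, kmo_overall = calculate_kmo(df)
print("Overall KMO:", round(kmo_overall, 3))

# Bartlett's test of sphericity; p < .05 indicates the correlation matrix
# is not an identity matrix, so factor analysis is appropriate.
chi_square, p_value = calculate_bartlett_sphericity(df)
print("Bartlett's chi-square:", round(chi_square, 2), "p-value:", p_value)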

Principal axis factoring with promax rotation was used to uncover the factor structure of the scale items. Principal axis factoring was chosen because it gives good results whether the data are normally distributed or significantly non-normal (Costello & Osborne, 2005). The promax rotation method was utilized because the factors were expected to be correlated. To determine the optimum factor solution, two criteria were applied: (1) the percentage of variance extracted and (2) the interpretability of the factors (Comrey & Lee, 1992). The selection of items to retain in the final scale was based on the rule of thumb of Tabachnick and Fidell (2001), as discussed in Costello and Osborne (2005): a factor loading with an absolute value greater than .32 was considered sufficiently high to indicate a strong relationship between a variable and a factor, while items whose loadings were less than .32 in absolute value were regarded as insignificant and removed from the scale. In addition, items with communalities of less than .40 were excluded from the final scale, and factors with fewer than three items, even items with loadings greater than .32, were dropped from the final version of the scale. With respect to determining the number of factors, only factors with eigenvalues greater than 1.0 were retained.
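Continuing the sketch, the extraction and item-retention rules above might be coded along the following lines. Note the assumption that factor_analyzer’s "principal" extraction method stands in for principal axis factoring; the cutoffs mirror those stated in the text.

# Principal axis factoring with promax rotation, plus the retention rules above.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("scale_items.csv")  # hypothetical file, as in the previous sketch

# Determine the number of factors: eigenvalues of the correlation matrix > 1.0.
eigenvalues = np.linalg.eigvalsh(df.corr().to_numpy())
n_factors = int((eigenvalues > 1.0).sum())

fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
fa.fit(df)

loadings = pd.DataFrame(fa.loadings_, index=df.columns)
communalities = pd.Series(fa.get_communalities(), index=df.columns)

# Keep items with at least one |loading| > .32 and a communality of .40 or more.
keep = (loadings.abs().max(axis=1) > 0.32) & (communalities >= 0.40)
print("Retained items:", list(loadings.index[keep]))

# Factors with fewer than three salient items (|loading| > .32) would be dropped.
print("Salient items per factor:", (loadings.abs() > 0.32).sum(axis=0).tolist())

# Percentage of variance extracted by the factors (cumulative).
_, _, cumulative_variance = fa.get_factor_variance()
print("Cumulative variance explained:", cumulative_variance.round(3))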

After the exploratory factor analysis, the reliability coefficients (Cronbach’s alpha and McDonald’s omega) of the emerging factors were computed. For each factor to be considered reliable, its reliability coefficient should be 0.7 or higher (Fornell & Larcker, 1981; Nunnally, 1978; Nunnally & Bernstein, 1994).
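The reliability step might look like the sketch below. Cronbach’s alpha is computed from its standard variance formula and McDonald’s omega from the one-factor formula based on standardized loadings; the file name and the example loadings are hypothetical placeholders.

# Reliability of one emerging factor: Cronbach's alpha and McDonald's omega.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def mcdonald_omega(loadings) -> float:
    # One-factor omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    # where each uniqueness is 1 - loading^2 for standardized loadings.
    lam = np.asarray(loadings, dtype=float)
    uniqueness = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + uniqueness.sum())

factor_items = pd.read_csv("factor1_items.csv")  # hypothetical file with one factor's items
print("Cronbach's alpha:", round(cronbach_alpha(factor_items), 3))

# Example loadings are hypothetical; in practice, use the loadings from the EFA above.
print("McDonald's omega:", round(mcdonald_omega([0.71, 0.65, 0.80, 0.58]), 3))
# Both coefficients should be 0.7 or higher for the factor to be deemed reliable.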




References


Comrey, A. L., & Lee, H. B. (1992). A First Course in Factor Analysis (2nd ed.). Lawrence Erlbaum Associates.

Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(7), 1-9.

Field, A. P. (2000). Discovering Statistics Using SPSS for Windows: Advanced Techniques for the Beginner. Sage Publications.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.

Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). McGraw-Hill.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill.

Tabachnick, B. G., & Fidell, L. S. (2001). Using Multivariate Statistics (4th ed.). Allyn & Bacon.


