I continue here with chi-square and related technical details found in different parts of SPSS's menus.
======================
2. Correlations (Ordinal/Numeric Variables)
- Purpose: Measures the strength and direction of association between ordinal or continuous variables.
- Key Statistics:
- Pearson’s r: For linear relationships between continuous variables.
- Spearman’s rho: For monotonic relationships between ordinal variables.
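If you want a quick way to sanity-check these two coefficients outside SPSS, here is a minimal Python sketch using scipy; the data are made up and the names x and y are only placeholders:

```python
# Minimal sketch: Pearson's r and Spearman's rho with scipy
# (x and y are hypothetical continuous/ordinal score vectors).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=100)                        # e.g., a continuous test score
y = 0.5 * x + rng.normal(scale=0.8, size=100)   # a correlated second measure

r, p_r = stats.pearsonr(x, y)       # linear association
rho, p_rho = stats.spearmanr(x, y)  # monotonic (rank-based) association

print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```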
3. Nominal Data Measures (For Categorical Variables)
- Purpose: Assesses association between nominal (unordered) categories.
- Key Statistics:
- Contingency Coefficient: Adjusts chi-square for table size (range: 0 to <1).
- Phi & Cramer’s V:
- Phi: For 2×2 tables (range: 0–1).
- Cramer’s V: For larger tables (range: 0–1; values >0.3 often indicate meaningful association).
- Lambda: Proportional reduction in error when predicting one variable from another (asymmetric/symmetric versions).
- Uncertainty Coefficient: Measures the reduction in uncertainty when predicting one variable using another.
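To make the chi-square-based measures concrete, here is a minimal Python sketch that runs a chi-square test of independence on a hypothetical 2×3 table and computes Cramér's V by hand from its definition; the table values are invented purely for illustration:

```python
# Minimal sketch: chi-square test of independence plus Cramer's V,
# computed from a hypothetical 2x3 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 45, 25],
                  [20, 35, 45]])          # rows: group, columns: category

chi2, p, dof, expected = chi2_contingency(table)

n = table.sum()
k = min(table.shape) - 1                  # min(rows, cols) - 1
cramers_v = np.sqrt(chi2 / (n * k))       # 0 = no association, 1 = perfect

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
```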
📚 PREET Center presents:
🔴 A Practical Course on Quantitative Research Designs with SPSS (Module 1)
🔹 Instructor: Dr. Hessameddin Ghanbar
🔹 Duration: Six weeks (six sessions)
🔹 Schedule: Tuesdays: 16:30-18:00
🔹 The workshops will be held on the Google Meet platform, and support will be provided to participants.
📣 Registration has started. (Limited capacity)
▪️ For more information and registration:
Telegram: @preetadmin
WhatsApp & SMS: +989359289774
▪️For each participant in the workshop, a digital certificate signed by Dr. Minoo Alemi, Dr. Zia Tajeddin, and Dr. Hessameddin Ghanbar will be issued.
▪️Join our Telegram Channel: @preetcenter
Module 1
🔵 Part I: Data Structure Issues and Design Technicalities
Objective: Equip students with foundational knowledge to prepare and structure data for valid and reliable mean-difference analysis in SPSS.
1. Introduction to Mean-Difference Techniques
▪️Overview of t-tests and ANOVA
▪️When and why to use mean-comparison methods
2. Research Design Essentials
▪️Independent vs. repeated measures designs
▪️Between-group vs. within-subject variables
▪️Randomization, control, and matching
3. Variables and Measurement
▪️Levels of measurement (nominal, ordinal, interval, ratio)
▪️Dependent and independent variable specification
4. Data Structure in SPSS
▪️Setting up the variable view and data view
▪️Labeling variables and values correctly
▪️Coding group membership (dummy coding, grouping variables)
5. Data Cleaning and Assumption Checking
▪️Handling missing data
▪️Testing for normality and outliers
▪️Homogeneity of variance and sphericity
🔵 Part II: Application of Mean-Difference Techniques in SPSS
Objective: Develop hands-on skills to conduct, interpret, and report t-tests and ANOVAs using SPSS.
6. Independent Samples t-test
▪️Use cases, assumptions
▪️Running the test in SPSS
▪️Interpreting SPSS output
7. Paired Samples t-test
▪️Use with repeated measures
▪️Applications and SPSS procedure
▪️Reporting effect sizes
8. One-Way ANOVA
▪️Assumptions and post-hoc tests
▪️Conducting and interpreting ANOVA in SPSS
9. Repeated Measures ANOVA
▪️Structure and assumptions
▪️Sphericity and corrections (e.g., Greenhouse-Geisser)
10. Factorial ANOVA (Two-way ANOVA)
▪️Interaction effects
▪️Main vs. interaction effects interpretation
▪️Visualizing interaction plots
11. Reporting Results
▪️APA-style reporting of SPSS outputs
▪️Tables, figures, and interpretation
12. Common Mistakes
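To give a flavour of what Sessions 5 and 6 (plus the effect-size reporting from Session 7) put into practice, here is a minimal Python sketch on made-up group scores: Levene's test for homogeneity of variance, the independent samples t-test itself, and Cohen's d. In the course we do the same thing through the SPSS menus; the group names below are hypothetical.

```python
# Minimal sketch: homogeneity-of-variance check (Levene) followed by an
# independent samples t-test and Cohen's d, on two hypothetical groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=70, scale=10, size=40)   # e.g., control group scores
group_b = rng.normal(loc=75, scale=12, size=42)   # e.g., treatment group scores

lev_stat, lev_p = stats.levene(group_a, group_b)  # H0: equal variances
equal_var = lev_p > .05                           # crude rule of thumb

t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)

# Cohen's d from the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"Levene p = {lev_p:.3f}, t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```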
🛑How can I boost the KMO value in the EFA of a questionnaire? Here are eight strategies:
(Note: throughout, variables = questionnaire items.)
1. Remove Variables with Low Individual KMO Values
Use the Anti-Image Correlation Matrix to identify variables with individual KMO values below 0.5.
2. Examine the Correlation Matrix
KMO depends on partial correlations being low and zero-order correlations being high. If many correlations are low (e.g., below 0.3), the variables may not be suitable for EFA, so remove weakly correlated or uncorrelated variables.
3. Combine or Collapse Similar Variables
If two or more items measure the same thing and are highly correlated (r > 0.9), consider combining them or removing the redundancy. This can reduce multicollinearity, which negatively affects KMO.
4. Increase Sample Size
A larger sample size improves the stability of correlation estimates, which may help raise KMO. A general rule: at least 5–10 participants per variable, but more is better. (I see that many researchers do not pay attention to this simple rule😊)
5. Conduct Item Analysis or Reliability Tests
Use Cronbach's alpha or corrected item-total correlations to detect poorly performing items. Removing items with low item-total correlations may improve KMO.
6. Use Principal Axis Factoring Instead of Principal Component Analysis (but I always go for PAF😊)
While both are used in EFA, Principal Axis Factoring (PAF) works better with lower communalities and might provide better estimates for factorability.
7. Consider Transforming Variables
If items are highly skewed or non-normally distributed, transformation (e.g., log or square root) might reduce noise and improve correlations, potentially improving KMO.
8. Theoretically Reconsider Item Set
Review whether all items conceptually belong together and reflect potentially common underlying factors.
Remove or revise items that are theoretically inconsistent.
🛑And from the word go, you need to determine whether your factors are formative or reflective; many researchers easily overlook this super important distinction 😊
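To make strategy 1 concrete, here is a minimal Python sketch that computes the overall KMO and the per-item KMO values directly from the correlation matrix and the anti-image (partial) correlations, so you can flag items below 0.5. The data here are random placeholders; with real questionnaire data you would pass your own respondents-by-items matrix.

```python
# Minimal sketch: overall and per-item KMO from the correlation matrix and
# the anti-image (partial) correlations. `data` is a hypothetical
# respondents-by-items array; items with KMO_j < .5 are flagged (strategy 1).
import numpy as np

def kmo(data):
    R = np.corrcoef(data, rowvar=False)         # zero-order correlations
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    P = -R_inv / d                              # partial (anti-image) correlations
    np.fill_diagonal(R, 0)                      # keep off-diagonal terms only
    np.fill_diagonal(P, 0)
    r2, p2 = R ** 2, P ** 2
    kmo_per_item = r2.sum(axis=0) / (r2.sum(axis=0) + p2.sum(axis=0))
    kmo_overall = r2.sum() / (r2.sum() + p2.sum())
    return kmo_overall, kmo_per_item

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))                # 200 respondents, 8 items
overall, per_item = kmo(data)
print(f"Overall KMO = {overall:.2f}")
print("Items with KMO < .5:", np.where(per_item < .5)[0])
```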
Here I explain the different types of contrasts in repeated measures ANOVA. These are available in both SPSS and R.
1. Polynomial Contrasts
Purpose: Test for linear, quadratic, cubic, etc., trends in the means across the repeated measures.
Use Case: When the levels of the within-subject factor are ordered (e.g., time points or doses).
Example: If you measure performance at 3 time points (T1, T2, T3), a linear contrast tests if the mean increases or decreases steadily, while a quadratic contrast tests for a curve (e.g., increase then decrease).
×××××××××××××××××××××××××
2. Deviation Contrasts
Purpose: Compare each level of the within-subject factor to the overall mean of all levels.
Use Case: When you want to know whether a particular condition differs significantly from the average of all conditions.
Example: Compare each teaching method (A, B, C) to the average effect across all three methods.
×××××××××××××××××××××××××
3. Simple Contrasts
Purpose: Compare each level of the within-subject factor to a reference level (usually the first or last).
Use Case: When you have a control or baseline condition and want to compare each other condition to it.
Example: Compare scores at Week 2 and Week 3 to Week 1 (baseline).
××××××××××××××××××××××××××
4. Repeated Contrasts
Purpose: Compare each level of the within-subject factor to the previous level.
Use Case: To test sequential changes (e.g., from one time point to the next).
Example: Compare performance at Week 2 to Week 1, then Week 3 to Week 2.
××××××××××××××××××××××××××
5. Helmert Contrasts
Purpose: Compare each level of the factor to the mean of subsequent levels.
Use Case: To examine whether early levels differ from the average of what comes next.
Example: Compare Week 1 to the average of Week 2 and Week 3; then compare Week 2 to Week 3.
×××××××××××××××××××××××××
6. Difference Contrasts
Purpose: Compare each level to the mean of preceding levels.
Use Case: The reverse of Helmert; used when later levels are to be compared to previous ones.
Example: Compare Week 3 to the average of Week 1 and Week 2.
××××××××××××××××××××××××××
Your choice of contrast should be based on your research questions and goals, as I have summarised below (a short code sketch follows the summary):
For trends across time → Polynomial or Repeated
To compare to a control → Simple
To compare sequentially → Repeated
To compare each to average → Deviation
To test theoretical contrasts → Use Custom Contrasts (manually specify contrast weights)
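As a rough illustration of what these contrasts do under the hood, here is a minimal Python sketch on made-up data from three time points: each contrast is just a set of weights applied to every participant's scores, and the resulting contrast scores are tested against zero with a one-sample t-test. In SPSS, the GLM Repeated Measures dialog builds and tests such weights for you; the weights below follow the definitions above, but the data are hypothetical, and the numbers will not match a particular SPSS table unless the same error terms and corrections are used.

```python
# Minimal sketch: applying within-subject contrast weights by hand and
# testing each contrast with a one-sample t-test against zero.
# `scores` is a hypothetical participants-by-timepoints matrix (T1, T2, T3).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scores = rng.normal(loc=[50, 55, 57], scale=8, size=(30, 3))

contrasts = {
    "linear (polynomial)":    np.array([-1, 0, 1]),      # steady trend across T1-T3
    "quadratic (polynomial)": np.array([1, -2, 1]),      # rise-then-fall curvature
    "simple (T2 vs. T1)":     np.array([-1, 1, 0]),      # vs. baseline T1
    "repeated (T3 vs. T2)":   np.array([0, -1, 1]),      # adjacent time points
    "Helmert (T1 vs. later)": np.array([1, -0.5, -0.5]), # T1 vs. mean of T2, T3
}

for name, w in contrasts.items():
    c = scores @ w                     # one contrast score per participant
    t, p = stats.ttest_1samp(c, 0)     # H0: mean contrast score is zero
    print(f"{name:24s} t = {t:5.2f}, p = {p:.3f}")
```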
✅What is the Type III Sum of Squares (SS) that appears in the output tables of many tests?
In SPSS, Type III Sum of Squares (SS) refers to a method used in ANOVA (Analysis of Variance) or general linear models (GLM) to partition the variability in the dependent variable among the different effects (main effects and interactions) in your model.
💡In ANOVA or regression, the total sum of squares represents the total variation in the data. It is partitioned into:
1-Model sum of squares (variation explained by the model),
2-Error sum of squares (residuals, unexplained variation),
3-And sometimes, further divided among the individual factors and their interactions.
✅ What does it do, and why is it important to choose it?
Type III Sum of Squares tests each effect after accounting for all other effects in the model. This means:
🛑It evaluates the unique contribution of each factor or interaction, while holding all other variables constant, and it assumes the model is balanced or uses least squares adjustment for imbalance. This is why we cannot tolerate extremely unbalanced designs when using ANOVA or other related multivariate counterparts like MANOVA.
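For comparison outside SPSS, here is a minimal sketch of a two-way ANOVA with Type III sums of squares in Python's statsmodels. Note that, as far as I know, statsmodels needs sum-to-zero (effect) coding for Type III results to be meaningful, which is what the C(..., Sum) terms request; the data frame and its column names are invented for illustration. The sum_sq column of the printed table should correspond to what SPSS labels Type III Sum of Squares.

```python
# Minimal sketch: two-way ANOVA with Type III sums of squares in statsmodels.
# Sum-to-zero (effect) coding is used so that Type III tests are meaningful.
# `df` is a hypothetical data frame with columns score, group, and gender.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "score":  rng.normal(70, 10, 120),
    "group":  rng.choice(["A", "B", "C"], 120),
    "gender": rng.choice(["F", "M"], 120),
})

model = smf.ols("score ~ C(group, Sum) * C(gender, Sum)", data=df).fit()
print(anova_lm(model, typ=3))   # sum_sq column holds the Type III SS
```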
In SPSS, the Bivariate Correlations menu offers three main types of correlation tests to measure the strength and direction of association between two variables: Pearson, Kendall’s tau-b, and Spearman. Each has different assumptions and suits different types of data and research situations.
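For reference, the Kendall statistic that SPSS reports here is tau-b, which (to the best of my knowledge) is also the default variant computed by scipy's kendalltau; a tiny sketch with two hypothetical ordinal ratings:

```python
# Minimal sketch: Kendall's tau-b for two hypothetical ordinal ratings
# (scipy's kendalltau uses the tau-b variant, which adjusts for ties).
from scipy.stats import kendalltau

rater_1 = [1, 2, 2, 3, 4, 4, 5, 5, 3, 2]
rater_2 = [1, 1, 2, 3, 3, 4, 5, 4, 3, 2]

tau, p = kendalltau(rater_1, rater_2)
print(f"Kendall's tau-b = {tau:.2f} (p = {p:.3f})")
```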
Forwarded from Hessameddin Ghanbar
One of the most insightful, thoughtful, and thought-provoking projects I know of in the entire field. So glad to see this outstanding volume, edited by Lee McCallum, Dara Tafazoli, and Ali H. Al-Hoorie, out and 100% open access.
Research Cultures in Applied Linguistics and TESOL
Lee McCallum, Dara Tafazoli, & Ali H. Al-Hoorie (Eds.)
2025
(PDFs of individual chapters can be downloaded using the links below.)
https://drive.google.com/file/d/1_K3Xtp-BjYRiVzPI4j_4hqNBKHsUnLRB/view
Forwarded from Hessameddin Ghanbar
Dear All
The new issue of Applied Pragmatics has been published.
Applied Pragmatics
Volume 7, Issue 1 (2025)
Frequency and listener perceptions of the nursery we in instructor-student interactions
Elizabeth Hanks | pp. 1–28
Does grammatical aspect convey pragmatic meaning? The case of believe in the progressive aspect
Marianna Gracheva | pp. 29–55
Ontological realism as a validity criterion in second-language strategic competence assessment
Stephen P O’Connell & Steven J Ross | pp. 56–81
Pragmatic research in English as an Asian lingua franca: A systematic review
Xianming Fang | pp. 82–108
Editors
Zia Tajeddin
Naoko Taguchi
Associate Editors
Minoo Alemi
Anne Barron
Zohreh R. Eslami
Thi Thuy Minh Nguyen
benjamins.com/catalog/ap.7.1
📚 PREET Center presents:
🔴 A Practical Course on Quantitative Research Designs with SPSS (Module 1)
🔹 Instructor: Dr. Hessameddin Ghanbar
🔹 Duration: Six weeks (six sessions)
🔹 Schedule: Tuesdays: 16:30-18:00
🔹 The workshops will be held on the Google Meet platform, and support will be provided to participants.
📣 Registration has started. (Limited capacity)
▪️ For more information and registration:
Telegram: @preetadmin
WhatsApp & SMS: +989359289774
▪️For each participant in the workshop, a digital certificate signed by Dr. Minoo Alemi, Dr. Zia Tajeddin, and Dr. Hessameddin Ghanbar will be issued.
▪️Join our Telegram Channel: @preetcenter
Dear Colleagues,
Greetings. You might be interested in this monograph, the idea of which came to my mind four years ago after being enlightened by Yin et al.'s (2019) study, "The Dispositions towards Loving Pedagogy (DTLP) scale: Instrument development and demographic analysis."
The new scale development and validation might be suitable for EFL/ESL contexts.
Derakhshan, A. (2025). _Loving pedagogy in second and foreign language education: Underlying components, measurement, and ecological systems_. Springer.
https://link.springer.com/book/9783031983849
This monograph was supported by my national project in China.
Best,
Ali Derakhshan
Dear Colleagues,
We are delighted to share that our new book, "Key Concepts in Syllabus Design and Materials Development," has just been published!
A heartfelt thanks goes to my esteemed co-author, Professor Zia Tajeddin, for his exceptional collaboration, dedication, and hard work throughout this rewarding journey.
📖 Explore our book here:
👉https://www.cambridgescholars.com/product/978-1-0364-4158-6
We hope scholars, educators, and practitioners find it insightful and enriching.