
Methodology 

Study Setting and Participants

This action research pilot study was conducted virtually and gathered data from four participants attending a small, private, liberal arts university in the Midwest. Two participants were enrolled in the Doctor of Education program and were practicing educators with more than ten years of teaching experience each. The remaining two participants were enrolled in the preservice Elementary Education program. All four participants had taken classes in the College of Education at the same university, identified as female, and shared an educational background. They were diverse in age, ranging from 20 to 50 years old, and varied in teaching experience. Data were gathered in two steps: participants first completed and returned a 41-question survey (see Appendix A), and then each participant discussed her survey responses with the researcher in an interview conducted via Zoom.


Instrument

To measure self-efficacy, participants completed a 41-question survey replicated, with permission, from Siwatu's (2007) study, Preservice teachers' culturally responsive teaching self-efficacy and outcome expectancy beliefs. This instrument, the Culturally Responsive Teaching Self-Efficacy (CRTSE) Scale, asked participants to rate their confidence in their ability to successfully accomplish tasks related to culturally responsive teaching (CRT). Items were rated on a scale from 0 (no confidence at all) to 100 (completely confident).


Data Sources

This pilot study used two data sources: the Culturally Responsive Teaching Self-Efficacy Scale (Siwatu, 2007) and a subsequent interview. The use of multiple data sources allowed the findings to be triangulated, strengthening the reliability and validity of this pilot study.


Data Collection Procedures

Participants were recruited through a request to the Education Department Chair at a small, liberal arts university. Each participant received and signed an informed consent form, and the study was approved by the university's Institutional Review Board. Following completion of the survey, I conducted an open-ended, semi-structured interview with each of the four participants to answer the research question and further discuss the survey results. Two of the participants self-rated a score of 70 or below on more than five items. One participant's lowest self-rating was an 80, which she gave on four items. The final participant self-rated a 60 on only one item; every other item was rated 80 or above. Because of these differences, each interview consisted of different questions and conversation prompts, which made coding for themes unreliable. Instead, I focused on two emergent themes that all four participants discussed without prompting from me: virtual learning and relationships.


Analytical Strategies for Data Analysis

Quantitative data were analyzed by comparing the results of the preservice educators' self-efficacy surveys with those of the current educators. These scores were used to determine which group (preservice or current educators) had the lowest confidence and self-efficacy scores in CRT. The data show that the participant who gave herself the lowest scores was a preservice educator; she rated five items at 30 or below and six additional items at 50 or below. The participant with the next highest number of low scores was a current educator, who self-rated nine items at 70 or below. The other current educator self-rated four items at 80 or below. The last participant, a preservice educator, self-rated only one item at 60; her remaining scores were 80 or higher, with 14 items rated 100 (completely confident). These results were used to determine which items on the survey would be discussed during each interview: if a participant did not have a significant number (five or more) of scores below 80, the interview was guided by questions about the items she had rated lower than the others.
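To make the threshold logic above concrete, the following is a minimal sketch in Python of how items rated at or below a cutoff could be tallied per participant, and how items rated below 80 could be flagged for interview discussion. The function names and example ratings are entirely hypothetical and do not represent the actual data or analysis procedure.

```python
# A minimal sketch (not the study's actual analysis) of the threshold
# counts described above. All names and ratings here are hypothetical.
from typing import Dict, List

def tally_low_scores(ratings: List[int], cutoff: int) -> int:
    """Count survey items rated at or below the given cutoff."""
    return sum(1 for r in ratings if r <= cutoff)

def items_below(ratings: List[int], cutoff: int) -> List[int]:
    """Return 1-based item numbers rated below the cutoff."""
    return [i for i, r in enumerate(ratings, start=1) if r < cutoff]

# Illustrative 41-item ratings on the 0-100 confidence scale.
participants: Dict[str, List[int]] = {
    "preservice_1": [30, 20, 50, 45, 90] + [85] * 36,
    "current_1": [70, 65, 60, 70, 70, 55, 70, 70, 70] + [90] * 32,
}

for name, ratings in participants.items():
    n_low = tally_low_scores(ratings, cutoff=70)
    discuss = items_below(ratings, cutoff=80)
    print(f"{name}: {n_low} items at or below 70; "
          f"items below 80 to raise in the interview: {discuss}")
```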

Qualitative data were analyzed through a comparative method. I attempted to code the interviews for themes, but because the interviews contained different questions and conversational prompts, themes could not be reliably determined. However, all four participants naturally discussed two topics, without any prompting from me: relationships and virtual learning. These data were used to support the quantitative results.
