Team Read’s program evaluation, completed by the independent evaluator Margo Jones, took an overall look at Team Read based on statistical analysis conducted in two phases. These phases evaluated the two scopes of Team Read: the readers, and the coaches and mentors. I, Steven Wayock, will critique this program evaluation based on the information at hand and will offer Team Read my insight into what the evaluation did properly and what it lacked, based on my expertise. Team Read is a Seattle School District cross-age tutoring program with two direct goals and functions, which were evaluated for success just a year ago.
The first goal and function is to improve the reading skills of the students who participate in the Team Read program. These are second- through fifth-grade students in the Seattle School District who have been identified as deficient in their classroom performance or in their reading skills in general. The second function of Team Read focuses on the coaches and mentors who tutor the second through fifth graders to improve their reading skills.
These Team Read coaches are high school students who demonstrate responsibility and positive motivation throughout the school year. The program goal for the coaches and mentors is to develop work experience through Team Read that will benefit them as they pursue careers and continue to serve their communities. Margo Jones, the independent evaluator, chose three research questions that relate directly to the findings and results of her evaluation. The first question asked whether the reading skills of the student readers improved significantly during their participation in the Team Read program. This question, which she answered through a complicated statistical analysis, does not address the fact that the comparison groups Margo chose were not labeled as deficient in the classroom or on standardized tests, as the Team Read readers were when they were selected for the program. The results detailed in Margo’s evaluation therefore lack credibility, because the comparison sample was not genuinely similar to the Team Read participants.
The participants often took what is described as a similar, but not identical, test, which can cause validity problems across the board for the independent evaluator. The second research question the independent evaluator asked was how the program affects the reading coaches. This question is extremely general and could be answered in a number of different ways; Margo chose a questionnaire, answered by Team Read coaches, to determine whether they were impacted by the program.
The results were generally positive, and Trish McKay viewed them in the same favorable light. The questionnaire was administered in the late stages of the school year, as students were winding down their programs and heading into summer. Responses to certain questions differed from one site to the next, which gives the Team Read program data that can be interpreted to change policy or prompt action at the sites that showed a deficiency or a less positive outcome.
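As a sketch of how such site-level differences might be surfaced, the snippet below aggregates questionnaire scores by site and flags low-scoring sites. The 1–5 scores, the 3.5 flagging threshold, and the third site name are my own illustrative assumptions, not figures from the evaluation; only Leschi and Van Asselt are named in it.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical 1-5 Likert responses by site; "Site C" is a placeholder
# and every score here is invented for illustration.
responses = [
    ("Leschi", 3), ("Leschi", 2), ("Leschi", 3),
    ("Van Asselt", 2), ("Van Asselt", 3), ("Van Asselt", 2),
    ("Site C", 4), ("Site C", 5), ("Site C", 4),
]

by_site = defaultdict(list)
for site, score in responses:
    by_site[site].append(score)

# Flag any site whose mean response falls below an assumed 3.5 threshold
# as a candidate for a site-specific follow-up questionnaire.
for site, scores in by_site.items():
    avg = mean(scores)
    flag = "  <- follow up" if avg < 3.5 else ""
    print(f"{site}: {avg:.2f}{flag}")
```

With these invented numbers, Leschi and Van Asselt would be flagged for follow-up while Site C would not, which is the kind of breakdown a site-specific questionnaire could act on.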
Perhaps a site-specific follow-up questionnaire would help Trish McKay find ways to better serve the participants at the Leschi and Van Asselt sites. The third, and perhaps most important, question Margo Jones proposed was aimed at determining whether Team Read is working well and what can be improved. Margo reported a list of aspects of Team Read that have been working well according to her evaluation and past evaluations of the program.
Margo also listed things that need improvement based on her results and on previous evaluations. Although the information presented is useful to Trish McKay, it leaves some key questions unanswered. It is vital for Team Read to determine with great certainty whether the program is increasing the reading levels of the students who participate. Margo Jones attempted to answer this question through statistical analysis based on comparison groups and data comparison. I will seek to determine whether these statistics tell the whole tale of Team Read.
The methodologies used in this evaluation range from a pretest/posttest method of collecting data to a quantitative/qualitative questionnaire measuring the success of the coaches and mentors. As used in Margo Jones’s evaluation, these methodologies presented many challenges and threats to validity. The pretest/posttest methodology was not consistently reliable, as the conclusions drawn from the statistical data were skewed by comparison groups that were not truly comparable.
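To illustrate the comparability problem in a pretest/posttest design, the sketch below computes mean score gains for a hypothetical Team Read group and a hypothetical comparison group. All scores are invented; the point is only that two groups starting from very different pretest baselines cannot be compared gain-for-gain.

```python
from statistics import mean

# Hypothetical (pretest, posttest) reading scores; none of these numbers
# come from the actual evaluation.
team_read = [(31, 42), (28, 39), (35, 41), (30, 44), (26, 37)]
comparison = [(48, 52), (45, 50), (50, 53), (47, 51), (44, 49)]

def mean_gain(pairs):
    """Average posttest-minus-pretest gain for one group."""
    return mean(post - pre for pre, post in pairs)

tr_gain = mean_gain(team_read)
cmp_gain = mean_gain(comparison)
print(f"Team Read mean gain:  {tr_gain:.1f}")
print(f"Comparison mean gain: {cmp_gain:.1f}")

# The validity problem the critique raises: the comparison group starts
# from much higher pretest scores, so comparing raw gains says little
# about whether Team Read itself caused the readers' improvement.
print(f"Pretest means: {mean(p for p, _ in team_read):.1f} vs "
      f"{mean(p for p, _ in comparison):.1f}")
```

In this invented example the Team Read group gains more on average, but because its pretest mean is far lower, regression toward the mean alone could account for part of that gap, which is exactly why a matched comparison sample matters.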
The questionnaire used to measure the progress and success of the Team Read coaches and mentors lacked participant observation, a sample design, and a comparison group, and question two of the second goal threatened internal validity through personal bias. Discussing the positive and negative aspects of the evaluation took me through two very different approaches Margo Jones used. These approaches shed light on what needs to be improved in this evaluation and on what Trish McKay can take from it as a whole.
The negative aspect of this evaluation concerns what Trish McKay needed most: clean statistics that tell a tale of improvement in the Team Read participants’ reading skills compared to a comparable sample of students. My critique leads me to believe that such evidence was scarce, since the statistical analysis did not show improvement at every reading level and failed to present a credible comparison sample. I distinctly noticed the difficulty Margo Jones ran into when she used the district as a whole in the pretest/posttest portion of her evaluation.
The second goal and objective Margo Jones aimed to evaluate produced positive feedback from individuals. I found it concerning that question two of her quantitative/qualitative instrument asked each individual whether he or she was successful at coaching or mentoring the students. This type of self-assessment leaves the door open to individual bias, since each coach and mentor naturally believes they were successful in their objective.
The positive result of this evaluation is that the coaches and mentors are generally seen to be working toward the goal of increasing the Team Read participants’ reading skills while gaining a sense of accomplishment and responsibility as coaches and mentors. Trish McKay can take some satisfaction in the fact that each assessment has deemed the program a qualified success, but the evaluations have yet to provide the indicators necessary to determine that the program fully achieves its core function for the readers.
Trish McKay should feel confident that Team Read is benefiting both the mentors and the readers, as some level of success has been shown in both evaluations. For future analyses, Trish should understand that to improve on the statistical analysis she received from Margo Jones, she will need replicable, comparable data in statistical form to prove to Craig and Susan McCaw how successful the program actually is, based on confident comparisons and improved reading levels.
If the data does not show improvement in the reading skills of Team Read participants, it is vital that the mentors and coaches receive more training, as Margo Jones recommended, to find ways to improve the reading skills of each participating grade level. To claim a measurable level of success for the program as a whole, 60% of all students involved in Team Read must show statistical improvement from pretest to posttest when compared to a comparable sample.
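The proposed 60% criterion could be checked mechanically. The function below is a minimal sketch with invented scores; `comparison_gain` stands in for the comparison sample’s average gain, and both names and numbers are my own assumptions.

```python
def meets_success_criterion(pairs, comparison_gain, threshold=0.60):
    """True if at least `threshold` of students improved by more than the
    comparison group's average gain (the proposed 60% criterion)."""
    improved = sum(1 for pre, post in pairs
                   if (post - pre) > comparison_gain)
    return improved / len(pairs) >= threshold

# Hypothetical (pretest, posttest) scores for five Team Read readers.
readers = [(30, 42), (28, 34), (35, 41), (30, 33), (26, 37)]

# Four of the five gains (12, 6, 6, 3, 11) exceed a comparison-group
# gain of 5, so 80% clear the bar and the criterion is met.
print(meets_success_criterion(readers, comparison_gain=5))  # True
```

Framing the criterion this way also makes the critique’s point concrete: the result is only meaningful if `comparison_gain` comes from a sample selected under the same deficiency criteria as the Team Read readers.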
This comparison sample must be drawn from students with the same skill level as the students in Team Read. The criteria used to select the sample must be the same criteria used to select Team Read participants in every future evaluation, as this was a main weakness of past evaluations. In conclusion, the interests of the coaches and mentors must not interfere with the readers’ goal of improving their reading skills.
Future evaluations must focus on measuring the program’s impact on the readers as closely as its impact on the coaches and mentors, since the latter was already well documented in the last evaluation. Reliable indicators that tie improvement directly to credible data collection and comparison must be present in each future analysis, as this will only improve the utilization of the data as it relates to gains in reading skills.