We get data and teacher training so wrong in Connecticut.
Set aside the fact that our teacher training regulations (circa 1996) are older than every single new teacher; we also break almost every rule of even passable sniff-test measurement.
Let me give you the SCSU perspective:
- Fieldwork uses a rubric in CPAST (a "growth model" of different rubrics for different stages... awesome for longitudinal work with ordinal data... not).
- Student teaching uses the last CPAST rubric (three in total).
(Stop calling it a growth model, FYI. Growth models require the same measure given multiple times; the test developers claim different stages require different measures, which is by definition a maturation model.)
We then enter all the data for CAEP, which demands a specific kind of scale.
Student teachers then take, and pay for, the Pearson EdTPA student teaching test. Again, a different rubric and a different scale.
Get a job! Wow, you landed a job, congrats!! Now do the TEAMS induction model... a different rubric again.
(Yes, a rubric is effectively a different measure under different raters and administrations, but let's pretend we can ignore this.)
Now get evaluated on the job (don't worry, it has no real teeth) using the CT SEED rubrics, which are based on the Danielson/MET work (as is EdTPA).
But wait, all our teachers work in New Haven... oh, a different rubric... the TEVAL rubric, a 5-point rubric with three scale descriptors... huh?? Oh, and by the way, nobody will get a five because it means extra work for me and you.
STOP THE MADNESS
Let's use one framework, the CT SEED model, from training through induction, mentoring, and evaluation. One set of objectives, one rubric.
- Almost-longitudinal data (it is still ordinal, people).
- A shared language across the state, from pre-service to evaluation.
- The ability to train school administrators on coaching in the SEED model while they observe student teachers.
- Connecticut doing something logical, and for learners rather than external stakeholders.