Last August we submitted the following grant application to @IESResearch to address critical literacy online... We were not funded.
I attach the reviewer comments to help others who live and die by the same rat race that is federal research.
One argument made in the application is that “for effective writing instruction to occur—reading instruction cannot be separate.” However, it was not clear how reading instruction was a part of this project. It is not entirely clear if this intervention involves providing students with the platform and simply letting them use it or if it involves some form of instruction other than chatbots providing suggestions.
Either the reviewer quit reading too soon or we did not make the second phase of the intervention clear, where participants encounter BIASED READ ALOUDS performed by avatars... literally, animated and annotated videos discussing text structure and deliberately modeling confirmation bias...
But we will have to make the connection clearer that when students curate resources they also have to READ. The web is a read/write environment. IES has never understood this.
It was not entirely clear why the project focuses on seventh grade. It would be helpful to provide a more detailed description of the instruction that will accompany the online platform that students will use. It was challenging to identify what students would be taught and how they would be taught, other than getting advice from chatbots. It was clear that the applicants want students to assume control over the process. It is important to explain how this approach aligns with what is currently known about effective writing instruction. It appears that the avatars will read aloud to students, but it was not clear what the rationale was for having students listen to the information rather than read it for themselves.
We literally cited the CCSS standards for 7th grade, plus COPPA... its parental-consent requirements mean you can't do tech studies with kids under 13 without way too much work.
The reviewer equated the biased read alouds with "being read to"; they missed the entire point of the study. We do not recognize bias because we do not consider how perspective shapes truth... Students were getting BIASED read alouds so they would recognize how credibility markers are reinforced or ignored depending on perspective. We will need to make this clearer.
Psychometric information for the planned assessments needs to be provided (AWC-SBA, surveys). In the description of the writing assessment, the applicants plan to wait until Year 4 to score all the students’ writing. This seems like a missed opportunity to provide feedback during the design studies in the first years of the project.
I think the reviewer skipped this entire paragraph (taken from our proposal):
In 2013, NWP developed the AWC for Source-Based Argument (AWC-SBA) to focus on specific features of source-based argument writing. In adapting the AWC, NWP reviewed extant argument writing rubrics (e.g., the Smarter Balanced and PARCC rubrics). The AWC-SBA retains the AWC's basic structure rooted in the "six traits" of writing, but each attribute is focused on features of source-based argument writing. The AWC-SBA measures four attributes: content (e.g., quality of reasoning and strength of evidence); structure (e.g., organization to enhance the argument); stance (e.g., tone, establishment of credibility); and conventions (e.g., control of usage, punctuation, spelling, capitalization, and paragraphing). The AWC-SBA has been used in three large-scale scorings (n > 5,000) and performed similarly to the original AWC. For example, reliability estimates for the AWC-SBA ranged from 89% to 92% on each attribute (Gallagher, Arshan, & Woodworth, 2016).
We reported these measures, and we also explained why waiting until Year 4 to score the writing actually reduces measurement bias. This is a major assessment provided by the National Writing Project... We did not pick some unheard-of measure, nor did we develop our own measure to prove our own intervention works...
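For anyone wondering what those reliability figures mean in practice, here is a minimal sketch of per-attribute inter-rater agreement for an AWC-SBA-style rubric. The file and column names are hypothetical, and I am assuming "reliability" here means exact score agreement between two raters, which may differ from NWP's actual scoring procedure.

```python
# Minimal sketch: per-attribute inter-rater agreement for a rubric
# like the AWC-SBA. Assumes a hypothetical CSV where each row is one
# essay scored independently by two raters on the four attributes.
import pandas as pd

ATTRIBUTES = ["content", "structure", "stance", "conventions"]

df = pd.read_csv("double_scored_essays.csv")  # hypothetical file
for attr in ATTRIBUTES:
    # "Agreement" here = exact score match between the two raters.
    agree = (df[f"rater1_{attr}"] == df[f"rater2_{attr}"]).mean()
    print(f"{attr}: {agree:.0%} exact agreement")
```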
At the very least, ICCs should be estimated, and the project should be prepared to run multilevel models if there is sufficient classroom-level variability.
On a $1.4 million budget? You crazy???? They want sample sizes with enough power for multilevel models... for a development grant... a Goal 4 grant, maybe, but not a Goal 2 development grant.
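For readers outside the methods weeds, here is roughly what the reviewer is asking for: estimate the intraclass correlation (ICC) from an intercept-only multilevel model of students nested in classrooms. This is a generic sketch, not our analysis plan; the file and column names are made up.

```python
# Sketch: estimate the ICC the reviewer asked for, using an
# unconditional (intercept-only) multilevel model of students
# nested in classrooms. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_scores.csv")  # columns: classroom_id, awc_sba_total

model = smf.mixedlm("awc_sba_total ~ 1", df, groups=df["classroom_id"])
result = model.fit()

between = result.cov_re.iloc[0, 0]  # classroom-level variance
within = result.scale               # residual (student-level) variance
icc = between / (between + within)
print(f"ICC = {icc:.3f}")
# A non-trivial ICC (often > ~0.05) signals classroom-level clustering
# that a multilevel model should account for.
```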
It was not entirely clear if the institutions supporting the research team offered the fairly sophisticated resources needed to mount and manage the online platform that is central to this project. Also, two of the institutions are not research-intensive institutions, so it would be helpful to have more details about their ability to manage both the communication challenges and data analysis needs for this project.
IES has such a bias against schools of access. The system is almost explicitly designed to ensure the rich stay rich and federal dollars do not trickle down to the schools that actually serve communities of color. Just because I don't work at some fancy-pants R1 school with the same white kids from the same white-picket-fence lives does not mean I can't do data analysis.
Instead of ensuring federal research dollars reach all institutions, we have created a silo in which reviewers keep the dollars at the R1 level, protecting their own bottom lines.
We need to break the inequity machine that this bias reinforces.
The richness of the planned intervention components seemed somewhat disconnected from a strong, systematic theory of change. Vague references were made to concepts of discourse, community, and identity. However, ideally, each of the elements of the intervention would have been systematically justified within the context of the theory of change. As one example, how exactly might students benefit from curating and vetting sources for each other?
Reviewer B did not like our theory of change. I thought it was quite good. They wanted us to fold all of the reading and writing research into the theory, going as far as justifying why having kids work together is a good idea... We will revise it. Then next year's reviewers will suggest we need something more like what we just proposed.
Reviewer A didn't like our quantitative measures and looks down on faculty who do not serve in the ivory halls of R1 schools. Reviewer B did not like our qualitative methods or our theory of change. Reviewer B may have some valid points, but neither reviewer understood how we would parse and scrape websites for metadata to surface new evidence of knowledge growth.
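To make that last point concrete, here is a minimal sketch of the kind of metadata scraping I mean, using requests and BeautifulSoup. The URL and the choice of fields are illustrative assumptions, not the instrumentation from the proposal.

```python
# Sketch: pull credibility-relevant metadata from a page a student cites.
# Logging fields like these over time is one way to surface evidence of
# growth in students' sourcing practices. URL is hypothetical.
import requests
from bs4 import BeautifulSoup

def scrape_metadata(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    meta = {"url": url, "title": soup.title.string if soup.title else None}
    for tag in soup.find_all("meta"):
        name = tag.get("name") or tag.get("property")
        if name in ("author", "description", "article:published_time", "og:site_name"):
            meta[name] = tag.get("content")
    return meta

print(scrape_metadata("https://example.com/article"))
```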
We will submit again. I hope the bias IES reviewers have against universities that serve under-represented peoples does not doom our efforts forever.