
Greg McVerry

Update on the Technology Fluency Group @scsu


The Southern Connecticut State University Technology Fluency Affinity Group met today. We are a 100% volunteer, ad-hoc group of faculty who care about the direction of technology education in the state.

The technology fluency classes are in our general ed program, the Liberal Education Program. Tier one classes are competency-based classes. So in technology you have to do spreadsheets and presentations. You can find all the competencies here. My class, for example, is here: http://edu106class.networkedlearningcollaborative.com/feeds.html

Today's topic was assessment. It is contentious. Assessment of technology skills is never easy. We settled on three main pathways.

Common Measure

The physics department uses a task where students have to recreate a document using spreadsheets and the internet. I have seen it, and it is a decent task that gets at the ways scientists now read and write.

These assessments are easier to score and provide greater reliability than the current rubrics, which nobody uses anyway.

Participatory Assessment

Others advocated for assessment models, such as participatory assessment, that better align with our school's mission of social justice and situate knowledge in cultural contexts rather than in individual learners.

Faculty Choice

Given the academic freedom of faculty, any TF instructor can assess the required competencies in any manner of their choosing. They just need to provide their data to the LEP Director.

Concerns

Each approach has merits and pitfalls. I think we also should begin with the questions: "Why are we assessing? Who are we serving?" In higher education the answer today is too often external forces far removed from actual students.

We also need to ensure we meet the theoretical assumptions of each assessment approach regardless of the path chosen.

Common Measure Concerns

If we want to use pre- and post-measures of learning, this means meeting basic rules for item difficulty, score distributions, and ceiling and floor effects if these scores are to be consequential.
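To make that concrete, here is a minimal sketch, with made-up student data and item names, of what checking item difficulty and score distributions could look like for a dichotomously scored instrument:

```python
# Minimal sketch: basic item checks for a hypothetical pre/post instrument.
# Assumes dichotomous (0/1) item scores; data and item names are made up.
import statistics

# Each row is one student's item scores on the pretest (hypothetical data).
pretest = [
    {"spreadsheet_formula": 1, "chart_labels": 0, "cite_source": 1},
    {"spreadsheet_formula": 0, "chart_labels": 0, "cite_source": 1},
    {"spreadsheet_formula": 1, "chart_labels": 1, "cite_source": 1},
]

items = pretest[0].keys()

# Item difficulty = proportion of students who got the item right.
for item in items:
    difficulty = sum(row[item] for row in pretest) / len(pretest)
    print(f"{item}: difficulty = {difficulty:.2f}")

# Total-score distribution: a pile-up at the maximum suggests a ceiling effect.
totals = [sum(row.values()) for row in pretest]
print("mean =", statistics.mean(totals), "stdev =", statistics.pstdev(totals))
```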

Even at the programmatic level, any growth score measure should meet basic reliability estimates. Yet the LEP director stated the bar we are going for is "face validity," which is the "appearance" of validity rather than a measure of it...

We were told not to worry about reliability, so we will never know if the measures are truly valid.

We were also told there will be no money to purchase instruments that have established reliability estimates. These must be instructor created, which, as any reading comprehension researcher will tell you, introduces whole new levels of noise.
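For an instructor-created, dichotomously scored instrument, one standard internal consistency estimate is KR-20. A minimal sketch with hypothetical scores, just to show what "basic reliability estimates" involves:

```python
# Minimal sketch: a KR-20 internal consistency estimate for a hypothetical
# instructor-created instrument with dichotomous (0/1) items.
import statistics

# Rows = students, columns = items (1 = correct, 0 = incorrect). Made-up data.
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]

k = len(scores[0])                        # number of items
totals = [sum(row) for row in scores]     # each student's total score

# Sum of p*q across items, where p = proportion correct on the item.
pq_sum = 0.0
for i in range(k):
    p = sum(row[i] for row in scores) / len(scores)
    pq_sum += p * (1 - p)

total_variance = statistics.pvariance(totals)   # variance of total scores
kr20 = (k / (k - 1)) * (1 - pq_sum / total_variance)
print(f"KR-20 = {kr20:.2f}")
```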

If we are only going for "face validity," then giving the assessment at only one time point would make the most sense, as any conclusion beyond presenting descriptive statistics would be impossible with a pre- and post-test anyway.

Credentialing Concerns

A credentialing program, especially one built on participatory assessment paradigms, is also fraught with difficulty, especially if it is rubric driven. Reliability can, and usually will, be even worse.

Rubrics are never reliable on their own; thinking otherwise is a misconception. You need reliability ratings and norming activities at each rating session. We try to reduce this problem in competency assessment with binary rubrics, where a state is either present or not ("can download a picture"). As soon as quality scales get introduced, rubric assessment gets messy.
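For those binary rubrics, here is a rough sketch of what checking agreement between two raters could look like; the ratings below are invented for illustration:

```python
# Minimal sketch: inter-rater agreement for one dichotomous rubric criterion.
# Two hypothetical raters score the same students 1 (present) or 0 (absent).
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1]

n = len(rater_a)

# Observed agreement: proportion of students both raters scored the same.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal proportions.
p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
p_expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```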

There is a lack of consuming cases for badges as well. What does a badge mean? Any effort to badge competency must in some way structure the data so that analysis is easier for both human and machine readers. What can you do with the data?
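As one illustration of what "machine readable" could mean here, a badge record might be serialized in a form both humans and machines can parse. This sketch is loosely modeled on Open Badges-style assertions; every field name, value, and URL below is a placeholder, not our platform's actual schema:

```python
# Minimal sketch of machine-readable badge data, loosely modeled on
# Open Badges-style assertions. All names, URLs, and fields are
# illustrative placeholders.
import json

assertion = {
    "recipient": "student@example.edu",           # hypothetical learner id
    "badge": "Spreadsheet Fluency",               # competency being credentialed
    "criteria": {                                 # dichotomous criteria and results
        "builds_formula": True,
        "labels_chart": True,
        "cites_data_source": False,
    },
    "evidence": "https://example.edu/portfolio/post-123",  # placeholder link
    "issuedOn": "2019-05-01",
    "issuer": "EDU 106, Southern Connecticut State University",
}

# Serializing as JSON keeps the record readable by humans and machines alike.
print(json.dumps(assertion, indent=2))
```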

There is also a credibility issue with badges. We have been building and studying badges since the project began with Mozilla in 2011. The credibility question, the "so what," has never been answered, but this is where we as faculty build our credibility. If I say a student earned X, I am putting my stamp of approval on it, and there is an evidence trail to prove it.

Basic assumptions about classroom interactions between the instructor and student must be met regarding the role that reflecting on learning plays in the classroom. Instructors often skip these steps due to time commitments, and this can lead to threats to validity.

Next Steps

Members of the affinity group will choose their particular pathway and begin to develop the assessment and data collection systems.

I will be focusing more on participatory assessment strategies that I feel better align with our school's social justice mission. We need to situate knowing in social and cultural contexts. A reflective portfolio, where students document their own knowledge growth and instructors check whether submitted artifacts meet specific competencies, would be a nice fit.

The faculty interested in this approach will have some work to do before the next semester.

Up first, we need to take stock of what we teach. I know I don't, and won't, get to databases. I need to find more time for spreadsheets. I need to incorporate a survey builder into my "make the world better" project, but it is just such a packed curriculum. Plus the students want to spend way more time on video editing and photo editing. Cross-platform video editing is hard to teach.

Then we will be making the dichotomous measures, which may contain multiple criteria for each badge. I have all of these done; I just need to turn them into spreadsheets to share.
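As a rough sketch of the spreadsheet layout I have in mind, where each badge lists its criteria as separate rows (the badge names and criteria below are placeholders, not the finalized measures):

```python
# Minimal sketch: sharing dichotomous badge criteria as a spreadsheet (CSV).
# Badge names and criteria are placeholders, not the finalized measures.
import csv

criteria = [
    # badge, criterion, met (1 = present, 0 = absent, blank until scored)
    ("Spreadsheet Fluency", "builds a formula with cell references", ""),
    ("Spreadsheet Fluency", "creates a labeled chart", ""),
    ("Presentation Fluency", "uses consistent slide design", ""),
    ("Presentation Fluency", "credits image sources", ""),
]

with open("badge_criteria.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["badge", "criterion", "met"])
    writer.writerows(criteria)
```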

The badging platform is built, but with the interest the computer science department is showing in blockchain technology, the students may want to throw their hat in the ring and bake up an in-house solution with a ledger that creates learning logs. Make a participatory assessment about making assessments to assess a program. Get real meta.
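As a back-of-the-napkin illustration of the ledger idea, and definitely not the computer science department's design or any existing platform, a learning log could chain each entry to the hash of the previous one so tampering is detectable:

```python
# Minimal sketch of a tamper-evident learning log: each entry stores the hash
# of the previous entry, ledger-style. Purely illustrative.
import hashlib
import json
import time

def add_entry(log, student, event):
    """Append a learning-log entry chained to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "student": student,
        "event": event,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return log

log = []
add_entry(log, "student@example.edu", "earned Spreadsheet Fluency badge")
add_entry(log, "student@example.edu", "reflected on chart-labeling criterion")
print(json.dumps(log, indent=2))
```

Changing any earlier entry changes its hash, which breaks the chain for everything after it; that is the whole trick a ledger-style learning log would rely on.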