The parties of assessment are Students v Academics (a direct relationship) and Students v Education (a sustained relationship) – and I’m talking about Education as a bureaucracy here.
I’m studying this social phenomenon through the lens of Cyberculture (Manovich) – or, more precisely, what happens to the social phenomenon when Students have agency and EduTech assists in “trust”. Trust, for me, is accountability: that students won’t be let down by what they’re offered (outcomes) and how it’s delivered. This can be established by applying a reputation system built from rating markers such as lists and scores (Hearn 2010).
My proposal is a rating platform that gives agency to students in a shared online environment, in the hope of supporting an ideological shift within the Students v Education relationship. In much the same way that the cyberpunk genre aims to challenge authority, a rating product disestablishes the status quo set at University. The assessment of subjects establishes that they can and should be improved upon, and that subjects are resources with assigned reputations. So I’m proposing that Students v Education is missing a feedback loop – a central component of Cyber Cultures, and central to the development of a valuable relationship.
Students aren’t interested in giving feedback because Student Evaluations are not an effective feedback loop. Student Evaluations lack clear outcomes for students, so students don’t believe their thoughts will be considered by staff (Bassett et al. 2015). What horrifies me is when reviews of the effectiveness of subject evaluations (Stark & Freishtat 2014) conclude that, to improve teaching, only teachers should assess teachers – because students are only good at assessing themselves. These conclusions are inflated when examined in relation to online culture: online public ratings remove credibility, provide anonymity, and encourage negativity (Pfeiffer 2006).
This narrative reads like a Cyber Culture text, especially when the disruption of the social phenomenon of Students v Education is framed as a moral panic. The biggest concerns become the risk of slander and the loss of employment for academics (both very real considerations), because of the contextual variables, possible biases, and questions of validity that arise when students assess a teacher or subject (Marsh & Roche 1997). But the evidence doesn’t bear this out: there’s no correlation between response rates and course outcomes, nor demonstrated effects of racial or gender bias (Benton 2014). My biggest concern is inciting an “us” vs “them” dichotomy instead of a relationship.
“In my 25 years of teaching college students the ones most likely to respond to my requests for anything were the responsible, high-achieving students” (Benton 2014).
Students need a platform that they believe is for students – a compensation for their disempowerment. But the big issue remains: is a rating effective? Five-star systems don’t work because a user has to evaluate a complex response to content on a single scale (YouTube 2009). One solution is to evaluate our positive and negative feelings separately, similar to a ‘diamond of opposites’ model (Goodfil.ms 2011). Another is a paired review model, but it’s harder to develop and doesn’t always apply (Taylor 2011). I’m also researching narrative feedback practices as an alternative. I’m still entangled in this concept of trust and ratings, and I need to unpack what trust is for each party, and whether it should be quantified. I’m aware that my bias for ratings could just be my generation’s “stubborn personalisation of subjugation” (Hearn 2010). If that’s the case then there’s a lot more to explore regarding the value of students and just how subjugated they are compared to past generations.
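To make the ‘diamond of opposites’ idea concrete, here is a minimal sketch (in Python, with hypothetical names – this is my own illustration, not how Goodfil.ms actually implemented it) of a rating recorded on two independent axes, positive and negative, instead of one five-star scale:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    """One student's response: positive and negative reactions are
    recorded on separate 0.0-1.0 axes rather than a single scale."""
    positive: float
    negative: float

def aggregate(ratings):
    """Summarise a set of two-axis ratings.

    Returns the mean of each axis plus two derived measures:
    - net: overall leaning (mean positive minus mean negative)
    - ambivalence: how strongly both axes fire at once
    """
    pos = mean(r.positive for r in ratings)
    neg = mean(r.negative for r in ratings)
    return {
        "positive": pos,
        "negative": neg,
        "net": pos - neg,
        "ambivalence": min(pos, neg),
    }

# Three hypothetical responses to one subject: enthusiastic,
# conflicted, and mildly negative.
ratings = [Rating(0.9, 0.1), Rating(0.8, 0.7), Rating(0.2, 0.6)]
summary = aggregate(ratings)
```

Keeping the axes separate preserves ambivalence: a subject students found both valuable and frustrating scores high on both axes – information a single scale would flatten into a meaningless middle value.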
Bassett, J; Cleveland, A; Acorn, D; Nix, M & Snyder, T 2015, ‘Are they paying attention? Students’ lack of motivation and attention potentially threaten the utility of course evaluations’, Assessment & Evaluation in Higher Education.
Benton, S 2014, ‘An Evaluation of “An Evaluation of Course Evaluations” Part I’, IDEA, <http://ideaedu.org/evaluation-evaluation-course-evaluations-part-i/>.
Carrell, S & West, J 2010, ‘Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors’, Journal of Political Economy, 118 (3).
Hearn, A 2010, ‘Structuring feeling: Web 2.0, online ranking and rating, and the digital ‘reputation’ economy’, Ephemera, 10 (3/4), 421–438.
Marsh, H & Roche, L 1997, ‘Making Students’ Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias, and Utility’, American Psychologist, 52 (11), 1187–1197.
Pfeiffer, S 2006, ‘Ratings sites flourish behind a veil of anonymity’, Boston Globe (MA), Newspaper Source Plus.
Stark, P & Freishtat, R 2014, ‘An Evaluation of Course Evaluations’, ScienceOpen Research.