
Students Have More To Say

The parties of assessment are Students v Academics (a direct relationship) and Students v Education (a sustained relationship) – and I’m talking about Education as a bureaucracy here.

I’m studying this social phenomenon through the lens of Cyberculture (Manovich), or better yet, what happens to it when Students have agency and EduTech assists in “trust”. Trust, for me, is accountability – that students won’t be let down by what they’re offered (outcomes) and how it’s delivered. This can be established with a reputation system built from rating markers (lists, scores) that, firstly, compensates for a student’s disempowerment and, secondly, creates an environment for authentic expression (Hearn 2010).
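To make the idea of rating markers feeding a reputation a little more concrete, here’s a minimal sketch in Python. The names (SubjectReputation, add_rating, the “EDU101” code) are placeholders of my own, not a real implementation – just an illustration of score and list markers rolling up into a reputation.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SubjectReputation:
    """Hypothetical record of a subject's reputation, built from rating markers."""
    subject_code: str
    scores: list[int] = field(default_factory=list)   # score markers, e.g. 1-5
    tags: list[str] = field(default_factory=list)     # list markers, e.g. "clear outcomes"

    def add_rating(self, score: int, tag: str = "") -> None:
        """Record one student's rating marker(s)."""
        self.scores.append(score)
        if tag:
            self.tags.append(tag)

    @property
    def reputation(self) -> float:
        """A plain average, for illustration only; a real system would have to
        weight, verify, and contextualise these markers before calling the
        result 'trust'."""
        return mean(self.scores) if self.scores else 0.0

# Usage: one subject accumulating markers from students
subj = SubjectReputation("EDU101")
subj.add_rating(4, "clear outcomes")
subj.add_rating(2, "feedback never actioned")
print(subj.reputation, subj.tags)
```

Even in this toy form, the hard questions – who can add a marker, how markers are weighted, how they’re surfaced back to Education – are where the trust actually lives.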

My proposal is a rating platform that gives agency to students in a shared online environment, in the hope of supporting an ideological shift within the Students v Education relationship. In much the same way that the cyberpunk genre aims to challenge authority, a rating product unsettles the status quo set at University: A) students are not privileged to be where they are, because B) University is a product that we pay for. The assessment of subjects establishes that they can and should be improved upon, and that subjects are resources with assigned reputations. So I’m proposing that Students v Education is missing a feedback loop – a central component of Cyber Cultures, and central to the development of a valuable relationship.

Students aren’t interested in giving feedback because Student Evaluations are not an effective feedback loop. Student Evaluations lack clear outcomes for students, so students don’t believe their thoughts will be considered by staff (Bassett et al. 2015). What horrifies me is when reviews of the effectiveness of subject evaluations (Stark & Freishtat 2014) conclude that, to improve teaching, only teachers should assess teachers – because students are only good at assessing themselves. These conclusions are inflated when examined in relation to online culture: online public ratings remove credibility, provide anonymity, and encourage negativity (Pfeiffer 2006).

This narrative reads like a Cyber Culture text, especially when the disruption of the social phenomenon of Students v Education is framed as a moral panic. The biggest concerns become the risk of slander and loss of employment for academics (both very real considerations), because we have to consider the contextual variables, possible biases, and questions of validity that arise when students assess a teacher or subject (Marsh & Roche 1997). But the panic isn’t borne out: there’s no correlation between response rates and course outcomes, nor demonstrated effects of racial or gender bias (Benton 2014). My biggest concern is inciting an “us” vs “them” dichotomy instead of a relationship.

“In my 25 years of teaching college students the ones most likely to respond to my requests for anything were the responsible, high-achieving students” (Benton 2014).

Students need a platform that they believe is for students – that it compensates for their disempowerment. But the big issue remains: is a rating effective? 5-star systems don’t work because a user has to evaluate a complex response to content on a single scale (YouTube 2009). One solution is to evaluate our positive and negative feelings separately, similar to a ‘diamond of opposites’ model (Goodfil.ms 2011). Another is a paired review model, but it’s harder to develop and doesn’t always apply (Taylor 2011). I’m also researching narrative feedback practices as an alternative. So is a rating the most effective choice to produce trust and reputation? I’m not sure. I’m still entangled in this concept of trust and ratings, and I need to unpack what trust is for each party and whether it should be quantified. I’m aware that my bias for ratings could just be my generation’s “stubborn personalisation of subjugation” (Hearn 2010). If that’s the case, then there’s a lot more to explore regarding the value of students and just how subjugated they are compared to past generations.
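To show the difference between a single 5-star scale and the ‘diamond of opposites’ idea, here’s a rough sketch of capturing positive and negative reactions on separate axes. The class and field names are my own, not Goodfil.ms’ – a thinking aid, not a definitive design.

```python
from dataclasses import dataclass

@dataclass
class DiamondRating:
    """Sketch of a 'diamond of opposites' style rating: positive and negative
    feelings are recorded separately rather than forced onto one scale."""
    positive: float  # 0.0-1.0: how strongly the subject worked for the student
    negative: float  # 0.0-1.0: how strongly it let the student down

    def as_single_score(self) -> float:
        """Collapse back to one number (0.0-1.0) -- the information lost here
        is exactly the problem with a single 5-star scale."""
        return (self.positive - self.negative + 1) / 2

# A student can find a subject both engaging and badly assessed:
mixed = DiamondRating(positive=0.8, negative=0.6)
print(mixed.as_single_score())  # ~0.6, which hides the mixed response
```

The point isn’t the arithmetic; it’s that keeping the two axes separate preserves a mixed response that a single score would flatten.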

 

 

REFERENCES


 

Bassett, J, Cleveland, A, Acorn, D, Nix, M & Snyder, T 2015, ‘Are they paying attention? Students’ lack of motivation and attention potentially threaten the utility of course evaluations’, Assessment & Evaluation in Higher Education.

Benton, S 2014, ‘An Evaluation of “An Evaluation of Course Evaluations” Part I’, IDEA, <http://ideaedu.org/evaluation-evaluation-course-evaluations-part-i/>.

Carrell, S & West, J 2010, ‘Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors’, Journal of Political Economy, 118 (3).

Hearn, A 2010, ‘Structuring feeling: Web 2.0, online ranking and rating, and the digital ‘reputation’ economy’, Ephemera, 10 (3/4), 421–438.

Marsh, H & Roche, L 1997, ‘Making Students’ Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias, and Utility’, American Psychologist, 52 (11), 1187–1197.

Pfeiffer, S 2006, ‘Ratings sites flourish behind a veil of anonymity’, Boston Globe (MA), Newspaper Source Plus.

Stark, P & Freishtat, R 2014, ‘An Evaluation of Course Evaluations’, ScienceOpen Research.

 


6 Comments

  1. Pingback: APIs for Students | C O D I E N

  2. I’m really interested in and inspired by your passion for student engagement in the design of the tertiary education system. While some ranking systems already exist, like Rate My Teacher, these are aimed at ‘helping students make informed decisions’. Is this the goal of your project? Or do you envision a system where universities might consider implementing some of the feedback provided?
    If so, I’ll be very interested to see what mechanisms you propose to further validate the feedback system, and give agency back to students.

    • Thanks for the interest! It’s a part of my aim to assist students in making informed decisions. Rate My Teacher is an interesting case study due to the perception of its success/failure. But unlike RMT, I want to develop a dialogue between these parties to provoke change. The mechanisms to cause this are hazy still, but testing is needed.

  3. I think you’re definitely onto something. I think to get the result you are after it would have to be completely monitored by students, data collected and distributed by students and easily accessible by the students. The only flaw I can see in this concept is the potential misuse of such a platform. While this would be a fantastic method of knowing what to expect for a subject’s teacher or making necessary changes in teaching systems and practices, I could easily see students who feel they have been hard done by using this newfound empowerment to tarnish reputations of their academic supervisors.
    I think it would be worth looking into methods of controlling quantitative data as a means of collecting valid feedback (such as this article http://nsuworks.nova.edu/tqr/vol8/iss4/6/).
    Look forward to your coming research!

    • Thanks Amy. There are great concerns over the exploitation of rating systems. It is easy to hijack, but the percentage of users who would is statistically small. How such a platform would be monitored and moderated is up for debate – and it does lead to: who controls the data and how is it presented? My main aspiration is that the data is used to develop the structures, values, and focus of the Education system.
