Back again learning more about assessment methods.
Last week, I opened my post stating that I’ve always referred to ‘Assessment as Learning’ as ‘Assessment for Learning’ (AfL). The second week offered definitions of this term, as well as of AoL – Assessment of Learning.
The second week of the course began to move away from looking at assessment as a whole, moving towards looking at rationales for assessment types and assessment use. Again, this was very hard to relate to current practice at Coventry University London, but insightful all the same.
The opening discussion, titled ‘Has assessment really changed since exams were first introduced?’, documented the claim that ‘0% of a typical university degree depends on unseen time-constrained written examinations, and tutor-marked essays and/or reports’ (Race, 2001). This was neither surprising nor dissimilar from British 11–19 education. The problem I see with such assessments is labelling them ‘coursework’, which appears to appeal to BTEC students, when in fact the essay task is far from a stereotypical coursework assessment.
Examining your assessment methods
The course introduces the idea that educators settle into certain assessment types, which then become routine. It is therefore clear that certain disciplines fall into certain assessment practices, creating a repetitive assessment standard.
These assessment types grouped by discipline were communicated by other educators as the best method to assess knowledge of the area, rather than for simplicity. This standpoint was articulated here:
Although educators are sure of their reasoning for assessment types, as discussed in previous weeks, that reasoning is not always clear to the student, and this should be accounted for. The type of assessment should be communicated, with links to previous knowledge and experience, both current and future. One interesting comment was provided by a peer, relaying the issue of setting expectations too high for assessment types:
Principles of assessment design
Key principles recommended for assessment design include:
- Validity: the assessment should assess what it is intended to assess – e.g. a practical assessment for practical skills.
- Reliability: the assessment should provide an accurate and precise measure of learning.
- Fairness: the assessment should not disadvantage any learner – it should be inclusive for all, with or without RAPs.
- Educational impact: relating to Van der Vleuten’s (1996) ‘Education Effect’, whereby the assessment should engage a student in learning. It should ‘stimulate the student to invest time and effort’, ‘allow students to have an insight into how an assessment will evidence achievement of course learning outcomes’ and ‘allow students to see how an assessment task will be instrumental in potential future careers’.
- Authenticity: Gulikers, Bastiaens and Kirschner (2004: 69) define this as assessment which requires students to ‘use the same competencies, or combinations of knowledge, skills, and attitudes that they need to apply in the criterion situation in professional life’. In addition, Villarroel et al. (2018) describe this as ‘integrating all activities and discussions which is happening in the classroom with what needed to be applied in the real world problem solving situations.’
- Inclusivity: to establish a level playing field within the assessment, combining all of the above.
By accounting for the above principles, it is hoped to reduce plagiarism and deter contract cheating. To further prevent plagiarism, it is recommended that educators personalise assessment tasks, create assessments that form part of a journey (i.e. formative assessments – draft submissions, bibliographies) and encourage reflection.
Even taking into account the learning and recommendations of the week, I do not believe plagiarism can be truly eradicated. As expected, the course focuses on planning and designing assessment. What also needs to be taken into account is wellbeing – I feel more should be done to promote self-help and access to support services, as unaddressed wellbeing issues may lead students to plagiarism.
This was a very long and somewhat repetitive week. I feel no new knowledge was necessarily gained; rather, more instructions were given than any one educator could realistically begin working on as a whole. Personally, if I were teaching in Social Sciences currently, I would use this knowledge to begin a flowchart of assessment design combining courses 1 and 2. If I find the time, I may still create this for future use, to clearly map all the recommendations into manageable chunks.
I understand that most of what is provided on FutureLearn is not necessarily appropriate for all subjects, nor is the information compulsory to act upon. However, the first two courses have presented so much information in a way that is not entirely coherent. I have personally found these two weeks harder to navigate, when I thought that, with previous 11–19 Quality and Assessment responsibility and additional training, this would be a breeze. Perhaps my struggles are solely due to the fact that I am not teaching my subject and am instead trying to relate this to my own Personal Tutoring practice. I hope that in a few weeks, when preparing for the assignment, I will have a clearer understanding.