Course 4, Week 2: Making Judgements and Ensuring Quality

The second week is an extension of week 1, but slightly harder to place into my current practice. In situations like this, I continue to check in with FutureLearn regularly to gather the perspectives, opinions and current practice of peers in academic teaching. I use this to help inform my future teaching.

The second week begins with this statement:

Now, I have issues with this belief. The view that the teacher or academic should never be questioned is absurd. Each person has their own belief system, their own mindset and way of thinking, analysing and criticising. Reaching teacher or academic status does not change this; the individual does not become all-knowing, with every statement purely factual. But this is very much the stance taken in education: what the assessor marks is often seen as final, as anything else would call the assessor's abilities into question. I discussed issues with subjectivity further on the FutureLearn platform.

Moderation, Calibration and Standardisation

To my relief, the issue of interpretation in assessing is one of the concerns presented in our online learning.

Several methods are provided for overcoming disparity in practice; the first is moderation. FutureLearn defines the difference between pre- and post-moderation clearly:

Calibration is introduced next. This is a process of peer review, similar to social moderation; it involves meeting with colleagues on a course to discuss and negotiate small samples of work. The aim is to ensure the marking team is consistent in its interpretation of what does and does not deserve merit. This was standard practice in 11-19 teaching, especially in assessing coursework or mock assessments. It was a great CPD opportunity as well, working with colleagues on a shared subject to improve assessment and marking knowledge.

On FutureLearn, a fantastic document was also available with more information on calibration, which improved my understanding of the practice in HE compared to FE and informed my comment on the section. The document can be found here:

The CU method falls closely in line with the multi-level model of consensus moderation, which provides reassurance to all stakeholders that multiple practices have been used to reach a final professional judgement. FutureLearn provided the multi-level model example for CU here:

Quality Assurance and Enhancement

Key places where this will take place are presented in the screenshot from the FutureLearn video here:

Peer Review Assessment

To bring the course and the module to a close, students of the M08 FutureLearn module were asked to complete a Peer Assessment Activity. We were each required to share our plans for redesigning our assessment and feedback practices, to gain peer feedback to be used for the final reflection assignment.

The full peer review criteria can be accessed here: 

Due to the limitations of FutureLearn, I presented my assignment using this Coventry.Domains platform, which can be viewed here using the access password: 

My peer assessor was provided the link and the password to access the submission. 

As an early submitter, I was lucky to be in a position to receive feedback from multiple peers, as you can see below.

From this feedback, I am confident that my redesign is aligned with the intended outcomes of the Personal Tutoring service. I am happy with the actions taken so far to make our assessment-feedback processes more inclusive, authentic and innovative, as the comments support. This feedback will reinforce my continued plans for progression, which I hope to discuss in the final assignment.

I had also actively engaged in the peer-review activity, providing feedback on an Engineering module, which was deemed useful.

End of course reflection

These last two course weeks have been harder to apply to my own practice, but extremely insightful. This week, I have found the calibration exercises the most interesting, as I can clearly see how they can be applied in practice. I was also interested to see how moderation takes place at CU and found the application of the multi-level model of consensus moderation useful to my understanding of the Coventry Group’s processes. If an opportunity became available, I would like to see how the model is specifically applied at Coventry University London.

Course 4, Week 1: Making Judgements and Ensuring Quality

Course 4, the last of the module! Following this course, I should be prepared to complete my assignment draft – how exciting!

I only wish we had been able to access Course 4 earlier. The near one-month break between study periods was very long, and I would have much preferred the option to complete the final course earlier, to allow more time to focus on the final assignment. The gap in learning also makes it a little harder to get back into the study routine. I hope the M09 breaks between each online course are reduced to one week!

Course 4 of M09 is titled ‘Making Judgements and Ensuring Quality’ and is, of course, focussed on our practice in ensuring quality within assessment and feedback. This course tackles subjectivities within assessing and providing feedback, and offers methods to overcome these.

Approaches to marking

Week 1 began with discussions surrounding judgement of assessments based on experience. As in any position, the longer an individual works in a certain role, the more complacency towards procedures and processes sets in (I vaguely remember from my Business GCSE a very long time ago that this is a reason why factory lines are not used). When applying this to HE assessment processes, we are presented with:

Reflecting on our own experience in UK HE, we were asked to take part in a poll – the results as of 21st January are shown here:

One interesting comment, which I agree with, states:

Our approach to marking methods may be influenced by our perceptions, experience and hoped-for outcomes. Orrell (2008) is presented here, with research on how our judgements as educators can be affected when assessing:

Our judgement can then be categorised into two different approaches, which are referred to within the assessment and feedback literature: criterion-referenced and norm-referenced (summary image below from FutureLearn). The approaches were of interest to me, as grade inflation in UK HE has been prominent in the media – I began questioning whether a certain approach would be used as part of grade inflation.

Criterion-referenced assessment is seen as the fairest of approaches throughout the comments, but there is some understanding for norm-referenced assessment. 

Support for the former follows as part of the M08 FutureLearn journey:

Using marking criteria to make an informed judgement

Assessment criteria and rubrics are used throughout UK HEIs, and we were able to gain an insight into how to develop each for our own taught subjects (here I applied this to Sociology rather than the Personal Tutoring Service).

The examples can be found here: 

Example 1:

Example 2:

Example 3: 

The students of M08 were asked to consider how to deal with subjectivity within marking criteria. For me, the module so far had begun to tackle this through our own students developing assessment literacy. The answer was also alluded to in the materials provided, which can be accessed here:

Finally, we were asked to work with our CLS to discuss five key elements we would consider if invited to act as a moderator of an assessment task. The task was discussed by email and, as of 21st January 2019, the following was available on Padlet from the group (second bullet point submitted by me):

Closing thoughts

Week 1 of Course 4 was interesting and I enjoyed engaging with perspectives on subjectivity within our own judgement, which can impact assessing. I found the additional materials this week, such as the example assessment criteria, extremely useful for understanding how to transfer my current practice and teaching experience to HE assessment. From my background in Sociology, the subjective-objective debate arises often, so it was great to apply similar thoughts to assessment. I look forward to Week 2.

Returning from Winter Break

Back again to blogging in January 2019 after a short winter break over the Christmas period, but so much has happened!

I have decided to continue my professional development in 2019 by undertaking a Prince2 Foundation and Practitioner qualification this coming February. After working in a project management role for a couple of years, I have decided to take the leap and match my experience up with management study. 
I’m very nervous to start, as I will be self-funding the course and using a great deal of my annual leave to attend. I am confident that I will successfully complete the course and gain the qualification, but it’s a high cost for a role I am also comfortable in.

I will also be moving this year, hopefully by Spring (fingers crossed, touch wood and all things related to luck). This means that the last module may be a little more difficult, as the distance from my new home to Coventry will be a great deal longer. I am, however, still enjoying the course and determined to complete it with great success.

Other than the new course and the move, nothing else has changed for this new year. No resolutions, nor new fads. I’m just looking forward to embracing my studies and the upcoming change!

Course 3, Week 2: Meaningful Feedback

Week 2 was just as interesting as Week 1 – I’ll be honest, I completed both within the same week. There was a clear connection between the two, with week 2 looking into engaging students with feedback, whereas week 1 focused more on the practice of giving feedback.

As always, I came in with an opinion:

(On reflection, I wish I had the time to provide my comments based on more research instead of the content provided and my own experience).

Student disengagement with feedback

As I mentioned in my previous post, feedback is only as good as the facilitator – students need to know how to use feedback and this needs to be facilitated; we should not expect students to be able to analyse feedback and take appropriate actions independently if they have not been shown how to do so.

Price et al. (2011) identified that disengagement with feedback, such as through a lack of clarity about its purpose, can occur at different stages, as illustrated here:

Student engagement with feedback

The course introduced Millar’s (2010) methods for engaging students with feedback. Millar takes the view that educators need to work on students’ ‘readiness’ to engage with feedback, ‘a willingness and ability to pay attention to, value, and act upon feedback’. Two main suggestions are provided:

  1. ‘Explore and develop an understanding about the purpose and value of feedback with students’ – as it says on the tin, efforts need to be made for students to see the benefits of acting upon feedback.
  2. ‘Develop good relationships between staff and students to support feedback engagement’ – I feel this goes very much unsaid in teaching. If you are a good teacher and have a good relationship with your students, they are more likely to respond to you. This also applies to the feedback the educator provides. It seems simple, but if an individual is not qualified in teaching, has little interest in teaching and has taken on the role to fulfil other goals, or is overworked, their ability to build rapport and engage suffers.

I feel Personal Tutoring can step in to support with feedback – the role of a Personal Tutor is to develop a rapport with students and to be in a position to support them, using feedback to create learning plans, etc.

Our approach is very similar to one of the examples offered:

Sustainable and dialogic feedback

To ensure feedback takes a dialogic approach, it is recommended to plan feedback into assessment maps. Advance planning, as documented in case studies on the course, reduced the number of assessments, improved assessment outcomes, reduced marking time, etc. These outcomes are very course dependent, but planning feedback opportunities would improve time management around teacher marking, as well as building in opportunities for developing peer- and self-assessment skills.

A second recommendation to improve continuous engagement with feedback was to use technology. Examples included videos, audio recordings and social media.

Closing thoughts for Course 3

As with last week, this week was interesting and engaging. The content was easier to digest, as the journey through the course was well planned. There are a few things that I would like to consider and put forward for the growth of personal tutoring (a growing team to offer specialist and personalised services, including feedback engagement). These actions will take time, as they attempt to close gaps between professional and academic services, and will depend on upcoming developments in other departments.

Course 3, Week 1: Meaningful Feedback

Course 3 is an introduction to the importance of feedback as an active process, rather than a passive process.

I really enjoyed course 3, reminding me of my previous teaching role in which I had responsibility for 11-19 quality, assessments and feedback. I was able to become more involved in these two weeks, with a clearer link to current practice.

What is feedback?

Five definitions were provided, from 1983 to 2015, presenting the journey of feedback within teaching practice.

In a way, feedback has always been seen as something an educator gives with an anticipated response of progression from the receiver. What we actually see in education is that feedback can often be a passive action. This could be due to poor-quality feedback, a lack of understanding of feedback, no actions or support on how to progress, etc.

Wiggins (2012) provides Seven Keys to Effective Feedback:

  1. Goal-referenced
  2. Tangible and Transparent
  3. Actionable
  4. User-Friendly
  5. Timely
  6. Ongoing
  7. Consistent

The feedback culture

From a student perspective, we are introduced to how feedback is not only important for progression, but also for simple recognition of work completed. Assignments are time-consuming, involving hard work and the application of learning – the final piece should therefore be recognised as evidence of hard work. From an educator’s perspective, there is always the immediate workload of providing effective comments. As classes grow in HE, the time taken to complete marking and feedback activities grows, and so do the pressures of managing this. However, the immediate work often overshadows the long-term benefits and time saved later in the term/stage/course.

One method often used in 11-19, which I have found evidence of in HE, is ‘comment buckets’. These can be employed in different ways, and I feel the more transparent approach mentioned in the screenshot of FE marking below is the better of the two. Using comment buckets to support a copy/paste activity for online feedback is not transparent, and the comments are often unclear to receivers when standing alone from the assessment criteria.

To change the way feedback is perceived by educators and students, and to promote a continuous dialogue around progression and development, the way feedback is given and received needs to shift.

By promoting the new paradigm, where feedback is an ongoing dialogue, students and educators become engaged in a process which creates independent student learners, who can reflect on their own work and set actions for development. The end goal is to reach the higher level of self-regulation –

Feedback design

Above, I provided Wiggins’ (2012) Seven Keys to Effective Feedback. In addition, the course offers the below as standards for providing feedback:

Self and Peer Assessment

When I was first introduced to self and peer assessment when working in a school prior to my first teaching qualification, self/peer-assessment was described as a ‘teacher’s cheat’ to marking. The way it was used was poor and more as a box-ticking exercise to show books had been marked in the correct coloured pens.

What I learnt early on was that assessment and feedback are only as effective as the facilitator. I began sitting with students at the above school, working on how to give and use feedback to improve. When training to teach, assessment and feedback became one of the most important areas for me to work on, knowing that the work I put in early would pay dividends. Self and peer assessment became a learning activity in itself and students became experts in marking, even daring to question my own marking against their knowledge and abilities in assessing – that in my mind was a win: students were able to self and peer assess using assessment criteria with confidence.

This is no different in HE. Assessment and feedback are only as effective as the facilitator. Self and peer assessment is only possible when students are trained in how to assess, engage with the assessment process and clearly see the benefits of their engagement. My own thoughts were then reinforced by the online course content:

Analysing our own feedback

If feedback is a reflective tool for students, so it should be for educators. I was impressed with the course’s inclusion of assessing our own marking and feedback practices.

The Feedback Profiling Tool developed by … was offered as a place to begin reviewing feedback practices. The tool is separated into 5 categories:

  1. Praise and progress
  2. Critique
  3. Advice
  4. Queries and questions
  5. Unclassified (does not fit into above categories)

Looking at the full document offered (Feedback_profiling_tool), these criteria are not much different from the WWW, EBI, Action and DIRT exercises used in 11-18. However, this tool is to be used to weigh our feedback: how much is given to praise, critique and advice. It ensures we are balancing our feedback practice and meeting the principles of effective/good feedback.

Finishing thoughts

This was a really interesting week.

On reflection, I feel I can see myself reacting to the content of the course according to my previous experience and knowledge, as well as the way the information is presented. A realisation for me has been how meticulously online learning needs to be planned, as there is little face-to-face contact through which to read reactions and understanding. For the most part, this is done well in this PGCAPHE, and I feel our feedback has been used to improve the layout and activities of the online course.

Thinking on assessment and feedback, I feel again that much of what was presented is a standard of 11-19 teaching, but has been transferred and moulded to HE. I feel confident with my own abilities in feedback and my abilities to facilitate reflection through in-class activities. Unfortunately, any change for Personal Tutoring is minimal as much would be in relation to our optional workshops. I would like to use what I have learnt this week to improve verbal feedback and for updating Learning Plans, using the profiling tool as a structure.


Course 2, Week 2: Assessment as Learning

Back again learning more about assessment methods.

Last week, I opened my post stating that I’ve always referred to ‘Assessment as Learning’ as ‘Assessment for Learning’ (AfL). The second week offered definitions of this term, as well as of AoL – Assessment of Learning.

The second week of the course began to move away from looking at assessment as a whole, towards the rationales for assessment types and their use. Again, this was very hard to relate to current practice at Coventry University London, but insightful all the same.

The opening discussion, titled ‘Has assessment really changed since exams were first introduced?’, documented the claim that ‘0% of a typical university degree depends on unseen time-constrained written examinations, and tutor-marked essays and/or reports’ (Race, 2001). This itself was not surprising, nor dissimilar from British 11-19 education. The problem I see with such assessments is labelling them ‘coursework’, appearing to advertise towards BTEC students, when in fact the essay task is far from a stereotypical coursework assessment.

Examining your assessment methods

The week introduces the idea that educators settle into certain assessment types, which become routine.

It is therefore clear that certain disciplines fall into certain assessment practices, creating a repetitive assessment standard.

These assessment types grouped by discipline were communicated by other educators as the best method to assess knowledge of the area, rather than for simplicity. This standpoint was articulated here:

Although educators are sure of their reasoning for assessment types, as discussed in previous weeks, the reasoning is not always clear to the student, and this should be accounted for. The type of assessment should be communicated, with links to previous, current and future knowledge/experience. One interesting comment was provided by a peer, relaying the issue of setting expectations too high for assessment types:

Principles of assessment design

Key principles recommended for assessment design include:

  • Validity: the assessment should assess what it is intended to assess – i.e. a practical assessment of practical skills.
  • Reliability: the assessment should provide an accurate and precise measure of learning.
  • Fairness: the assessment should not disadvantage any learner – it should be inclusive for all, with or without RAPs.
  • Educational impact: relating to Van der Vleuten’s (1996) ‘Education Effect’, whereby the assessment should engage a student in learning. It should ‘stimulate the student to invest time and effort’, ‘allow students to have an insight into how an assessment will evidence achieving of course learning outcomes’ and ‘allow students to see how an assessment task will be instrumental in potential future careers’.
  • Authenticity: Gulikers, Bastiaens, and Kirschner (2004: 69) define this as assessment which requires students to ‘use the same competencies, or combinations of knowledge, skills, and attitudes that they need to apply in the criterion situation in professional life’. In addition, Villarroel et al. (2018) describe this as ‘integrating all activities and discussions which is happening in the classroom with what needed to be applied in the real world problem solving situations.’
  • Inclusivity: establishing a level playing field within the assessment, combining all of the above.


By accounting for the above principles, it is hoped to decrease plagiarism and deter contract cheating. To further prevent plagiarism, it is recommended that educators personalise assessment tasks, create assessments that are part of a journey (e.g. formative assessments such as draft submissions and bibliographies) and encourage reflection.

Even taking into account the learning and recommendations of the week, I do not believe plagiarism can be truly eradicated. As expected, the course focuses on planning and designing assessment. What also needs to be taken into account is wellbeing – I feel more should be done to promote self-help and access to support services, as unaddressed pressures may lead students to plagiarism.


Concluding thoughts

This was a very long and somewhat repetitive week. I feel like no new knowledge was necessarily gained, but rather more instructions were given than any one educator could really begin working on as a whole. Personally, if I were teaching in Social Sciences currently, I would use this knowledge to begin a flowchart of assessment design combining courses 1 and 2. If I find the time, I may still create this for future use, to clearly map all the recommendations into manageable chunks.

I understand most of what is provided on FutureLearn is not necessarily appropriate for all subjects, nor is the information compulsory to act upon. However, the first two courses have presented so much information in a way that is not entirely coherent. I have personally found these two weeks harder to navigate, when I thought, with previous 11-19 Quality and Assessment responsibility and additional training, this would be a breeze. Perhaps my struggles are solely due to the fact that I am not teaching my subject and am instead trying to relate this to my own Personal Tutoring practice. I hope that in a few weeks, when preparing for the assignment, I will have a clearer understanding.

Course 2, Week 1: Assessment as Learning

The second course focused on ‘Assessment as Learning’, or as I remember it from 11-19 teaching, ‘Assessment for Learning’. These two weeks were again insightful, but posed a major barrier to application, as I currently work in a professional service rather than in academic teaching. You will see my struggles, and perhaps notice a reduction in my normal online discussion engagement, in this and my next blog post, due to being unable to connect current practice to the online learning experience.

The main focus of week 1 was to introduce a course-based approach and navigating how this could be designed into our practice.

Introducing the course-based approach to assessment design

The course-based (also known as ‘programme-focused’) approach refers to a method used to tackle the modular breakdown of British and European HE. What many find when attending university is that each module is standalone, so there is no clear follow-through between modules or terms. In many cases this not only means a disjointed understanding of content, but also a disjointed approach to assessment, which can become repetitive and non-inclusive. Furthermore, feedback given in modular teaching may not be as effective, with no clear advice on how to develop for a following module. The course-based approach is therefore used to create a journey through course assessments, as defined here:

I originally took the above to mean end-of-year assessments, which, as I learned through the module, was not quite right. I hope to show this as I continue through the blog post.

The course-based approach in fact brings modules together to create a more unified approach, using assessment to facilitate this. Coventry University has strongly supported this integrated approach, creating a clearer student journey of learning and development, promoting the use of formative feedback to improve attainment, and favouring assessment types that prepare students for employment.

Benefits of the course-based approach are supplied here: 

Coventry University have further developed their own CU Group Assessment Strategy according to the 10 Principles of Assessment of the course-based approach.

It appears that this is having a positive impact, according to the student feedback presented in the course. The greatest benefit to students was the development of skills through workshops and formative assessments, which led to the end summative piece. In addition, staff supported the use of assessments which promoted employability. From my perspective, workshop preparation for assessments would be a great area for Personal Tutors to become involved in. At Coventry University London, there is a dedicated team for Academic and Careers development, but there are still plenty of uncovered areas that students need in order to manage their assignments, as well as to navigate feedback. An approach that joins professional services to departments in this course-based approach would be extremely beneficial.

Course-based assessment design in practice

The first piece of advice when designing a course-based assessment approach was as expected, if following on from M01 – Constructive Alignment!

Of course, referring to constructive alignment alone is not sufficient planning for a course-based approach. It should be a collaborative effort, including HoDs, CDs and MLs.

Unfortunately, with Personal Tutoring there aren’t any grand redesigns of the system that would fall under implementing a course-based approach; however, I still thought about my own ideas for introducing students to a new course, mapping assessments to course progression:

My own proposal was not far off the example given for preparing a course-based approach: to map course assessments over an entire course, including formative assessments. Maps created when designing this approach would also take into account how feedback can be used, as well as the use and frequency of high-stakes assessments. The example provided on the course can be found here: Course assessment map.

The course further provides three examples of integrating this approach into practice:

  1. Horizontal integrative assessment across one or more stages of the course: assessment of knowledge and learning from modules at a certain stage in a course (i.e. Level 4, Level 5 or Level 6), such as a portfolio.

  2. Vertical integrative assessment across one or more stages of the course: assessment of knowledge and learning of modules at various course stages (i.e. Level 4 and Level 5), to bridge periods of study such as a development portfolio.

  3. Capstone assessment: ‘Holdsworth et al. (2009) summarise the key features of a capstone assessment to be:

    • Free-standing and authentic or ‘real-life’
    • Capturing learning in and out of class
    • Involving skills development that can enhance readiness for work or further graduate studies’


This week I have been much less involved in the discussions, as I feel I have less to contribute. In A Level and GCSE teaching, there has to be an integrative assessment approach when summative assessments take place at the end of study. Formative feedback is key to development and progression, with plenty of opportunity to combine assessment types with skills development.

This week I have learnt more of the Coventry University HE approach to assessment, but unfortunately have no opportunity to tie this back into my own practice. I think I’d appreciate this knowledge more when taking on a HE academic teaching role, compared to pastoral/study development role.

Course 1, Week 2: The Assessment Experience

Catching up with documenting my Week 2 experience of Course 1, but I am through Week 1 of Course 2 now, so very nearly caught up. Looking at each course release date, it feels like time is going so quickly – we’ll soon need to hand in submissions for the term!

Week 2 of The Assessment Experience course was titled ‘Becoming Assessment Literate’ and I feel at the end I did have a much greater understanding of promoting assessment literacy within a course.

This week opened with some comments I hear often from staff and I could easily imagine the perspective of the students. My response was (as always) with reference to my former life in FE…


The journey to assessment literacy

We began the journey to assessment literacy by looking at the image below of the stages and steps in designing assessment. At first, I became quite focused on the lack of formative feedback in the journey, but by taking a step back and reading the key properly, I saw that there were plenty of formative assessment opportunities within each green stage. Formative feedback was noted in the comments as one of the most important stages documented, and this was well received by the M08 group.

After considering the assessment journey, learners were introduced to Evans Assessment Toolkit (EAT), which can help educators review their own practice in developing students in 3 key assessment areas: literacy (AL), feedback (AF) and design (AD).  Of this, AL was extracted as an area for us to consider in relation to our own practice.


Now, moving along this path of assessment literacy, it is easy to relate everything back to written criteria and guidance for the exam. It was nice to have a reminder from the LEAD Assessment session that we can promote assessment literacy in multiple ways. Suggestions from the comments included: videoing the assessment guidance/criteria, audio podcasts, and inviting past students in for f2f meetings with students undertaking the same or similar assignments. Personally, I would fully support these actions, and if open access to materials were available, Personal Tutors could do more to break down information for feedback reflections in individual meetings. Under the next heading, I will refer to a guidance and feedback activity I have led in FE, which I feel ties in well here for HE.

Building on this, we were then asked to empathise with our students’ need to ‘Understand the task’ and ‘Understand the criteria’. It was recommended to use structured pre-assessment activities to help students with this process: deconstructing the assessment task and terminology, then introducing examples of previous assessments to rank.

– – – – – – – –

One method to fully involve students in assessments would be to allow them to design the assessment criteria themselves. The video presented on FutureLearn certainly supported this activity; however, many concerns were raised by my peers studying M09 – one thread, starting from this comment, summarised the issues well.

Your practice in supporting your students over the journey

Formative assessment is very important in preparing learners for their summative assessment, but it is only truly effective when feedback is present and/or readily available as part of the formative assessment. One method recommended on FutureLearn, and used often in 11-19 teaching, is peer assessment. Here I quickly attached a previous activity I led often (the activity referred to earlier in this blog post):

Moving into the importance of feedback, the need for feedforward and feedback opportunities was introduced (I highlighted the impact of these in last week’s blog post). To my delight, Personal Tutors were mentioned in the video presented, covering the use of such activities for developing the skills needed for summative assessments.



The course then led us into our own formative assessment and feedback opportunity – how fitting!


Final comments

This week, I have enjoyed getting involved with discussion threads and combining experiences of assessment to move forwards with my own professional development.

The EAT was the most useful part of the week, as I feel it would improve our own methods of self-assessment (in essence, self-assessing our plans for assessment). I would like to see how I could use this more in the future as a PT or academic.

The assessment was useful; however, it has strengthened my concerns about the final assignment piece, which is a reflection on an assessment. I will need to discuss this more in person at the next f2f session in Coventry.

Course 1, Week 1: The Assessment Experience

We began the first week of the new module focussing on the assessment experience and becoming assessment literate.

I thought I would be entering a whole new world, as my experience as an 11-19 teacher was focused on assessments set by exam boards. However, I did not actually feel out of my depth: as a HoD I was always preparing assessments, so I really could relate. My main struggle has been trying to pull this learning into the Personal Tutoring service, which does not have its own summative assessments. You’ll see more about how I tackle this in my blog posts for the rest of the module – fingers crossed it all works out!

This week was a real reflection on my own experiences of assessment and what I know of our tutees’ experiences. After reading the comments of my peers, I am deeply grateful for my assessment experiences at CCCU, and I have plenty of thoughts on how to take my teaching knowledge of assessment into HE, if I were an academic.

Getting started

When considering the assessment experience, we were first asked about how we actually learn the rules of assessment, referring to ‘osmosis’. Osmosis was explained well here:

Of course, osmosis is far from ideal for learning how to understand and begin to tackle any assignment at HE level. However, I do not fault trial and error – reflecting and learning from our mistakes is an important part of being human, and of creating the right frame of mind towards our work. My initial feeling was that new students should be given just as many opportunities to develop the skills that support their ability to understand and complete assignments as are given to content.

Here’s a little comment I gave in response to a trial-and-error point made later in this week’s content:

Assessment experience

As I have said, I was very lucky to have such a great experience in my UG that I did not struggle terribly with assessment literacy. Reflection, however, was still a very important process, and using feedback was heavily promoted. I’ve summarised my UG experience here:

As we continued to review current student experience, it became clear that skills are just as important as content. The fears the students presented in the video about their first assessment closely related to how they were unable to bring so many new aspects of their course together into an assignment that met the criteria. This is where engaging with the Personal Tutoring service is so very important to us!

Doing the right thing or doing it right?

Assessment literacy is a new term to me and it really does make sense! Here is a not-too-long definition:


It’s important for both educators and students to be assessment literate for a few reasons: for student success in assignments, for student confidence, and for fair and consistent marking by assessors.

So what is the best method to prepare students to become assessment literate? This quote makes me think a little of the hidden curriculum…

Now, it is still hard to relate summative assessment back to Personal Tutoring, but we do assess!


The assessment matrix was introduced and was very interesting:

Q1, The Traditional Approach: where students come to know assessment standards over time through trial-and-error, the osmosis standard.

Q2, The Explicit Approach: here, the goal is to clarify assessment standards using explicit assessment briefs, criteria and rubrics, but students often do not understand the briefs, or interpret them differently from the assessor.

Q3, The Social Constructivist Approach: in this approach, the students understand and know the assessment standard and have opportunities to use, apply and create their own meanings.

Q4, Community of Practice Approach: here, deep learning occurs as a community, where people come together and develop shared understanding through social engagement. The students take an active role in the design of the assessment and the feedback – fantastic!

The final approach appears to be the most engaging and interesting, but I don’t think the Personal Tutoring service is quite there in creating a community approach, as our main assessment is formative, through workshops and learning plans, which for the most part are personalised per student.

After learning more about assessment literacy, we were able to revisit a video of students’ experiences of assessment and their preferred methods. What struck me again was how some assessment guidelines are really lacking, and I was shocked that benchmarking assignments, which was a standard practice in 11-19, wasn’t occurring at HE – a real learning curve!

Finally, we took a look at why we assess. Now, this is not always clear, and it is very subjective according to each individual involved. This was clearly demonstrated when referring to the stakeholders in assessment:

That’s a wrap for the week!

To finish off, I’ve pulled together a few summary points:

  • From being a qualified 11-19 teacher, I had a set standard in my head for summative assessment that I thought would still be at least somewhat present in HE, but that hasn’t been the case. Summative assessments in 11-19 always had an exam board in mind; formative assessments had a lot more flexibility, using peer assessors. My own experience of assessment was very different from my BSc to my MA, but I have been able to experience both strengths and weaknesses to pull forward into this module.
  • Knowing more about how HE educators set their assessments and how assessors mark has been eye-opening, and I am really considering how to create an improved, integrated approach for Personal Tutoring to support each department in assessments.
  • I’m not quite sure how to move Personal Tutoring from Q3 to Q4 of the assessment matrix, but creating a ‘cultivated’ community of practice approach is an area to consider with my team.

Overall, it has been an insightful first week back into the PGCAPHE and I’m happy to be here 🙂

I’m back!

So, after what feels like a very long break from the PGCAPHE, I’m back to studying and back to blogging.

I haven’t added any posts to the blog since the last module for a few reasons:

  1. I went to Wales and climbed Mount Snowdon, then battled my fears to get back down it again!
  2. My focuses have been elsewhere with project managing enrolment and social events.
  3. I have been taking part in other exciting training opportunities such as LEAD – both as learning and leading sessions.
  4. My team and I have been working hard on other projects such as our online presence.
  5. I’ve been implementing my learning into a new fresh term of Personal Tutoring.
  6. I’ve been preparing to move from London to Kent!

It’s been a very busy but very rewarding few months. I’m all organised and prepared to be back and excited to learn more on ‘Developing and Enhancing Assessment and Feedback in Higher Education’.