Using competency assessment tools to enhance student learning

Assessing students’ clinical competencies using a competency assessment tool (CAT) is an essential part of nursing education. Assessors need to use CATs effectively to facilitate learning and students need to embrace this learning experience. This article identifies how assessment can be improved through CATs. It uses the example of the aseptic non-touch technique CAT, which reinforces safe clinical practice.

Citation: Duffy L (2019) Using competency assessment tools to enhance student learning. Nursing Times [online]; 115: 50-53.

Author: Lisa Duffy is senior lecturer, infection prevention and control at Swansea University.

Introduction

Learning clinical skills is an essential part of student nurses’ training so that they become competent, confident, safe and effective practitioners (Engström et al, 2017). Assessing students’ clinical competencies requires a method by which the assessor can capture the multidimensional nature of these competencies (Tommasini et al, 2017). This article explores the factors that influence the assessment of students’ clinical competencies and discusses the use of competency assessment tools (CATs).

Importance of assessment

The importance of assessment and feedback, both in clinical practice and higher education, is acknowledged (Kogan et al, 2011; Nursing and Midwifery Council, 2010). Given that assessment has significant implications for developing student nurses, it is paramount that nurse assessors have the skills to formulate and implement assessment methods effectively (Hughes and Quinn, 2013).

Student nurses have reported feeling dissatisfied, anxious, stressed and overloaded about assessment and feedback (McSwiggan and Campbell, 2017). Assessors should provide accurate assessment and feedback that take into consideration students’ perceptions and are relevant to the curriculum and practice, but remain objective and unbiased (Hughes and Quinn, 2013).

Aseptic non-touch technique

The aseptic non-touch technique (ANTT) was created by Stephen Rowley in the mid-1990s and provides a standardised approach to safe aseptic technique in invasive procedures (Association for Safe Aseptic Practice, 2015). Rowley et al (2010a) demonstrated that using the ANTT led to safer clinical practice. Used in more than 25 countries worldwide, the ANTT has become a de facto international standard.

The Welsh government has mandated the use of the ANTT (Welsh Government, 2015). This national initiative originated from a drive to provide health professionals and educational institutions with evidence-based, peer-reviewed guidance (Rowley and Clare, 2009). It supports the translation of robust theory for aseptic technique into practice, provides a clear and standardised aseptic technique, and promotes the use of the ANTT CAT.

ANTT CAT

The ANTT CAT provides standardised criteria for performing invasive procedures competently and safely (ASAP, 2015). It can be used, for example, to assess venepuncture, cannulation, urinary catheterisation, simple and complex wound care, or intravenous drug administration. Unusually for a practical skills assessment, it includes theoretical questions as well as sections relating to direct observation. Assessment with the ANTT CAT can be undertaken in both clinical and simulated environments (Rowley and Clare, 2011). Outcomes can be used to identify gaps in knowledge and practice, and to address individual learning needs (McLeod et al, 2011). Box 1 describes the ANTT CAT.

The aim of using the ANTT CAT for the formative assessment of student nurses in a simulated clinical environment in an academic setting is twofold: to reinforce safe clinical practice and to identify gaps in knowledge and practice so that individual learning needs can be addressed.

Box 1. The ANTT CAT

The ANTT CAT consists of a list of questions (answered by ‘yes’ or ‘no’) that the assessor uses to assess the student’s competency in undertaking the aseptic non-touch technique. The questions are grouped into three areas: preparation (12 questions), procedure (eight questions) and decontamination (three questions).

The ANTT CAT also features theory and practice questions relating to ANTT principles, which the assessor asks the student either before or during the procedure.

Benefits and risks of CATs

CATs are recognised as an effective method of assessing student nurses performing wound care: Hengameh et al (2015) demonstrated that using a CAT improved students’ wound dressing skills compared with the traditional method of subjective observation. Using an evidence-based assessment tool allows assessors to synthesise their observations into an overall score (Kogan et al, 2011). Using a recognised assessment tool ensures that, for each performance criterion, the assessor gives an accurate description of the knowledge or motor skill required from the student (Rowley and Clare, 2009). Fig 1 shows a procedural assessment undertaken in a simulated environment using a CAT.

Fig 1. Procedural assessment undertaken in a simulated environment using a CAT

However, even when using CATs, there is a potential risk of assessor subjectivity, which can negatively affect the reliability of the assessment (Hughes and Quinn, 2013). Factors such as institutional culture, how assessors come across and how they interact with students can affect both students’ performance during the assessment and the way they are rated (Hughes and Quinn, 2013; Kogan et al, 2011).

Individual student characteristics may also influence scoring; for example, students who give a positive impression of themselves may be rated higher than those who do not (Hughes and Quinn, 2013; Schartel, 2012). Assessors may unconsciously rate students higher than deserved; this ‘generosity error’ is associated with assessors’ tendency to adopt a caring attitude towards students (Hughes and Quinn, 2013).

Reducing variability and bias

To reduce variability in rating and feedback, CATs must be completed in the same way by all assessors (Kogan et al, 2011). Training needs to be undertaken across all institutional sites to ensure that all assessors apply the same standards and follow the same principles (Watson et al, 2014).

Assessors need to be proficient in the skills they are assessing and trained to a suitable standard in the use of the assessment tool (Cooper and Thomson, 2012) – although training may not totally remove the effect of biases (Watson et al, 2014).

Using an assessment tool that contains a list of essential criteria and a space to tick whether the required behaviour or practice has been completed can reduce subjectivity. However, a tick-box layout alone limits the ability to record observations about the individual’s performance; this can be remedied by making notes separately.

Assessors may struggle to fit their observations into rigid criteria (Kogan et al, 2011). This raises the question of whether assessment should rely more on assessors’ comments than on fixed criteria (Kogan et al, 2011).

Assessing knowledge

To assess a student’s knowledge underpinning a clinical procedure, a pre-procedure interview gives the assessor an opportunity to ask the student direct questions (Watson et al, 2014). However, this may disadvantage certain students by heightening their anxiety and negatively affecting their ability to perform the procedure (Hughes and Quinn, 2013; Lanksbear and Nicklin, 2000).

The ANTT CAT is supported by a pre-educational strategy whereby students acquire the relevant theory before their assessment; for example, via e-learning. Using online self-assessment – for example, posting questions and receiving delayed answers on the university’s website – could help students overcome their anxieties about direct questioning. Students’ use of e-learning potentially allows assessors to spend more time observing practice.

However, not all students use digital technologies as a way of improving their learning, and students with low self-efficacy are less likely to engage in self-assessment (McSwiggan and Campbell, 2017). Ultimately, time and resources would be required to monitor and support students’ access to, and engagement with, electronic resources.

Time constraints

CATs allow a procedure to be assessed by direct observation over a short time period (Hengameh et al, 2015; Jelovsek et al, 2013). There is a consensus that the assessment should generally last less than 20 minutes (Cooper and Thomson, 2012; Rowley et al, 2010b). However, the ANTT CAT does not dictate the length of the assessment, only the correct procedure that must be followed.

Jackson et al (2012) found that assessments using the ANTT CAT required significant time and additional staffing resources. McLeod et al (2011) indicated that some students thought there was insufficient time (or a lack of protected time) for effective feedback. According to Cooper and Thomson (2012), students indicated that written feedback was often completed long after the assessment, and assessors found it difficult to allocate time to undertake direct observation of procedural skills (DOPS).

Feedback should be brief, given immediately after the assessment and delivered in a quiet and private place. If delayed, it loses its impact, validity and relevance (Cooper and Thomson, 2012; Schartel, 2012). Recording the DOPS electronically could save time and ensure feedback is provided immediately (Cooper and Thomson, 2012).

Delivering feedback

Delivering feedback is a complex process influenced by the assessor’s approach, confidence, comfort and accuracy of judgement, as well as the student’s reaction (Kogan et al, 2011). It is recognised that assessors can feel uncomfortable giving negative feedback (Cooper and Thomson, 2012; Plakht et al, 2013).

Using a step-by-step method (Fig 2) encourages students to actively participate in feedback. Assessors who feel uncomfortable giving negative feedback will be more comfortable first asking students what they think they have done well and could do better, before giving recommendations on areas for improvement.

This method encourages student self-assessment and may therefore be more appropriate than the traditional ‘feedback sandwich’ (Schartel, 2012), in which the assessor gives a positive comment first, then a negative comment and finally another positive comment, with no opportunity for the student to participate.

Fig 2. Step-by-step feedback method

Roles and responsibilities

The role of assessor in healthcare education can be compared to the role of a coach in athletics (Schartel, 2012). The athlete does the hard work but does not perform to their full potential without motivation, encouragement, feedback and direction from the coach (Schartel, 2012). Both coach and athlete share a common goal and work together to achieve it.

Nurse assessors need to ensure assessment and feedback happen in a supportive and non-judgemental manner, with a balance of positive and negative feedback to promote motivation while curbing inappropriate practice (Plakht et al, 2013). Feedback should focus on knowledge, behaviour and skills, not on the student as a person (Schartel, 2012).

Students may feel overloaded and stressed about assessments (Brown, 2015) and some may feel uncomfortable being observed (Kogan et al, 2011). These concerns need to be addressed by the assessor and student together (Ramm et al, 2015; Cobb et al, 2013).

The assessor’s role includes providing effective feedback, but it is the student’s responsibility to seek and value that feedback (Schartel, 2012). Students should be made aware of their responsibilities in relation to assessments (Box 2).

Box 2. Student responsibilities

Patient models and settings

Students acting as patients are acceptable substitutes for real patients for the purposes of assessment (Ramm et al, 2015). Live models provide a more realistic experience than manikins, by allowing verbal and nonverbal interactions and affording the authenticity of human skin (Coffey et al, 2016).

Ramm et al (2015) reported that students who performed a dressing change were reassured by the presence of peers playing the part of patients. However, assessors should be mindful of students practising clinical skills on friends, as some may feel embarrassed, anxious and less comfortable with peers. Embarrassment has been associated with models being touched and exposing unclothed body parts. Allowing students to choose their models, gaining informed consent from all and making participation as a model voluntary will all contribute to maintaining ethical practice (Grace et al, 2017).

Using live, unpredictable clinical settings potentially generates variability in the assessment and can therefore affect its validity and reliability (Jelovsek et al, 2013). Actors should be briefed, and the simulated environment should be made to resemble a clinical area while remaining standardised (Pope et al, 2014).

Conclusion

Various factors influence the assessment of student nurses’ clinical competencies by nurse assessors. CATs provide a more effective and less subjective method of assessing clinical competency than traditional methods based on observation alone. The ANTT CAT, for example, reinforces safe clinical practice by allowing effective assessment of aseptic technique.

Assessors should take students’ perceptions into account and foster their participation in the assessment process. Feedback is key: without effective feedback, correct practice is not reinforced, errors are not highlighted and improvements are not made (Schartel, 2012).

Key points

Association for Safe Aseptic Practice (2015) Aseptic Non-touch Technique. The ANTT Clinical Practice Framework for All Invasive Clinical Procedures from Surgery to Community Care. London: The ASAP.

Brown C (2015) Assessment overload? Medical Teacher; 37: 3, 301.

Cobb KA et al (2013) The educational impact of assessment: a comparison of DOPS and MCQs. Medical Teacher; 35: 11, 1598-1607.

Coffey F et al (2016) Simulated patients versus manikins in acute-care scenarios. Clinical Teacher; 13: 4, 257-261.

Cooper J, Thomson A (2012) Why’s and how’s of directly observed procedures – tips for trainers and trainees. Paediatrics and Child Health; 22: 10, 448-450.

Engström M et al (2017) Nursing students’ perceptions of using the clinical education assessment tool AssCE and their overall perceptions of the clinical learning environment – A cross-sectional correlational study. Nurse Education Today; 51, 63-67.

Grace S et al (2017) Ethical experiential learning in medical, nursing and allied health education: a narrative review. Nurse Education Today; 51, 23-33.

Hengameh H et al (2015) The effect of applying direct observation of procedural skills (DOPS) on nursing students’ clinical skills: a randomized clinical trial. Global Journal of Health Science; 7, 17-21.

Hughes S, Quinn F (2013) Quinn’s Principles and Practice of Nurse Education, 6th edn. Australia: Cengage Learning.

Jackson D et al (2012) The sociocultural contribution to learning: why did my students fail to learn Aseptic Non-Touch Technique? Multidimensional factors involved in medical students’ failure to learn this skill. Medical Teacher; 34: 12, 800-807.

Jelovsek JE et al (2013) Tools for the direct observation and assessment of psychomotor skills in medical trainees: a systematic review. Medical Education; 47: 7, 650-673.

Kogan JR et al (2011) Opening the black box of clinical skills assessment via observation: a conceptual model. Medical Education; 45: 10, 1048-1060.

Lanksbear A, Nicklin P (2000) Methods of assessment. In: Nicklin P, Kenworthy N (eds) Teaching and Assessment in Nursing Practice: An Experiential Approach. Edinburgh: Baillière Tindall.

McLeod RA et al (2011) The use of Direct Observation of Procedural Skills (DOPS) assessment tool in the clinical setting – the perceptions of students. International Journal of Clinical Skills; 5: 2, 77-82.

McSwiggan LC, Campbell M (2017) Can podcasts for assessment guidance and feedback promote self-efficacy among undergraduate nursing students? A qualitative study. Nurse Education Today; 49, 115-121.

Newstead S (2003) The purposes of assessment. Psychology Learning and Teaching; 3: 2, 97-101.

Pendleton D et al (1984) The Consultation: An Approach to Learning and Teaching. Oxford: Oxford University Press.

Plakht Y et al (2013) The association of positive feedback with clinical performance, self-evaluation and practice contribution of nursing students. Nurse Education Today; 33: 10, 1264-1268.

Pope S et al (2014) Using visualization in simulation for infection control. Clinical Simulation in Nursing; 10: 12, 598-604.

Public Health Wales (2017) A Strategy to Standardise Aseptic Technique with ANTT® Across Wales. Cardiff: Public Health Wales.

Ramm D et al (2015) Learning clinical skills in the simulation suite: the lived experiences of student nurses involved in peer teaching and peer assessment. Nurse Education Today; 35: 6, 823-827.

Rowley S, Clare S (2011) Aseptic Non Touch Technique (ANTT): reducing healthcare associated infections (HCAI) by standardising aseptic technique with ANTT across large clinical workforces. American Journal of Infection Control; 39: 5, E90.

Rowley S et al (2010a) High impact actions: fighting infection. Nursing Management; 17: 6, 14-19.

Rowley S et al (2010b) ANTT v2: an updated practice framework for aseptic technique. British Journal of Nursing; 19 (Suppl 1), S5-S11.

Rowley S, Clare S (2009) Improving standards of aseptic practice through an ANTT trust-wide implementation process: a matter of prioritisation and care. Journal of Infection Prevention; 10 (Suppl 1), 18-22.

Schartel SA (2012) Giving feedback – An integral part of education. Best Practice and Research Clinical Anaesthesiology; 26: 1, 77-87.

Tommasini C et al (2017) Competence evaluation processes for nursing students abroad: findings from an international case study. Nurse Education Today; 51, 41-47.

Watson MJ et al (2014) Psychometric evaluation of a direct observation of procedural skills assessment tool for ultrasound-guided regional anaesthesia. Anaesthesia; 69: 6, 604-612.