Data Analysis

Having collected all the data, I recorded it on an Excel spreadsheet, including the additional observations from my field notes.

I collected 63 questionnaires in total, though not all were fully completed: some questions were left blank. I grouped the results by which workshop the participant undertook. The three workshops were categorised as Control (demonstration only), Video (video introduction), and PDF (illustrated guide introduction).

                                 Control   Video   PDF
No. of questionnaires returned        22      23    18

The first three questions were Yes/No questions. The data from these establish the participants' background knowledge and their learning outcomes from the workshop, clarifying that any learning outcome resulted from participation in the workshop rather than from prior teaching. N/A indicates no response.

Q1: Were you aware of how to sew a chain stitch belt loop prior to this demonstration?

Q2: Have you ever tried to sew a chain stitch belt loop before?

Q3: Were you able to complete a sample of the chain stitch belt loop today?

        Q1       Q2       Q3
No      82.5%    92.0%    1.6%
Yes     14.3%    4.8%     95.2%
N/A     3.2%     3.2%     3.2%

Most students had never come across this stitch technique, and only 4.8% had tried it before. None of those who had tried it had previously been able to complete the stitch; this is an anecdotal observation, as I gathered it by asking participants verbally. After the demonstration, 95.2% reported completing a sample and therefore meeting the learning outcome. This figure might have been higher still, as 3.2% did not complete this section of the questionnaire, although they did fill in the later sections.

The next three questions used an interval scale of 1-10 for recording answers. These questions related to the participants' perceptions of the demonstrations.

Q4: How clear was the demonstration?

Scale of 1 = Muddy to 10 = Clear.

The mean was 9.7, so almost all students felt the demonstration was very clear. This average was high across all three workshops, which established that each method of delivery was of good quality. There were no responses below 8, so even with a wide variation in skill levels and differences in learning styles, participants felt the demonstration was clear. Any differences in learning outcomes would therefore not be a result of differences in the quality of the teaching resources.

Q5: After watching the demonstration, how confident are you that you understand how to sew a chain stitch belt loop?

Scale of 1 = Confused to 10 = Confident.

The mean was 9.3, so almost all students felt they had understood the process well. These results, although also high, showed a greater range of responses, with the lowest being a 5. This shows that although participants felt the demonstration was clear, it did not necessarily follow that they felt they could carry out the procedure themselves. Combined with the 95.2% of participants who reported completing the stitch, this suggests that understanding is possibly being reached during the practice stage.

Q5      All     Control   Video   PDF
Mean    9.30    9.55      8.78    9.56

I also broke these results down by the three workshop variations and found that the average confidence of participants presented with the video preview was lower than the overall average. This surprised me, as I had generally found students quite comfortable with video formats. It could be attributed to the perceived complexity of the skill, whereas once students were able to do it themselves, it made more sense.

Q6: How much additional instruction did you need to complete a sample of the stitch?

Scale of 1 = None to 10 = A lot.

The graph above plots the results, with the red dotted line representing the mean. Looking at the individual results, I could see that they were very widely spread and see-sawed from one response to the next. I had expected this, based on my understanding of the varied ability levels across the student population.

This is the question that I designed specifically to measure the effectiveness of my intervention: I hypothesised that if the intervention produced any improvement in learning, the difference would be seen here. I calculated the mean and the standard deviation for the data from each of the three versions of the workshop.

Q6                   Control   Video   PDF
Mean                 5.15      5.04    4.72
Standard deviation   2.76      2.51    2.13
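This grouping-and-summary step can be sketched in a few lines of Python. The scores below are hypothetical (the real responses live in my spreadsheet), and I assume the sample standard deviation, matching Excel's STDEV.S:

```python
from statistics import mean, stdev

# Hypothetical Q6 responses grouped by workshop variant -- illustrative
# only; the actual values come from the Excel spreadsheet.
q6_by_workshop = {
    "Control": [8, 3, 6, 2, 7, 5],
    "Video":   [7, 4, 5, 3, 6, 5],
    "PDF":     [6, 4, 5, 3, 5, 4],
}

for workshop, scores in q6_by_workshop.items():
    # stdev() is the sample standard deviation (Excel's STDEV.S);
    # pstdev() would give the population version instead.
    print(f"{workshop}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```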

The figures show a downward trend towards needing less assistance for those taking part in the Video and PDF intervention workshops. The standard deviation also falls, showing that responses cluster more closely around the mean, which I interpreted as the intervention method of teaching being more effective for students of all backgrounds. This narrowing of the spread in individual students' learning outcomes happens in spite of variations in their ability, language skills and possible learning differences. I had not expected this, but am pleased, because it correlates with inclusive teaching practice.

The final two questions were qualitative, and I used thematic analysis to gain some insights into how the students experienced the teaching interventions.

Q7: Was there anything particular about the teaching that helped you to learn the skill?

Q8: Was there anything that you found made it difficult for you to learn the skill?

Being a technician, and coming from an embodied teaching practice, of course I used an embodied coding method (Kara, 2022)! I highlighted key words and phrases that occurred often, or were novel or reflective, and categorised them into themes.
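The "occurred often" side of this coding has a rough digital analogue: a simple keyword tally. Kara's embodied method is manual, so this is not what I actually did, just a sketch of the frequency idea using made-up answers:

```python
from collections import Counter
import re

# Hypothetical free-text answers to Q7 -- illustrative only.
answers = [
    "Clear instruction and a slow demonstration",
    "Steps were done in chunks, clear and slow",
    "Slow pace and step by step teaching",
]

# Tally word frequencies (lower-cased, punctuation stripped) to surface
# candidate theme keywords such as "slow" or "clear".
words = Counter(
    w for a in answers for w in re.findall(r"[a-z]+", a.lower())
)
print(words.most_common(5))
```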

Thematic Analysis of Survey Data

Quite a few respondents answered Q7, but almost all replied 'No' to Q8. It was useful to refer back to my mapping of how students learn.

Factors to consider when teaching Garment Construction skills

I was surprised by comments which showed how self-aware students were of their own learning. The themes that emerged generally fell into one of four categories: Delivery of Teaching, Environment, Teacher, and Supporting Material.

The most frequently occurring quotes mentioned clarity of instruction, slow pace, and chunked, step-by-step teaching. These seemed mainly to be responses to the in-person demonstration I delivered, which all groups received; they did not distinguish between the in-person teaching and the supporting visual material.

“Steps were done in chunks to make it manageable. Clear instruction and a slow demonstration allows for easy understanding.”

The few comments describing what students found difficult, in response to Q8, were all in this category.

“Maybe was done too fast.”

“When I’m shown too many steps at once.”

These comments reflected students' awareness of how they learn and of what they feel most impacts their learning of technical skills. They drew attention to the pacing, and almost all comments were positive about the amount of information presented in each step. There were many comments about the clarity of the demonstration. I considered whether students were displaying response bias, where their perception of what is socially acceptable affects the way they respond (Bogner & Landrock, 2016). The earlier questions mentioning clarity of demonstration would have been foremost in participants' minds when responding to this question, which could incline them to reiterate how clear they felt the demonstration was. However, the data from Q4, where participants rated the clarity of the demonstration very highly, reinforces the reliability of these responses.

The second category pertained to the environment of the workshop. There were quite a few comments about in-person teaching.

“Was easier to see in person”

This is perhaps a response to the Covid-19 pandemic and remote learning. It was notable that students felt live demonstration was important to their learning, and also that live demonstration worked better for them in smaller groups. This is a finding that could be useful for wider university policy when planning teaching activities and allocating room resources.

“Feels like you have to do it to understand it so maybe a big group would be hard?”

“There wasn’t a lot of people around. It made it easier to learn”

This comment could relate to the size of the group, but also to environmental factors such as noise, movement and other distractions caused by larger numbers of people using the space. Other environmental comments concerned the atmosphere in the classroom.

“Comfortable environment, easy to ask when mistakes were made”

Being able to ask questions and interact with the teacher was something students reported as important, which also relates back to in-person teaching. This comment, however, seemed to stem less from the physical space than from the intangible leadership with which a teacher creates an open, welcoming atmosphere in the classroom. It overlaps into the third category, the Teacher.

The teacher was mentioned as a positive element affecting students’ learning, both for care for the students, and for the effective delivery of instructions.

“Very patient”

“Tutor was extremely helpful when asked questions”

“Excellent instruction”

A number of participants mentioned particular characteristics of the tutor as something they felt helped their learning. Uncomfortable as I am with praise, it is important to recognise that being an expert in one's field does not necessarily make one a good teacher, and that the way in which that knowledge is communicated is also important to students' learning outcomes. Qualities such as patience and care may have nothing to do with the demonstration of the skill itself, but they support the formation of an environment conducive to successful learning.

The last category pertained to the application of the visual supporting material. A number of comments point directly to the ideas I set out when designing the interventions: that the visual preview would help to focus attention on seeing, before the process is explained in detail.

“Watching the visual demonstration and talking through each step.”

“Video then in person increased confidence. Speaking and visuals helped focus.”

“The step by step photos in addition to the physical demonstration.”

These comments support my interpretation of the quantitative results.

The Third Pillar

So far, the data has positively supported my research question. However, I had a third pillar to my data collection; my field observation notes (Link). As I have written about in an additional blog post (Link), this data has opened up some alternative lines of inquiry.

My field notes were quick snapshots of what each workshop was like. I did not set out with specific criteria for what to record; I simply wrote down observations of things I thought might be important to my teaching practice. They were, however, informed by my awareness of the issues I regularly encounter when teaching technical workshops, so they were not random. An entry from one of the first workshops I delivered highlighted the noise and chaos in the classroom.

‘Class very distracted as students were finishing up their jacket, which many had struggled with. I showed the video on the large teaching screen. Not sure students were concentrating… Noisy and a lot of background moving around as students packing up and leaving. Students more attentive when I was physically demonstrating and asked for help when needed.’

(Field Observations)

These observations pointed to other factors which could be affecting my results. But it was this note that caused the most concern.

‘As I repeat the demonstration, I am varying the way I explain the process, incorporating adjustments in response to my observations of areas that students are finding problematic. I am drawing attention to where to apply the tension on the thread. I am stressing the importance of holding on to the exit thread and how to vary the size of the loop to make it more manageable. Could the iterative development of my own skill as a teacher be affecting the responses?’

(field observations)

When I designed the workshop, I chose the chain stitch technique to teach because it looked difficult, although the process was short, and I had not taught this stitch in a workshop setting before. This observation therefore opened up a different interpretation of the data. Going back to the Q6 results, the steady reduction in help reported by students from the first workshops to the last might be more representative of my improvement as a teacher than of the intervention. I could not validate this without carrying out further workshops in the control format, to see whether the results would continue to fall as I became more experienced at teaching this specific skill, or remain as high as the original control numbers.

This uncertainty led me to re-evaluate my data analysis methodology, and I applied some further criteria to the grouping of the data, to determine whether there were other conclusions to be drawn from it. Adding the data from my field observations to the spreadsheet, I re-grouped the data sets according to these new categories and recalculated the means for Q6.

  1. Group Size by number of participants (Small 1-3, Medium 4-7 or Large 8+)
  2. Environmental Conditions (Organised, calm and quiet vs. Chaotic and noisy)
  3. Physical Space (Open plan space vs. Enclosed room)
Q6 by Group Size   Small   Medium   Large
Mean               5.00    5.47     5.44

Q6 by Environment   Calm   Chaotic
Mean                4.93   5.12

Q6 by Physical Space   Open Plan   Enclosed
Mean                   5.16        4.60
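Mechanically, this re-grouping amounts to tagging each Q6 response with the field-note categories and averaging within each tag. A minimal sketch with made-up scores and tags (the real ones come from the spreadsheet combined with my observation notes):

```python
from statistics import mean

# Each Q6 response tagged with the field-note categories -- the values
# here are hypothetical, purely to illustrate the grouping step.
responses = [
    {"q6": 7, "size": "Small",  "env": "Calm",    "space": "Enclosed"},
    {"q6": 5, "size": "Medium", "env": "Chaotic", "space": "Open Plan"},
    {"q6": 6, "size": "Large",  "env": "Chaotic", "space": "Open Plan"},
    {"q6": 3, "size": "Small",  "env": "Calm",    "space": "Enclosed"},
]

def mean_by(records, key):
    """Average Q6 score for each value of the given category."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r["q6"])
    return {label: round(mean(scores), 2) for label, scores in groups.items()}

for category in ("size", "env", "space"):
    print(category, mean_by(responses, category))
```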

These calculations show the same amount of improvement across the Group Size and Physical Space variables as between the Control and Intervention workshops. The difference between calm and chaotic environments is not large enough to be statistically significant.

However, these numbers do not necessarily negate the outcomes from the Control/Intervention workshops. Looking closer at the Physical Space data, all the later workshops, the ones with the intervention, were delivered in enclosed rooms, while the Control workshops were more often delivered in open plan spaces. Group sizes were more evenly spread between the Control and Intervention workshops, so factors conditional on smaller group sizes could have as much effect on learning outcomes as teaching with the intervention. This substantiates some of the conclusions of Giacomino's (2020) review of Peyton's 4-step teaching approach, where smaller student learner groups increased positive learning outcomes.

References

Bogner, K. and Landrock, U. (2016) Response biases in standardised surveys. GESIS Survey Guidelines. Mannheim, Germany: GESIS – Leibniz Institute for the Social Sciences. doi: 10.15465/gesis-sg_en_016

Coffield, F. et al. (2004) Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.

Giacomino, K. et al. (2020) 'The effectiveness of the Peyton's 4-step teaching approach on skill acquisition of procedures in health professions education: A systematic review and meta-analysis with integrated meta-regression', PeerJ, 8(10129), pp. 1-26. Available at: http://doi.org/10.7717/peerj.10129

Kara, H. (2022) Embodied Data Analysis. Available at: https://www.youtube.com/watch?v=k79AWH59JpQ/ (Accessed: 7 January 2024)
