Sunday, January 3, 2021

If you record it, will they watch?

I’ve been a lecture capture advocate for years, but that was before virtual instruction and the purposeful push to build more flexible online classes. While my YouTube analytics from past semesters told me that students do take advantage of recorded content, it certainly isn’t every student who does. I was curious whether my upper-division genetics class redesign this past fall, in which live attendance via Zoom was not required, influenced the use of lecture capture videos.

Fortunately, Fresno State added Panopto to our portfolio of digital tools just before fall term began. The video analytics reports are great when integrated into our LMS (Canvas), because Panopto logs all video accesses by individual user! So, finally, I have a dataset for some deep investigation.


Ultimately, I find that no simple metric of video use (like number of minutes watched, or view completion) correlates with student performance. However, not surprisingly, student attendance at synchronous class meetings statistically correlates with earning a top quartile total score. Student feedback suggests that the flexibility of recorded class video provides a benefit to those with conflicting obligations. Further, the vast majority of students reported relying heavily on lecture capture video, which should motivate instructors to adopt or continue the practice of providing such videos for their students. 

About the focal course

Sixty-nine students were enrolled in Biology 102, an upper-division genetics course required for our biology majors.


Attendance


Attendance started strong, with only a few students missing the first day of class. Given the COVID-19 pandemic and all that was going on in mid-August 2020, I was impressed that this many students successfully navigated Zoom and Canvas and were able to log on. However, attendance quickly plummeted to an average of around 50%, which is much lower than typical for my prior face-to-face semesters of the same course.



Course attendance

I was pleased to learn that my course redesign helped students in the pandemic era, and that the reduced synchronous attendance was often a choice made by students who had other priorities:


“I had work every day during the time that the synchronous classes were live.”
“I had to do school with my younger brothers”
“worked 2 jobs this semester”
“I've had to work much more than usual”

“I attended when I could but the lecture being smack in the middle of the day made it hard for me to attend every time with also working 2 jobs.”


Video usage


  • The course involved 45 instructional sessions of 50 minutes, which should yield 2,250 minutes of lecture capture, but I had a bit more, because I also recorded my discussions with students after class was dismissed each day (2,364 minutes total).
  • 25,902 minutes of lecture capture video were watched

In general, this is a good sign, I think. On average, that’s 375 minutes of video watched per student. But no student is average, so let’s delve into anonymized individual data.
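For the curious, here is a minimal sketch of how such per-student totals could be computed from a Panopto viewing export. The file name and column names (user_id, session_id, minutes_viewed) are my placeholders, not Panopto’s actual export schema.

    import pandas as pd

    # One row per video access; file and column names are placeholders
    views = pd.read_csv("panopto_views.csv")

    # Total minutes of lecture capture watched by each student
    minutes_per_student = views.groupby("user_id")["minutes_viewed"].sum()

    print("Total minutes watched:", minutes_per_student.sum())
    print("Average per student:", minutes_per_student.mean())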


Student Data


I’ll take this opportunity to point out that many of the following graphs have their x-axes reversed (with small values on the right and large values on the left, near the origin).


Across the semester, I had two students who each watched over 2,500 minutes of video. However, it is important to note that even though Panopto logged the videos as playing for that amount of time, that says nothing about whether the students were actually engaged with the material. Regardless, it was discouraging to see that a third of my students viewed fewer than 100 minutes, with seven never taking advantage of those course materials.



With that information in hand, I was then curious how those minutes of viewing by individual students were distributed: did they watch a couple segments of videos over and over again, desperately trying to grasp some concept, or were they sitting down and watching entire videos at a time? Although I don’t yet have the qualitative data to address this detail, I do know that only two students averaged 60+ minutes of viewing per individual video. To me, this pattern might suggest students who didn’t attend class and were generally watching each class recording from start to finish.


I also don’t know (yet) how “minutes watched” was influenced by students watching lecture videos at enhanced speeds (like 2x) or skipping around through the videos. One student did comment,

“Would rather just watch on my own time instead of having to sit there 50 mins, I could watch maybe 30 mins of it on 1.5x speed or skip through parts I didn’t need.”

Critically, the majority of students watched between 5 and 45 minutes on average. This suggests to me that I should further consider how I produce and curate videos: perhaps some short ones for students who don’t tend to watch lots of video at a time, as well as the full-length lecture capture for students who want that experience.



Then I began to wonder whether and how often each student was making use of the videos. This histogram shows that fewer than a quarter of students even took a peek at two-thirds or more of the class recordings, while over half of the students viewed any part of only one-third or fewer of the videos.
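As a rough sketch, and again using the placeholder export from above, the fraction of the class recordings that each student opened at least once could be tallied and plotted like this:

    import pandas as pd
    import matplotlib.pyplot as plt

    views = pd.read_csv("panopto_views.csv")
    TOTAL_RECORDINGS = 45  # one recording per instructional session

    # Count the distinct recordings each student opened, as a fraction of all recordings
    fraction_viewed = views.groupby("user_id")["session_id"].nunique() / TOTAL_RECORDINGS

    # Histogram of how many students fall into each usage band
    fraction_viewed.plot.hist(bins=10)
    plt.xlabel("Fraction of class recordings opened at least once")
    plt.ylabel("Number of students")
    plt.show()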



Attendance, redux


Returning to attendance, I was curious how a remote, virtual experience influenced individual attendance patterns. The histogram below shows that about a third of my students were regular attenders, present on Zoom for between 80 and 100% of our class meetings. Many of the rest attended sporadically. Survey responses from students suggest some of the reasons why, and show how they leveraged the flexible course design I created. I had told students that synchronous attendance would not be mandatory, and that they would be able to review lecture capture videos afterwards when they were not able to (or chose not to) attend. Students were prompted to describe “aspects of the online course design that were beneficial in facilitating the ability to complete the course asynchronously:”

“I did not attend the live sessions and I am doing well on my own. The asynchronous aspect of this course greatly helped me have time to deal with attending school while handling daily life.”


“I worked more Asynchronously towards the end of the semester, and I found that the class recordings were adequate for me to accomplish the homework.”


“The way the class set-up in my opinion was perfect. Many students like myself have had to pick up extra shifts at work just to make ends meet. Having the class set up as non-mandatory attendance allows students to work and attend class at the same time. Students can rewatch the class session they missed at their convenience and still do well in class.”


“The asynchronous aspect helped me out, especially when I had to put focus on other classes, then I could come right back and not be left behind.”


“I felt very comfortable going throughout this course asynchronously. I was able to work more hours and support myself through this.”

In sum, for the population I serve, the students who chose to respond most frequently indicated that work schedules and prioritizing effort in other courses were the main causes of non-attendance.





Attendance, video use, and student performance


Finally, let’s explore the interplay between synchronous attendance and asynchronous video use, and grades. Perhaps some of the students who didn’t watch much video were the ones who attended regularly? Maybe the students who watched a lot of video were ones who rarely or never attended synchronous sessions? Hopefully those who attended and/or watched more video earned better grades in class?


In the following scatterplots, the percent of the 45 synchronous sessions attended by each student (each point) is plotted on the y-axis. The students have been divided into approximate score quartiles: the black squares represent students earning the bottom 25% of scores, the yellow diamonds the next quartile, the green circles the third, and the blue triangles the students earning the top 25% of scores.


What I’m looking for when analyzing these plots is whether one or both axes discriminates among the four groups of students: do the quartile datapoints (same color and shape) cluster together? If they do, then the variable on the axis that creates such clusters is correlated with student performance.
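For illustration, here is a sketch of how such a plot could be built, assuming a hypothetical per-student summary table with placeholder columns pct_attended, minutes_watched, and score_quartile; the markers and colors mirror the quartile scheme described above.

    import pandas as pd
    import matplotlib.pyplot as plt

    students = pd.read_csv("student_summary.csv")  # hypothetical per-student table

    # quartile label -> (marker, color), matching the plot legend in the text
    styles = {
        "bottom": ("s", "black"),
        "second": ("D", "gold"),
        "third": ("o", "green"),
        "top": ("^", "blue"),
    }

    for quartile, (marker, color) in styles.items():
        sub = students[students["score_quartile"] == quartile]
        plt.scatter(sub["minutes_watched"], sub["pct_attended"],
                    marker=marker, color=color, label=quartile)

    plt.xlabel("Total minutes of lecture capture video watched")
    plt.ylabel("Percent of synchronous sessions attended")
    plt.legend(title="Score quartile")
    plt.show()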



Comparing attendance to total minutes of lecture capture video watched, I see that more video watched (right side) does not correlate with frequency of synchronous attendance (top). Some students watched lots of minutes, attended every class, and were in the top quartile of grades (e.g. upper right blue triangle).


I should reiterate the caveat that lots of video watched (or class attendance, for that matter) doesn’t necessarily mean the student was really getting anything useful from the experiences.


More upper quartile students attended regularly and watched no video (upper left blue triangles). Some high-performing students attended only around half of the synchronous meetings and still earned upper quartile grades. Conversely, one student who attended almost every synchronous class earned a bottom quartile score (upper left black square).


Overall, the blue triangles tend to be in the upper half of the plot, indicating that students who performed well were generally ones who attended class at least half of the time. Likewise, the black squares are mostly in the lower half. Using one-tailed unpaired t-tests with a Bonferroni-corrected alpha (significance threshold) of 0.017, I find that the top quartile of students attended significantly more frequently than the next quartile, while the remaining quartile groups do not differ significantly in attendance. Thus, in my class, attendance at the optional synchronous course meetings is predictive of student performance.
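For reference, here is a minimal sketch of that comparison using scipy, with the placeholder columns from the table above (this is a reconstruction of the approach, not the exact analysis script):

    import pandas as pd
    from scipy import stats

    students = pd.read_csv("student_summary.csv")
    alpha = 0.05 / 3  # ~0.017 after Bonferroni correction for three adjacent-quartile tests

    top = students.loc[students["score_quartile"] == "top", "pct_attended"]
    third = students.loc[students["score_quartile"] == "third", "pct_attended"]

    # One-tailed, unpaired: does the top quartile attend more often than the next quartile?
    result = stats.ttest_ind(top, third, alternative="greater")
    print(result.pvalue, result.pvalue < alpha)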


The unsurprising take-away for instructors and students is that, at least the way I designed my class, encouraging synchronous student attendance is important. Why?


This is a question that deserves more attention in the future. I see at least two potential answers that are not mutually exclusive:

It could be that I’ve designed an engaging and informative synchronous experience that helps students learn the course material.


However, it might be that the students who were able and/or willing to attend synchronous classes are those who are already strong students or have some other advantage (e.g. not a first-generation college student, not working a job during the semester, or not a URM…).

In other words, attendance, as a quantitative descriptor of students, is possibly acting as a proxy for some underlying factor.


View Completion


Perhaps students who watch most of every video they start are the ones who do not attend class regularly. This was my intention with building an attendance-optional class: that students who didn’t want (or weren’t able) to attend a synchronous online class would still be able to access all of the content.


Here I introduce a new metric: average percent view completion.


In Panopto, every time a student begins to play a video, that access records the number of minutes played and then calculates what percent of the total video length was watched. So, if I have a 50 minute lecture capture video and a student plays ten minutes, that is a 20% view completion. If they then play 25 of the 50 minutes of a second class video, that’s a 50% view completion. What I’ve done here is not necessarily robust: I calculated the average view completion across all video accesses. In my example, that would be (20% + 50%)/2 = 35% average view completion.
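As a sketch, that per-access calculation and the (admittedly crude) per-student average might look like this, again with placeholder column names:

    import pandas as pd

    views = pd.read_csv("panopto_views.csv")

    # Each access contributes (minutes played) / (total video length), as a percent
    views["pct_completion"] = 100 * views["minutes_viewed"] / views["video_length"]

    # Average percent view completion per student, across all of their accesses
    avg_completion = views.groupby("user_id")["pct_completion"].mean()

    # Worked example from the text: a 20% view and a 50% view average to 35%
    print((20 + 50) / 2)  # 35.0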


A major problem with using the average as the metric is that it is not sensitive to the number of videos accessed: if a student watched all of one video, and that was it, they’d register as having a 100% average view completion, while a student who watched half of every lecture capture video would show only a 50% average view completion.


Also, it is worth noting that 100% view completion should not necessarily be the goal. One student commented,

“I was able to watch during work occasionally.”

If students are only able to watch in brief increments, then I’d rather have that than nothing at all!


When comparing class attendance (y-axis) to average percent view completion (x-axis) below, the quartiles are not distinguished along the x-axis. I had expected to see student data fall along a diagonal between the upper left (high attendance; low video use) and the lower right (low attendance; high video use). However, there are high-performing students who didn’t watch much video at all (blue triangles toward the left) and some students in the bottom quartile of grades who appear to have watched most of each video they started.



This plot is deceptive, though, because the two black squares at about 80% view completion only watched two videos, and those were videos from the start of the semester. Their engagement with course materials plummeted after that.


How to Proceed


What useful insights can be gleaned from these data? First, total “minutes viewed” was a huge number at 25,902 minutes. Even if that doesn’t ensure student eyes were watching the video (or listening to it) for each of those minutes, it is still enough to motivate me to continue providing lecture capture video for students to review.


Because synchronous attendance has a clear (but never perfect) correlation with student score, I’ll continue encouraging attendance as much as possible. Students realize the importance of attendance, commenting:

“Always attend class because it helps students to understand the content better than viewing the recorded video themselves.”

and

“I attended almost every single live lecture, but it was good to know that it’s not the end of the world if I had to miss a meeting since it is asynchronous.”

(THIS! is exactly the goal)


However, I still won’t make attendance mandatory for online courses - at least not in the current circumstance, where students aren’t opting into a virtual experience; it is being forced on them by COVID-19 distancing restrictions.


Also, I am not surprised to find that student completion of videos is low, and that could be for the reasons mentioned above (e.g. jumping around inside each video, searching for specific content). I have two related solutions that support students here:

  1. Provide additional videos with more targeted content, so that students can watch short, content-specific micro-lectures instead of searching through lecture capture videos to find that content.
  2. Likewise, I am providing a table of contents for each video, so students can select the appropriate link that jumps them to the timepoint in the video where a topic begins to be discussed.

In conclusion, and unsurprisingly, some students heavily leverage lecture capture videos and others do not. Synchronous attendance correlates more strongly with earning a top 25% score than any measure of lecture capture video watching does. Because of the minimal effort required to record and post lecture capture videos, I will certainly continue this practice, with some additional video curation.


As a final note, some students commented on the importance of providing lecture capture videos promptly, and I strongly suggest doing the same. I routinely posted the video within two hours of the end of class, and I received affirming comments like,

“He would record and post his lectures in a timely matter every class period”

So provide lecture capture materials immediately, or at least on a consistent schedule, so that students come to know what to expect from you. But the earlier, the better: the goal is to offer students flexibility, and the longer the instructor takes to provide lecture capture video, the less flexibility students have to access it.


Final Thoughts


Ultimately, student populations are extremely diverse, and providing as many ways as feasible for students to access course content will benefit your classes. Here are some summary data from my students about the importance of lecture capture videos:


Did you use lecture material recorded during class to study for exams?

  • Yes: 52
  • No: 4

How much did watching recorded lectures contribute to your success in the course?

  • None: 0
  • A little: 2
  • Somewhat: 9
  • A lot: 21
  • Essential: 24


As we’ve seen, using video viewing data to evaluate and predict student performance is riddled with caveats. Interpreting such quantitative metrics by themselves will rarely be sufficient to understand how to adjust teaching practices to improve the student experience. However, at least some students take advantage of these resources, and that by itself is enough to warrant the continued production of lecture capture videos.