Sunday, June 21, 2020

ExplainEverything meets Zoom

I just discovered a way to share my screen in Zoom so that I can present from ExplainEverything, whose whiteboard functions are far more useful, user-friendly, and powerful than Zoom's. Here's what I'll be doing for my synchronous class meetings this coming term! This walkthrough is written for Apple/iOS use, though the same process may work for other operating systems.

First Impressions

Zoom, allow me to introduce you to ExplainEverything. It is a fantastic app that I've known for a long time. Its main purpose is as a digital whiteboard. Yes, Zoom, you have a sharable whiteboard, too, and I've written about best practices in how to use that tool while in a meeting with students. However, your whiteboard is not your main purpose and not where your power lies.

Many teachers leverage ExplainEverything as a presentation and annotation tool by loading existing files (photos, PowerPoint slides, PDFs…), projecting to the classroom, and then annotating those files as they are displayed. Because ExplainEverything also records audio and video of what is happening in the app (the images and any annotations that are added), many educators use it to record video lectures for students to watch. Like all great apps, ExplainEverything is not free (there is a free version available to try out), and its educator bundle allows multi-user synchronous activities, like group editing of the same whiteboard in real time. Because I use ExplainEverything so much, I purchase the individual plan for the "unlimited slides" (and "unlimited projects") - think of this as being allowed to save an unlimited number of individual class presentations.
I've written extensively about ExplainEverything in the past. If you'd like to explore its vast uses more in depth, please consider reading "Explain Everything and blended learning," "Digital Classroom and Lecture Improv," "Animation in Live Presentation," "Tablets in Office Hours," "ExplainEverything Tips for Live Presentation," and "ExplainEverything & Quick-Loading Images."
ExplainEverything, meet my newer acquaintance, Zoom. You may have heard of Zoom already: it is a leader in the videoconferencing world, especially in education - so much so that, like its predecessors, the noun has already been verbed: two people now Zoom with each other.

The reason I wanted to introduce the two of you is that I think you would make a beautiful couple. ExplainEverything, you have incredible presentation and annotation tools, and you've been developing your real-time collaboration tools. Zoom, your abilities to connect multiple users in synchronous conversations and to allow users to "share" (display) windows on their devices to all participants at once are amazing. Together, you two have a lot of potential!

Courtship

It was years ago that I first described how to use two devices (e.g. laptop and tablet) to host a Zoom meeting (on the laptop) and display Zoom's whiteboard to the participants (using the tablet, which is easier to draw on by hand than a laptop trackpad).

Although I've used ExplainEverything much longer than I've used Zoom, I don't know why it took me so long to think of sharing my ExplainEverything screen in Zoom instead of using Zoom's whiteboard.

Why would I do this? As I mentioned above, not only does ExplainEverything have much more extensive (and, I think, user-friendly) whiteboard tools, but it can also load a series of images (e.g. PowerPoint slides) onto sequential whiteboards (think "pages") that can be navigated like a PowerPoint presentation, all while using the annotation tools. There are other approaches to the same end, like sharing a Google Slides or PowerPoint screen and using their built-in annotation tools, but I find those tools clunky. You can't do any of this with Zoom's whiteboard, which is basically only for drawing and typing on a white screen.

I'm so excited that this actually worked that I'm spending my Sunday afternoon writing this description of how to share an ExplainEverything presentation via Zoom. It is very straightforward, but there is one small detail to pay attention to…and none of the credit goes to me: the ability to share an iOS (iPhone, iPad) mobile device screen lies entirely with Zoom's development team. If you don't use iOS, I'll be interested in hearing from you whether other operating systems also have support in Zoom for sharing mobile device screens!

Let's get these two apps together!

Step 1: host a Zoom meeting from an Apple laptop connected to wi-fi

Step 2: ensure your mobile device (iPhone or iPad) that has ExplainEverything is connected to the same wi-fi network

Step 3: the Host (laptop) selects the "Share Screen" button (at the bottom center of the Zoom app screen):

Screen capture of the "Share Screen" dialog box in Zoom


and selects "iPhone/iPad via AirPlay" (highlighted in blue in the image above). If this is the first time you've used this approach, Zoom will ask for your permission to download and install a plug-in that allows Zoom to connect to Apple devices using AirPlay. Select the "Share" button to advance to the next step of AirPlay screen sharing.

Step 4: the Zoom app will display an additional dialog box with instructions for how to connect your iPhone/iPad to your Zoom meeting:

Zoom directions for how to connect an iPhone or iPad to the Zoom meeting to screen share
In this case, my home wi-fi network that the host (laptop) is connected to is named "elegans-5," and Zoom is indicating that the mobile device must be connected to that same wi-fi network. Zoom has also provided the name of the "Device" to mirror to: this is the name of my laptop ("JoeMBPro") appended to "Zoom." Once those steps are taken, your mobile device screen should appear in the Zoom window on the Host (laptop): it is now being shared with all meeting participants.

Step 5: open ExplainEverything, as you would normally, on your iPad/iPhone, and it will be displayed to all of the Zoom participants:

The laptop (host) next to the screen-shared iPad, demonstrating that the iPad screen is visible in Zoom to the participants
Here, the laptop on the right is the host, and the AirPlay screen sharing has succeeded: the ExplainEverything app, from the tablet at left, is being shared. Also note that the participants don't see the ExplainEverything toolbars at the top, left, and bottom of the iPad screen (which is by design).

Now, from the iPad, the annotation and slide navigation tools can be used to move back and forth between slides in this ExplainEverything file and to annotate them by hand on the iPad touchscreen. Although ExplainEverything can perform audio and video recordings itself, I plan to use the Zoom recording tool, because it captures more than just the shared screen (e.g. the chat and the video thumbnail of the host/presenter, seen at upper-right in the screenshot above).

One caveat: I haven't been able to test this process on my school's wi-fi network, and I'm not sure whether the AirPlay connection will work as seamlessly there (depending on network configuration).

I am so glad I discovered this workflow (again: designed by Zoom; my addition is the idea of using it to screen share ExplainEverything, which is no great mental victory)! I hope you find it useful, as well; please let me know additional tips and any shortcomings that you discover.

Friday, June 19, 2020

Bloom’s Grading: Reduce Cheating and Improve Student Outcomes

With more courses moving to virtual delivery, particularly during the COVID-19 era, I've had more frequent conversations with my faculty peers about how to prevent cheating in online exams. Here, I explain how I've addressed this concern while also improving student learning. In summary, I've made three simple changes to my course design:
  • I changed from a 60/70/80/90% point cutoff for D/C/B/A letter grades to 20/40/60/80%
  • I aligned each letter grade (A, B, C, D, F) with the levels of Bloom's Taxonomy (I'll refer to these as "Bloom's levels")
  • I designed my assessments (especially exams) to provide an equal number of available points for each Bloom's level (and thus for each letter grade).
The outcome is that the instructor becomes more conscientious about assigning more point weight to exam questions requiring higher-level Bloom's skills (like Analyze, Evaluate, and Create). These are inherently more cheat-resistant question types, because such questions tend not to have answers that are easily found by search engine query. Also, student responses to such prompts are expected to be unique, which resists peer plagiarism. This approach also tends to reduce the number and/or point weight of exam questions that are easier to cheat on.

Together, this suite of changes, described more fully below, has not only discouraged cheating but also given students clearer expectations of what they should be able to do to demonstrate their understanding of course topics and concepts.

This post is lengthy, for those who want details for implementing this approach. For the TL;DR version, here is a PDF of an essay that condenses this post into a version I pitched to the Chronicle of Higher Education for publication.

Introduction

A history of letter grading

Humans do have a tendency to organize and categorize, which isn't always a good thing. Somewhere along the way, categorizing learning by ranking students, either against each other or against an instructor standard, became the norm. According to Durm's (1993) historical reconstruction of letter grading (download a PDF of my graphical interpretation of the history of letter grading, also shown below), Yale (at least in the USA) is the first university with evidence of such a system, from 1785, where students were graded into four tiers. Multiple higher ed institutions tinkered with grading concepts over the next century, including the introduction of the Pass/No Pass concept, which morphed into Pass/Conditional/Absent at the University of Michigan.

In the late 1800s, Harvard switched from a six-tier grading system, in which placement was, for the first time, based on a percentile scale, to a five-tier system, in which the bottom tier (Roman numeral "V") was not considered passing. In 1897, Mt. Holyoke finally arrived at a five-letter grade scale based on particular percentiles…but this was A/B/C/D/E grading! At some point, E disappeared and was broadly replaced by F.


A timeline of years that particular universities adopted different grading scales
One of the more notable contemporary changes was Brown's decision in 1969 to allow A/B/C/no pass and Satisfactory/no pass grading. What was innovative here is that Brown does not report "no pass" grades on student transcripts, so that failed courses do not count against the grade point average (GPA)!

This brief survey of grading underscores one key conclusion, which I hope you either already recognize or will now come to agree with:

Letter grading is arbitrary

Let's face it: instructors spend a lot of time grading coursework. I suspect that most of us have realized that a single letter grade for each student in each course integrates multiple factors of that student's performance. The myriad variables under the instructor's control include:
  • The types of point-earning activities (e.g. attendance, participation, homework, exams)
    • and the relative weights of those activities
  • The particular questions asked on assessments
    • and the relative weights of those questions
  • When, if ever, point values are rounded
    • and the decimal place to which any rounding occurs
  • Ultimately: the score thresholds separating letter grades
Making things worse, I suspect that most instructors (at least in higher ed) make many of the above decisions on their own, without consulting other instructors (past or contemporary) of the same course. As such:

Letter grades have poor inter-rater reliability

What earns a student a "B" in my genetics course is not what would earn that student a "B" in somebody else's course. When I evaluate a transcript of a student who wants to join my research group, I have no real concept of what the grades they earned really indicate! We could throw up our hands in despair and conclude that it isn't worth the additional effort to rectify this arbitrariness of grading. We could just agree to give all of our students "A"s and save ourselves so much time and effort. Instead, I argue that we should use the fact that letter grading is arbitrary, and historically has been very flexible, to our advantage and to our students' benefit!

My philosophy of assessment and grading

To understand my perspective, a bit of an introduction is in order. I'm an Associate Professor of Biology at California State University, Fresno, a regional, minority-serving institution in central California with mainly undergraduate and Master's programs. If you're not a biologist or scientist, please don't hold that against me. Indeed, the grading approach I describe here can easily adapt to any discipline, so please keep reading!

My overarching philosophy on teaching and evaluating students is that I hope all of my students will earn an A grade. I don't curve my grades, and I believe that students should have a clear rubric for earning a letter grade, beyond just a statement of what percent of points must be earned to achieve particular grades.

I tend to teach upper-division courses required of biology majors, particularly genetics and molecular biology. The course I'm going to describe as a case study today is our required genetics course, which tends to be attended mainly by junior (third-year) students. The course tends to enroll fifty to sixty students in two sections, taught by two different faculty, each semester. The main point here is that all biology students, including those who may be more interested in topics other than genetics (e.g. ecology), take this course, and that the enrollment is moderate - this is normally a lecture-style class in a relatively large room with fixed, stadium-style seating. I've taught this course about eight times in the last seven years, and I began to design and adopt the reforms I'm suggesting here about three years ago.

I'm also keen to leverage technology to support student learning. I'm not talking about technology for technology's sake, but about using technology to provide students with on-demand resources for learning, to reduce the cost of course materials (I create many of my own written materials and videos), and, most importantly from an academic standpoint, to give students "authentic" disciplinary training. In my field, for example, that means students should be trained to access online databases containing genetic information and to use computer programs to perform analyses and statistical tests.

In this face-to-face course (before COVID-19), I had adopted a blended learning ("flipped") course design, where students would read the textbook and watch introductory videos before coming to class. During class, I would hold question-and-answer sessions, provide additional practice activities, and conduct other active learning exercises. Because of my expectations that students would learn to use online resources specific to genetics, I felt it necessary for them to have access to computers during exams. Then I began to redesign my class to figure out how best to encourage academic honesty with open-internet exams. I do not use any technological tools (like limiting browser access via campus wi-fi to a selection of websites) to prevent cheating on exams. I just tell students my expectations:
  • exams are open-book, open-note, and open-internet
  • thus, I make my exams a bit harder than they would otherwise be
  • to prevent distractions, I don't allow students to access the audio of any videos they might want to watch
  • real-time interactions and collaborations with others (e.g. chat) are not allowed (but I have no way to catch this; I hold students to the academic honesty code of the University)
  • students may only use one mobile device (laptop, tablet, or smartphone) during an exam
Over the years, I've learned how to design open-note/text/internet exams that students can complete in the allotted fifty minutes and that, even in the potential presence of cheating, still produce a bell-curve grade distribution (assuming that is desirable…). I'm writing today to share some of the approaches I've taken. One of the main insights is that it is important for instructors to be aware of at least two aspects of their assessments (e.g. exams), especially in preventing or discouraging cheating: the relative weighting of points per question and the Bloom's Taxonomy level of each question.

students taking an exam using notes and laptops to look up information if they want
Students taking an open-resource (notes, internet, textbook) exam in one of my courses

Bloom's Taxonomy

Here is a link to a fantastic resource with a background on Bloom's Taxonomy. Briefly, Bloom described a hierarchy of six different educational goals, from basic to advanced. The 2001 revised taxonomy lists them as: Remember, Understand, Apply, Analyze, Evaluate, and Create (Anderson and Krathwohl 2001). This structure has been widely used as a framework to scaffold instruction. For example, students first Remember key concepts and facts, and then they can demonstrate Understanding by comparing or classifying those concepts and facts. The next higher goal is Applying the concepts, facts, and understandings - perhaps in a novel situation - and so on. Many agree that, in practice, it is ideal to have students demonstrate their facility with course content and practices by Creating (e.g. a computer program, or a poem, or a song, or a laboratory report, or an analysis of different stock market investing strategies).

The most critical aspects of Bloom's taxonomy for our current purpose are to appreciate that:
  • each assignment or quiz/exam question in your course can be categorized by its Bloom's level, and
  • the higher up Bloom's taxonomy one goes, the harder it is to cheat

Instructor Actions

So, what does it look like, in practice, when letter grades and assessment questions are aligned to Bloom's Taxonomy?

1. Student Knowledge Survey (SKS)

Although this first step is optional, I highly recommend it as a valuable pedagogical practice. The basic concept of an SKS is that you provide students with an "exam" at the start of the term. The exam (actually described and delivered as a survey) comprises a task for each student learning outcome, and each question asks the student to self-rate how well they think they could perform that task. Some, like Bowers et al. (2005), suggest that the SKS be given at the start and end of each term as a way to measure student learning, based on whether students' confidence in their abilities improved. I use it for a different purpose.

Here is a link to a PDF of my SKS for my genetics course. Hopefully you'll see that, as in the first task,
"Arrange nucleotides by chemical structure and hydrogen-bonding capability"
this is a specific task that begins with a verb, "Arrange" (the relevance of the verb will be explained shortly!). Even though you might not be a biologist or a chemist, I hope you can see how this task is clearly assessable. However, this is also clearly not an exam question, because it provides no specific content or details for a student to work with.

By providing students this exhaustive list of tasks on the first day of class, I accomplish a few things. First, I have provided them with a complete study guide for the course. I am being transparent by sharing my expectations of what I think students should be able to do (tasks) to demonstrate their mastery of course topics and concepts. Have you ever had a student ask you, "What's going to be on the test?" From here on, you'll be able to refer them to the SKS from the first day of class as a great resource!

Second, building an SKS essentially creates a checklist that focuses my instruction on what I've told the students I expect them to be able to do. Likewise, the SKS serves as a template for building assessments (e.g. exams). All I need to do to write an exam is to look at the SKS tasks my class has covered. I then fill in the details of each task to generate an exam question that is directly aligned with what I've told the students I expect them to be able to do. You may recognize this process as "Backward Design," which Wiggins and McTighe (1998) helped popularize. Read more about Backward Design in the context of my course in my prior post. Ultimately, our student learning outcomes and tasks should drive our assessment content, and our assessment content should drive our instruction. Some negatively call this "teaching to the test" - and it absolutely is, and we should strive to do so!

Third, and most critically here, by writing that exhaustive list of tasks, which begin with verbs, I can easily look over all of my student learning outcomes/expectations/tasks and categorize them by Bloom's level.

The big idea here is to ensure that my tasks are roughly evenly distributed across all of the Bloom's levels, so that I am providing multiple opportunities from the basic (factual recall in the Remember level) to the advanced (synthesizing and generating information in the Create level) to allow students to demonstrate the extent of their mastery.
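To make this concrete, here is a minimal sketch (in Python; my own illustration, not part of any SKS tool) of how the leading verbs can drive the categorization and the tally. The verb lists are abbreviated, and published Bloom's verb lists vary from source to source; the second task in the example is a hypothetical placeholder.

    from collections import Counter

    # Illustrative (and abbreviated) verb lists; published Bloom's verb
    # lists vary from source to source.
    BLOOM_VERBS = {
        "Remember":   {"define", "label", "list", "arrange", "name"},
        "Understand": {"compare", "classify", "explain", "summarize"},
        "Apply":      {"use", "solve", "calculate", "demonstrate"},
        "Analyze":    {"differentiate", "organize", "contrast"},
        "Evaluate":   {"judge", "critique", "defend"},
        "Create":     {"design", "compose", "construct"},
    }

    def bloom_level(task):
        """Categorize an SKS task by its leading verb."""
        verb = task.split()[0].lower()
        for level, verbs in BLOOM_VERBS.items():
            if verb in verbs:
                return level
        return "Uncategorized"

    tasks = [
        "Arrange nucleotides by chemical structure and hydrogen-bonding capability",
        "Compare the outcomes of mitosis and meiosis",  # hypothetical task
    ]
    print(Counter(bloom_level(t) for t in tasks))
    # Counter({'Remember': 1, 'Understand': 1})

A tally like this makes it easy to spot which levels are over- or under-represented in the SKS before it goes to students.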

If you are interested in building an SKS, don't panic! It can be overwhelming to think about doing this. Don't try to do it all at once - consider giving yourself a few terms to develop this resource. The way I proceeded was to build a document where I collected tasks in real time, over the course of a semester, as I assigned homework and created exams, and then I built an SKS from there.

Remember, you do not need to have an SKS to achieve the major goal of reducing cheating described below, but it will eventually help the process be more efficient.

2. Create a letter grade scheme using Bloom's levels

Most instructors have the power to place their letter grade thresholds wherever they like. In the USA, it seems most common (and arbitrary) for 0-59% of points to equate to an F, 60-69% a D, 70-79% a C, 80-89% a B, and 90-100% an A.

I use the point-to-grade alignment shown in the image below (a PDF version of this graphic is available here), where 0-19% of points is an F, 20-39% is a D, 40-59% is a C, 60-79% is a B, and 80-100% is an A. In other words, the percentile scores are evenly distributed across the letter grade range.

Diagram of how the six Bloom's taxonomy levels are aligned with point percentages, letter grades, and Bloom's tasks
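If it helps to see the cutoffs spelled out, here is a minimal sketch (in Python; the function is my own illustration) of the percent-to-grade mapping. The grade names in the comments are explained near the end of this section.

    def letter_grade(percent):
        """Map a course percentage to a letter grade under the evenly
        spaced 20/40/60/80% cutoffs described above."""
        if percent >= 80: return "A"  # Accomplished
        if percent >= 60: return "B"  # Burgeoning
        if percent >= 40: return "C"  # Competent
        if percent >= 20: return "D"  # Developing
        return "F"                    # Foundational

    assert letter_grade(79.9) == "B"
    assert letter_grade(20.0) == "D"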

There are a few aspects of this letter grade scheme that I think are worth additional exploration.

First, and most importantly, spreading the letter grades out (equally) over the entire percentile range makes tremendous sense. Why should we cram the "passing" letter grades (D through A, or maybe C through A, depending on your program) into the top half or quartile of possible percentiles? What this traditional practice indicated to me, when I started teaching (and writing exams), was that I had to include questions, worth about 60% of points, that would be relatively easy for most students to earn. This would ensure that most students would get at least to the D level (60%+ of points in most "normal" courses) and hopefully produce a bell curve of letter grades. I wonder, if you reflect on your own assessments and exams, whether this is also true for you.

To address this shortcoming, I aligned my new letter grade scheme with Bloom's levels. This is perhaps the most fundamental paradigm shift I am proposing. For example, if all a student can accomplish on my exam is Bloom's level 1 (Remember) work, e.g. "define," "label," then they will earn an F. If they can also complete some of the Understand level tasks, then they move into the D letter grade range.

By the way, you might have noticed above that I combined two of the six Bloom's levels (Create and Evaluate) so that there are the same number of levels as letter grades. If you like the general concept I'm sharing here, you can certainly make other such arbitrary alignments to modify this approach.

Second, by aligning the letter grades with Bloom's taxonomy, which is a well-described and widely used concept, we have a letter grade rubric! Does your course syllabus go beyond displaying how many points have to be earned to achieve each letter grade? Heck, even the USDA has a rubric for grading beef, which depends on the amount of fat marbling in the muscle! Imagine how much more useful it is to students to be given an actual rubric that explicitly defines what (tasks, verbs) is expected of them to earn particular letter grades! Leveraging the verbs that accompany Bloom's taxonomy is the real strength of this approach.

Aligning letter grades with Bloom's will also improve inter-rater reliability, I suspect. For example, now, when I write recommendation letters, I can efficiently explain what types of work a student did to earn their letter grade in my class.

Third, I want to note that this is not a curve! My grade rubric doesn't mandate that a particular number or percent of students will earn a particular letter grade.

Finally, here is a mindset hack that I decided to include: you might also have noticed that I gave names to each of the letter grades. In my classes, an F grade doesn't stand for Failing; it stands for Foundational. Students who can perform Bloom's level 1 work have provided me with evidence mainly of Foundational accomplishment. From there, students move up through Developing, Competent, Burgeoning, and Accomplished evidence.

3. Provide an equal number of points per letter grade (Bloom's level)

Step 1 (Student Knowledge Survey or SKS) was optional. Step 2, adding the new letter grading rubric to your class, is easy to do - just add it to your syllabus. Together, those two steps can help improve student outcomes by providing them resources and concrete expectations. That achieves one of the goals from the title of this post, "Bloom's Grading: Reduce Cheating and Improve Student Outcomes."

The first two steps laid the foundations for designing cheat-resistant exams. Step 3, then, will be the main focus of your efforts, and this is where Reduce Cheating will be addressed. Here's how.

When you write an assessment (e.g. exam), make sure that there are roughly the same number of points available from questions of each Bloom's level: 20% of points from level 1 questions, 20% from level 2 questions, and so on. This doesn't have to be a difficult process (see the sketch after this list). Here's the ideal approach:
  1. Write your exam questions
  2. Assign each question to its Bloom's level
  3. Choose the total points available for the exam
  4. Assign point values to each question so that the sum of points in each level is about 20% of the total points available
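Here is a minimal sketch (in Python; the question stems and level tags are hypothetical placeholders) of step 4: splitting a chosen point total evenly across the five levels (with Evaluate and Create combined, as in my rubric) and then evenly among each level's questions.

    # Hypothetical exam: (question stem, Bloom's level) pairs.
    exam = [
        ("Label the parts of ...",      "Remember"),
        ("Define ...",                  "Remember"),
        ("Compare ...",                 "Understand"),
        ("Calculate ...",               "Apply"),
        ("Interpret this data set ...", "Analyze"),
        ("Design an experiment to ...", "Evaluate/Create"),
    ]
    LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate/Create"]
    TOTAL_POINTS = 100

    share = TOTAL_POINTS / len(LEVELS)  # each level carries ~20% of the points
    counts = {lvl: sum(1 for _, l in exam if l == lvl) for lvl in LEVELS}

    for stem, lvl in exam:
        points = share / counts[lvl]    # split the level's share evenly
        print(f"{points:5.1f} pts  [{lvl:16s}]  {stem}")

In practice, I round these to whole points and adjust by hand; the goal is only that each level's questions sum to roughly 20% of the total.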
Here's an example from one of my exams. If you follow this link to a PDF, you will see all of the questions, Bloom's level assignments, and point values. Below, I summarize this document. My exam had 15 questions, and the image of my spreadsheet table below shows how I weighted the exam question point values. For example, 27% of the questions were Bloom's level 1, and 20% of the available points came from those four questions.

Table listing the number of exam questions and their point values available in each of the Bloom's levels
Clearly, I did not succeed on this exam in creating a truly even distribution of point values across the five letter grades, with only 7% of points available in Level 4-type questions.

Now let's see how this approach translated into student grades:

Table listing the number of exam questions and their point values available in each of the Bloom's levels, including the percent of students earning each letter grade

On this exam, five percent of students earned an F, and nineteen percent earned an A. More than half of the students earned an A or a B. We can debate whether this type of grade distribution is desirable (it is for me)! As I mentioned earlier, my goal as an instructor is to help students succeed in learning and in expressing their learning. It might be that I write easy exams, but the exam items didn't change from prior semesters, when my grade distributions were lower. Instead, it might be that the SKS study guide and the letter grade rubric actually helped students better prepare for the exams. Perhaps most likely, going through the process of creating the SKS led me to focus instruction on the student learning outcomes, and that is why students did so well on the exam!

How might all of this instructor effort reduce cheating?

In my grading scheme, one letter grade's worth of points (20%) are available from Create+Evaluate questions, which are essentially cheat-proof. When a student is asked to create or to provide an evaluation of something, the instructor should have every expectation that each response will be unique. When two responses match or are very similar, then plagiarism has been identified. At least most of our students already know to avoid this kind of cheating, because it is so easily detected.

Likewise, the Analyze and Apply type questions, depending on how you word them, are highly cheat-resistant. For these types of questions, I often append a "Briefly explain your response" requirement, to elicit a unique response from each student.

The above question types are also resistant to cheating because they're often questions for which answers are not readily available by querying a search engine. That's not necessarily true of the lower Bloom's question types, which are more often (in my discipline) delivered as labeling, matching, true/false, multiple-choice, or fill in the blank questions.

However, please note: because of the even distribution of points across Bloom's levels, even if students can (and do) Google answers to your questions, that still won't earn them a good grade. For example, if they succeed at cheating by looking up the answers to all of your Level 1 and Level 2 questions, and if they get all of those answers correct (40% of total points), they will only have earned a D letter grade by their cheating.

Moreover, cheating takes time! On a timed exam, students can't afford to look up the answers to the questions, or they won't have enough time left to address the 40-60% of available points on your cheat-resistant and cheat-proof upper Bloom's level questions. As I mentioned above, I hold open-resource (notes, internet) exams, and even under those circumstances, where I expect students to look information up online during the exam, I've had many students mention that they spent too much time on those portions and did not have enough time to complete some of the other questions on the exam.

So, modifying the number of questions on a timed exam is another aspect of exam preparation, independent of the question score weighting I'm proposing, that you can also use to discourage cheating.

I hasten to confess that this approach isn't ideal in all circumstances; it has flaws and drawbacks. For example, in any online exam, students can cheat by communicating answers to each other. Higher-level Bloom's questions are still safe, but chat/messaging can provide a faster method of cheating on the lower-level questions than web searching. Also, the instructor has to make a judgment about how much effort they're willing to put into enforcing academic honesty. There is no equitable way to make any exam cheat-proof, especially online. There is a clear and direct trade-off between how cheatable an exam is and how much effort the instructor puts into creating the questions and into grading the responses. The process I've described here is not great for a multi-hundred-student section of an introductory class, where there is not enough time available to grade free-response, higher-Bloom's-level questions.

I hope you find some or all components of this process useful to implement and/or at least to think about! Please share comments with me - I'll enjoy reading additional perspectives on the topic of academic dishonesty in online (presumably open-internet) exams.

References

  • Anderson and Krathwohl, eds. (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
  • Bowers, Brandon, and Hill (2005) “The Use of a Knowledge Survey as an Indicator of Student Learning in an Introductory Biology Course.” Cell Biol. Ed. 4:311-322.
  • Durm (1993) “An A Is Not an A Is Not an A: A History of Grading.” Educ. Forum 57.
  • Wiggins and McTighe (1998) “What Is Backward Design?” In Understanding by Design (7–19). Merrill Prentice Hall.

Saturday, June 13, 2020

Helping Teachers Prepare for Virtual Instruction

At my university, like so many others, faculty made the rapid transition to online "virtual" delivery of courses in March. Since then, students' voices have grown louder, arguing that they should receive tuition refunds because of sub-standard experiences (in their opinions - which matter!) and because they didn't pay tuition to receive online instruction.

Rock, meet hard place. On one hand, many faculty are devoted to doing their best and making their courses great learning environments. They were caught off guard when the mode of delivery suddenly had to change (for all of their courses, all at once). On the other hand, faculty are now being asked (sometimes for additional compensation, but most often not) to prepare for more virtual instruction this coming fall. And now we have months, not days, to prepare (despite the general lack of compensation for our time), so the expectations of quality will be much higher!

In addition, now, more than ever before, our livelihoods depend on our willingness to go above and beyond to vastly improve the quality of our teaching! If fall instruction is mostly virtual, as currently planned, then how many students will actually "show up" (online) and agree to pay tuition? What sort of long-term impact could student opinion of low-quality online teaching have on our employers?

Summer Faculty Professional Development

I'm now leading a cohort of twenty of my faculty peers, across disciplines, in an introductory training course for improving virtual instruction. I'm being compensated for this work, and they're being compensated for their time as well. In the forthcoming series of posts, I'll describe the emerging concerns they have collectively voiced and some of the tools and techniques we have found, shared, or developed to make Fall 2020 the best semester we can.

By way of background, the faculty who are part of this course elected to enroll in it. The course itself was created in my campus' learning management system (LMS - in my case, Canvas) by our campus instructional designers, and multiple faculty are leading multiple sections of the same course. The course is delivered on the web through Canvas, and I've opted to make the course asynchronous: the faculty participants can move at their own pace through the course content, reading text, watching videos, participating in discussion boards (including peer feedback), and completing exercises and quizzes. The course was designed to incorporate three synchronous Zoom meetings, but the content of those meetings was not prescribed. In my section, we used the first meeting for introductions and to discuss the course structure and schedule - much like a typical first meeting of an in-person course.

The student perspective

One of the best experiences I've had so far (the course launched about a week ago, and there are two weeks left to go) is experiencing a Canvas course both from the student perspective (I didn't create the course, so I went through all of the content before I published the course to my faculty participants) and from the faculty perspective as the instructor (more "facilitator," in this case), where I field questions both on content and also on mechanics (e.g. technical issues, interpreting assignment instructions).

It is not surprising, but still remarkable, that my students in this course are just as diverse as any students I'll have in any undergraduate course I teach. Average age might be the only difference! This has been remarkably impactful to my own professional development. Not only am I afforded the two views of virtual instruction (student and teacher perspectives), but my students are also my peers, and I think I'm receiving more (honest or forthcoming might not be the correct words…) raw feedback from them than I normally hear from undergraduates. When my faculty peers experience something they don't understand, or when something goes wrong, they are letting me know! So, the first concept of virtual instruction that has been reinforced for me is:

Don't design a class for the "average student"

There is no such student. Design for the students who might be at the margins, and then your course might be as accessible, engaging, and welcoming to all as it can be.

I sort some of the most vocal feedback I've received so far into two categories:

  • The course content isn't what I expected, and I'm unhappy about that
  • I'm not a digital native, and I'm confused about what I'm being asked to do / this is taking too much time / I need technical help

You might recognize these as common issues for any course, even before virtual delivery was demanded!

The first point might be out of our control as teachers. Messaging about course content when students enroll often occurs before we become involved.

The latter point is the focus of the rest of this inaugural post. Like our usual classes, my training course has no "average student." Some of my faculty peers were on sabbatical when our campus switched LMS vendors from Blackboard to Canvas, and they're not used to how this system works. Some are semi-retired and self-identify as not tech-savvy; they feel particularly overwhelmed (like our undergraduate students!) at the prospect of learning how to operate Canvas from the teacher side. We have careers and are probably financially and professionally secure: imagine the additional pressures and risks that our students face navigating the same transition!

Moving Toward Fall

The faculty I'm working with have expressed interest in deeply exploring at least a few key topics over the next couple of weeks, and these will form the basis of future posts:

  • How to motivate/reinforce/monitor academic honesty in online assessments
  • How to facilitate student small group interactions
  • How to decide the appropriate balance of synchronous (real-time) vs. asynchronous course activities
  • How to record, deliver, and make video content accessible for students

While I work on those additional posts, I'll conclude now with three pieces of advice that I've distilled from years of blended learning ("flipped classroom") instruction and that I think will be relevant to all of us who are pushed to virtual instruction:

Whatever changes you make, ensure they're sustainable!

There is a LOT that goes into making any course a great course, especially in virtual instruction for those of us who have not had much experience with it. There are many foundational concepts of instruction that transcend the mode of delivery (face-to-face vs. virtual), like the importance of building community in the class. Then there are the nice extra things we can do to further enhance our classes. Are you going to record all of your synchronous videoconference meetings with your students and post them online? Make sure you have time to devote to ensuring that those videos are all captioned for accessibility! Are you going to commit to grading all submitted discussion board posts within 24 hours? Make sure that is reasonable. Aiming high and then having to reduce effort on one or more resources that some of your students have come to depend on is not good. Start small, and consider your course's virtual redesign a process that will take at least a few iterations to arrive at what you thought would be a good starting point. Build from there.

You cannot overdo the resources and instructions you provide students

As long as your delivery is organized, there is practically no limit to the number of pertinent resources you can provide. For example, in the course I'm facilitating now (which, again, I did not create), I was really impressed with the number of links to YouTube videos our instructional designers had built in. One assignment, for instance, was to take a screenshot of a webpage showing that I had obtained a particular minimum score on a quiz. I know how to take a screenshot on my laptop, but some don't! On the same assignment page, there were links to YouTube tutorial videos on how to take screenshots on multiple mobile devices. This is absolutely the way we all should be thinking about designing courses delivered virtually.

Try to consider your course from the student perspective

We often lose perspective on materials we create. When writing, it is sometimes hard even to catch our own misspellings and grammar errors. When creating instructions for an assignment, of course we think that our own end product is as clear as possible (but usually that means "as clear as practical, given the amount of time we're willing to devote"). In virtual courses, even more attention should be paid to being as clear and descriptive as possible. This goes beyond my advice in the previous section: you might even consider the "unwritten rules" of academia, and particularly of virtual instruction (e.g. "netiquette"), that students might not be familiar with and need additional resources to understand. Because communication is often (almost always?) asynchronous in online courses, it is more important than ever to be proactive in addressing questions and concerns.

Finally, the most surprising thing I've learned so far by helping my peers work through this course:

Never assume participants will remember all of the details; always be kind

Even though you have worked diligently to put all of the necessary resources in your students' hands, and even when you have gone particularly out of your way to be as proactive as possible, you will have students who forget a detail, can't remember where to find it, and contact you directly. Take a minute, or a few hours, or sleep on it, and then provide a helpful response!

In my case, in our first synchronous class meeting, I spent time discussing an intricacy of how to perform one specific task of one assignment, because the required web-based approach was not intuitive. I walked through the steps, and the Zoom meeting was recorded and posted on YouTube for reference. Two (10%) of my participants e-mailed me within a week complaining about not being able to figure out how to accomplish that same task. My response e-mails contained links to the specific timestamp in the class Zoom recording that contained the step-by-step instructions, and that was enough.
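In case it's useful: YouTube supports a start-time parameter, so a link can open a recording at the relevant moment. A hypothetical example (the video ID here is made up):

    https://youtu.be/VIDEOID12345?t=315   (begins playback at 5:15)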

All of our class participants have different circumstances, needs, and challenges…they won't all be able to keep track of all aspects of all of their classes, especially when moving fully online is still new, and especially when those students aren't just taking one class at a time! Please consider that, even though you might have invested a lot of time designing your course, it is not an affront when a student asks for help with something you foresaw.

In the next post, I'll address one of the major talking points I've heard faculty discussing with increased intensity over the last three months: how to encourage and enforce academic honesty during online assessments.