Carnegie Units (CU) quantify the time spent by students in and out of class in terms of content engagement. The rule states that for every one hour spent in a face-to-face setting, students spend an additional two hours of study time or engagement with the material. For colleges offering both online and residential courses with the same content, equating the time online students spend with course content is not difficult – the Registrar aligns online courses with their residentially offered matches. However, what does a college do when 100% of its courses are offered online, with no onsite match? What follows is a discussion of how one college ensured and quantified engagement in online courses.
As online courses and degree programs developed, accreditation standards to monitor and assess their educational merit did not evolve nearly as quickly (Craig, 2015). Students in these courses and programs began to incur large amounts of debt without obtaining credible degrees or securing degree-related employment. The U.S. Department of Education and accrediting bodies began to take notice and put regulations in place to safeguard students and protect the integrity of degrees awarded (Perry & Cochrane, 2018).
A recent example of such a safeguard came about when the Higher Learning Commission (HLC), one of six regional accreditors in the United States, overseeing institutions in 19 states, began to look more closely at online education practices to ensure that credits earned from online courses reflected the time students engaged with course material. What follows is a discussion of how one college ensured and quantified engagement in online courses.
In 1999, A.T. Still University (ATSU), with campuses in Phoenix, Arizona, and Kirksville, Missouri, created the School of Health Management (SHM) to house five of its online programs. In 2011, as programs and enrollment grew, the school adopted the Quality Matters (QM) course design rubric (Higher Ed Course Design Rubric, 2019). Quality Matters began in 2003 as a grassroots initiative to figure out “how do we measure and guarantee the quality of a course” (Why Quality Matters, n.d., para. 1). QM established best practices for the delivery of online courses and is considered a gold standard for online course design. SHM adopted the QM standards in part because some SHM courses across multiple programs were difficult for students to navigate; feedback indicated that content was not delivered consistently.
In January 2014, the University merged all non-clinical online programs into one college. As a result, three programs from the SHM, along with two programs from the Arizona campus, came together to form the College of Graduate Health Studies (CGHS), offering programs and degrees in Health Administration, Health Education, Health Sciences, Kinesiology, and Public Health. During the first six months of 2014, the CGHS instructional design (ID) team began applying a design template based on the QM rubric, which SHM had previously used, to all CGHS courses. Before the merger, instructional designers had not worked with the two Arizona programs; instead, Arizona faculty members were responsible for both the content and its online delivery. Some of these faculty, who had always designed their own courses, did not welcome the involvement of instructional designers; others were open to the instructional designers’ assistance and skill. By July 2014, the new CGHS began offering courses.
Since then, as part of the overall instructional design process at CGHS, each program determines its own measurable program competencies and course descriptions through curriculum committee deliberations. Each course scheduled for development or revision is assigned a subject-matter expert (SME) by the program chair and an ID by the associate dean. The ID ensures that all courses across all programs employ the course-design template based on the Quality Matters rubric (Higher Ed Course Design Rubric, 2019). Every three years, under the direction of the program’s curriculum committee, the SME reviews and revises each course to update content and assessments and to include the most current literature.
In the first meeting of the review, the ID helps the SME determine the course’s learning objectives by applying backward course design, which “asks instructors to consider first the desired student-learning outcomes, then appropriate assessments of those goals, and finally the activities that would support those outcomes” (Reynolds & Dowell Kearns, 2017, p. 17). Through this process, the course outcomes, assessments, and learning activities are aligned.
In 2015, ATSU’s Office of Assessment staff began to prepare for the 2018 HLC reaccreditation visit by attending annual Higher Learning Commission conferences to gather information on accreditation standards. As the associate dean for academics and assessment, I attended many of these sessions and completed HLC Peer Reviewer training in 2017. The training provided a basis for implementing necessary processes at CGHS to bring the online college into compliance. The immediate pressing need was to figure out how to verify Carnegie Units for our online programs.
Carnegie Units (CU) quantify the time spent by students in and out of class in terms of content engagement. The rule states that for every one hour spent in a face-to-face setting, students spend an additional two hours of study time or engagement with the material (Structure of the U.S. Education System, 2008). For colleges offering both online and residential courses with the same content, equating the time online students spend with course content is not difficult – the Registrar aligns online courses with their residentially offered matches. However, what does a college do when 100% of its courses are offered online, with no onsite match? With ATSU’s 2018 Higher Learning Commission (HLC) reaccreditation visit fast approaching, CGHS needed to figure this out.
In May 2016, the Registrar set the number of student engagement hours per credit at 45. The Registrar developed a matrix and required every program to use it to report each course’s format, number of meetings, and meeting length, and to account for the overall instructional hours in each course. As we worked to complete the Registrar’s matrix, we realized it had been developed for residential courses and would not work for the online college’s accounting of overall instructional hours.
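The Registrar’s 45-hour figure is consistent with the Carnegie convention described earlier: one contact hour plus two study hours per week, accumulated over a term. A quick arithmetic check is below; the 15-week term length is our assumption for illustration, not a figure stated in this article.

```python
# Arithmetic check of the 45-hours-per-credit figure.
# The 15-week term length is an assumption for illustration.
contact_hours_per_week = 1   # face-to-face (or equivalent) time per credit
study_hours_per_week = 2     # additional out-of-class engagement per credit
weeks_per_term = 15

hours_per_credit = (contact_hours_per_week + study_hours_per_week) * weeks_per_term
print(hours_per_credit)  # 45
```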
The Registrar also mandated that courses above or below the 45-hours-per-credit threshold change either their content or the number of credits awarded to reflect student content engagement. Adjusting credit hours was not a feasible option: students would need to take additional credits to meet degree requirements, increasing their tuition cost and time to completion and perhaps driving them toward shorter programs. Conversely, in courses with CUs greater than 45 hours per credit, students would earn more credits than needed and be forced to pay higher tuition. Either situation would create a financial burden for students.
While the HLC does not dictate how an institution accounts for credit hours, it does require the assignment of credit to be “consistent, reasonable, and documented” (Peterson-Solinski, 2017). In early 2016, we reviewed the literature for models to use with our programs. The only credit-hour calculator we found was from Loyola University (n.d.); however, from the HLC perspective of “consistent, reasonable, and documented,” the calculator did not provide enough detail or explanation for our purposes. Further research provided documentation of basic time frames to account for the time students spend in online activities (Powell et al., 2012). The IDs, along with CGHS’s dean, associate dean, and program chairs, reviewed the time frames outlined in Powell et al. and judged their framework reasonable for calculating the time students spend in content engagement. The information below was used as a basis to begin the assessment of courses.
Table 1: Homework Time Factors and Associated Time Measurements (excerpt)
(Powell, Stephens Helm, Layne, & Ice, 2012, p. 87)
The table above provided vetted time allotments for many online activities; however, as we began to look at our courses, we found that significant amounts of student engagement time (content engagement) were not captured by these categories.
As a result, in fall 2016, we created our own CU table to quantify time allocations for all student activity in online courses. We defined content engagement as any activity in which the student interacted with course content or conducted research for a graded assignment that would be assessed. Overall, we reviewed 138 course syllabi, which we had collected every term and kept on file. We grouped and analyzed activities related to learning and assessment engagement, assigning time frames to all student interactions with content, using the information in Table 1 as a basis. We also accounted for the number of students in each class because the number of students affects the calculations for activities in which students engage with one another directly, such as reading discussion posts or writing peer reviews. The basis for our calculations is reflected in Table 1, but as we identified a new method of engagement, we amended the list and estimated the associated time on task. To verify that the time on task was reasonable, we solicited volunteers (students, faculty, and staff) to perform the various tasks listed in Table 2, and we made adjustments based on their feedback. Below is the CU table we created of learning activities and their associated time allotments.
Table 2: Activity With Associated CU Calculation
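The tallying logic behind the table above can be sketched as follows. The activity names and per-unit minutes here are illustrative placeholders, not the actual values we adopted; the per-student scaling for peer activities mirrors the class-size adjustment described earlier.

```python
# Hypothetical sketch of the CU tally described above; activity names and
# per-unit minutes are invented, not the actual Table 2 values.

# Minutes of engagement per unit of each activity type (assumed values).
MINUTES_PER_UNIT = {
    "reading_page": 3,       # read one page of assigned text
    "discussion_post": 60,   # research and write one original post
    "peer_post_read": 5,     # read one classmate's post (scales with class size)
    "quiz_question": 2,      # answer one quiz question
}

def course_engagement_hours(activities, class_size):
    """Sum engagement minutes for a course and convert to hours.

    `activities` maps activity type -> count of units in the course.
    Activities involving classmates' work scale with class size.
    """
    total_minutes = 0
    for activity, count in activities.items():
        minutes = MINUTES_PER_UNIT[activity] * count
        if activity == "peer_post_read":
            # From one student's view, each discussion involves reading
            # the posts of every other classmate.
            minutes *= class_size - 1
        total_minutes += minutes
    return total_minutes / 60

hours = course_engagement_hours(
    {"reading_page": 300, "discussion_post": 8,
     "peer_post_read": 8, "quiz_question": 100},
    class_size=15)
print(round(hours, 2))  # 35.67
```

A total computed this way can then be compared against the 45-hours-per-credit target for the course’s credit value.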
As we calculated course content and engagement, we found that some programs required more engagement, in the form of learning activities and assessments, than the established 45 hours per credit, and some did not require enough. Further, for courses used in multiple programs, student feedback indicated that the amount of required work was not consistent across the courses. As a result, the Registrar authorized us to quantify the amount of work students were being asked to complete using our method of calculation and to require SMEs to make adjustments, increasing or decreasing content or time on task. Our goal was to balance the student learning experience in courses across the five programs.
After the associate dean and the ID team identified all course learning activities, the five program chairs reviewed the CU calculations. Program chairs discussed time allotments, along with the methodology for calculating the total time required for a specific activity. Following the meeting with the program chairs, the dean scheduled a Town Hall in late 2016 to apprise faculty of the CU calculations attached to current courses. These calculations would also apply to any new course development or revision. To keep faculty informed and to communicate progress on the project, we held additional Town Halls in March 2017, February 2018, and July 2018.
Over the next year, in preparation for the HLC visit, we evaluated 138 courses using the CU calculations and provided the information to the Registrar. In doing so, we applied the same CU calculation to all courses, and those within the required 45 hours per credit and not scheduled for the three-year review/revision were left as is. Below is a summary of each program’s CU calculations.
Based on the calculations, the Registrar charged the College with redesigning, before the October 2018 HLC visit, any course that was over or under the 45-hours-per-credit target by one-half credit hour or more (±22.5 hours). Twenty-five courses fell into this critical group and were addressed first.
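The Registrar’s tolerance reduces to a simple check: flag any course whose total engagement hours deviate from 45 hours times its credit value by 22.5 hours or more. The course names below are invented for illustration; the two extreme CU totals are the high and low figures reported in the findings.

```python
# Illustrative flagging rule for the redesign group.
# Course names are hypothetical; CU totals illustrate the check.
TARGET_HOURS_PER_CREDIT = 45
TOLERANCE_HOURS = 22.5  # one-half credit hour

def needs_redesign(total_cu_hours, credits):
    """Flag a course whose engagement hours miss the target by >= 22.5 hours."""
    target = TARGET_HOURS_PER_CREDIT * credits
    return abs(total_cu_hours - target) >= TOLERANCE_HOURS

courses = {
    "Course X": (172.33, 3),  # well over the 135-hour target
    "Course Y": (140.00, 3),  # within tolerance
    "Course Z": (106.33, 3),  # well under the target
}
flagged = [name for name, (cu, credits) in courses.items()
           if needs_redesign(cu, credits)]
print(flagged)  # ['Course X', 'Course Z']
```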
The associate dean and ID team identified the most common issues in course development that affected overall CU:
- weekly modules with learning activities but no assessment;
- assumptions about the amount of time students spent on activities;
- lack of content to support the perceived time students engaged with course content;
- too much required content covering the same topics;
- required learning activities with no associated assessment to gauge student mastery of competencies;
- assessments to gauge student mastery with no associated learning activities;
- redundancy in assessments (discussion prompts and papers asking for the same evidence of learning); and
- optional assignments.
Table 3: CU Calculation per Program
Out of the College’s five programs, offering six degrees, three had always worked with instructional designers, and for them the process of amending courses, for the most part, ran smoothly. The two programs that had only started working with instructional designers when they merged with the College in 2014 found establishing requirements for student engagement more difficult, and the work required intense one-on-one collaboration between the SME and ID.
Finding 1: Too much content
The review found that most courses in Degrees D and F aligned with the established credit requirements, with student engagement balanced at 45 hours per credit. In Degree D, the course with the highest CU (172.33) underwent a review of its learning activities, and duplicative content (e.g., journal articles, webpages) was removed. In Degree F, the lowest-CU course (17.58) was a practicum. Working together, the SME and ID identified student activity already occurring in support of practicum completion and established course activities, such as weekly journals and activity logs, to capture it.
Many courses in Degrees B and C were grossly over the 135-hour threshold for three-credit courses, were not balanced in terms of student engagement, and required significant reductions in learning activities and assessments. The lowest-CU course (106.33) in Degree B required restructuring its discussion assignments, including higher word counts, to bring the course up to the minimum CUs. The highest-CU course (293.67) in Degree B had duplicative learning activities and assessments; in other words, students were given multiple learning activities relating to the same content and were then assessed multiple times on that content. Working with an ID, the SME removed the duplicative content and assessments, aligning the course with student engagement targets. This program had struggled with student retention, and ongoing monitoring will confirm whether reducing learning activities and assessments results in a better balance for students, thus positively affecting retention.
Finding 2: Too little content
Degree E needed to increase content in multiple courses (average CU/credit hour 38.15) to further engage students in assessment activities. Course reviews identified learning activities without assessments; that is, courses provided students with wide-ranging learning activities, but students were not assessed on mastery of the content presented. Initially, SMEs wanted simply to add more content to the courses, but that approach alone would not allow assessment of student mastery or alignment with established course competencies. As a result, IDs worked with SMEs to create authentic assessments aligned with all learning activities and supporting the course competencies.
Finally, we found that Degree A needed to increase student engagement (average CU/credit hour 33.88) and lacked sufficient learning activities. Review of the program’s courses identified assessments without associated learning activities; in other words, instructors were tasking students with completing assessments without supporting learning activities. The IDs did find that instructors were helping students learn material and complete assignments in one-on-one meetings; however, these tutorials placed too much of the onus for learning on the instructor rather than on the student. Student evaluations likewise noted the lack of learning resources in the courses but praised instructors’ willingness to hold one-on-one sessions to cover content. IDs and SMEs worked to increase the learning activities in each course, allowing instructors to engage students as a group through discussion board interactions.
As programs worked to align courses with the Registrar’s 45-hours-per-credit directive, the ID team was instrumental in identifying redundant learning activities, assessments with no prior learning activities, and assessments not tied to course competencies. During the revision process, SMEs continued to be responsible for choosing the course content, and the IDs were responsible for designing how the course content was incorporated into courses, which remains standard practice for course design (Miller & Stein, 2016).
However, as the process of aligning courses with CU calculations began, some faculty did not understand the scope of the requested changes. Others believed the administration was challenging their academic freedom: the instructional design team, not the faculty member, calculated the number of CUs in a course, and faculty then had to revise their courses by adding or removing content.
Likewise, when student engagement was assessed as too high, faculty in Degrees B and C believed the administration was “dumbing down” their courses. The number of assessments in these courses was unusually high, and faculty spent more time grading than interacting with students in the discussion boards. Furthermore, the CU calculation process revealed that SMEs were duplicating learning activities and assessments; for example, faculty assessed students on the same topic in both a discussion prompt and a written paper. Together the SMEs and IDs worked to achieve the 135-hour target for three-credit courses. Through the redesign process, the ID and SME removed redundant learning activities and restructured assessments, reducing the instructor’s grading load and allowing for more learner-instructor interaction. After the SME and ID redesigned a course and the curriculum committee approved it, the approved design became the template for all sections of that course, regardless of instructor. This approach ensured the course covered all competencies as defined; instructors were not permitted to modify course content or assessments once finalized.
The CU process helped the college prepare for what turned out to be a successful 2018 HLC reaccreditation visit. The HLC had made clear the need for online programs to be compliant in providing student engagement for the number of credits awarded, and it was equally clear that before its 2018 visit, we needed to move swiftly to bring the college in line with HLC standards. The fact that these changes needed to occur in time for the reaccreditation visit did not allow us to engage faculty as fully as we would have liked in the process of calculating CUs. Doing so may have helped curtail some of the faculty push-back with the process.
As a result, we, along with the Faculty Council, are currently revisiting some of the CU calculations. However, whatever the Faculty Council may recommend in terms of revisions to the calculations, courses must still meet the HLC criteria of being “consistent, reasonable, and documented” (Peterson-Solinski, 2017).
As a result of this ongoing process, courses are now balanced for learners in terms of learning activities, assessments, and engagement. Reviewing courses allowed SMEs to better align content to support student learning and students’ ability to demonstrate mastery of associated course competencies.
Craig, R. (2015, June 23). A brief history (and future) of online degrees. Forbes. Retrieved February 20, 2020, from https://www.forbes.com/sites/ryancraig/2015/06/23/a-brief-history-and-future-of-online-degrees/#145495aa48d9
Higher ed course design rubric (6th ed.). (2019). Quality Matters.
Miller, S., & Stein, G. (2016). Finding our voice: Instructional designers in higher education. EDUCAUSE Review. Retrieved September 13, 2019, from https://er.educause.edu/articles/2016/2/finding-our-voice-instructional-designers-in-higher-education
Online content calculator. (n.d.). Loyola University. https://www.loyola.edu/-/media/department/digital-teaching-learning/documents/online-teaching/detailed-content-calculator-user-guide.ashx?la=en
Perry, A., & Cochrane, D. (2018). Going the distance: Consumer protection for students who attend college online. The Institute for College Access & Success.
Peterson-Solinski, K. (2017, March 31-April 4). Preparing for the credit hour review. What you need to know to be ready! 2017 HLC Annual Conference, Chicago, IL https://s3.amazonaws.com/v3-app_crowdc/assets/d/d4/d47d6da891af9e7b/Credit_Hour_Workshop_Presentation_2017.original.1490628108.pdf?1490628110
Powell, K., Stephens Helm, J., Layne, M., & Ice, P. (2012). Quantifying online learning contact hours. Administrative Issues Journal: Education, Practice, and Research, 2(2), 80–93. https://doi.org/10.5929/2012.2.2.7
Reynolds, H. L., & Dowell Kearns, K. (2017). A planning tool for incorporating backward design, active learning, and authentic assessment in the college classroom. College Teaching, 65(1), 17–27. https://doi.org/10.1080/875675...
Structure of the U.S. education system: Credit systems. (2008). International Affairs Office, U.S. Department of Education. www2.ed.gov/about/offices/list/ous/international/usnei/us/credits.doc
Why Quality Matters. (n.d.). Quality Matters. Retrieved October 31, 2019, from https://www.qualitymatters.org/why-quality-matters/about-qm