Abstract

Unlike training and professional development, hiring faculty for online teaching remains underexplored by scholars. It is imperative that faculty hiring be approached thoughtfully and strategically so that institutions can deliver quality online instruction in alignment with their missions, visions, and values. To address this need, this article illustrates how to use job analysis as the foundation of an institution’s effective, strategic selection of online faculty. As institutions clarify the place and shape of online instruction given their approaches to market differentiation and branding, a future-oriented, strategic approach to job analysis is an asset in faculty hiring. To equip distance learning administrators with the knowledge to contribute to strategic job analysis at their institutions, this article outlines human resources practices in conducting job analysis, illustrates their application to faculty jobs with online teaching responsibilities, and elaborates how job analysis shapes effective, valid, and legally defensible recruitment and selection processes. With these techniques in hand, distance learning administrators can promote effective hiring of faculty for online teaching aligned with their institutions’ missions, visions, and values.

Introduction

Decades-long changes in higher education and the recent COVID-19 crisis have highlighted universities’ need for employees who can support online instruction. While sudden, mid-semester, industry-wide shifts from face-to-face to remote instruction quickly focused attention on faculty training and support, attention should also turn to how online faculty are hired. Online faculty jobs may be advertised with a terminal degree as the sole qualification (University of Northwestern Ohio, 2020). Literature on the roles, competencies, and tasks involved in online teaching and distance education has largely been framed as valuable for training and professional development (Thach & Murphy, 1995; Varvel, 2007; Bawane & Spector, 2009; Diehl, 2016; Magruder & Kumar, 2018; Martin et al., 2019). Few scholars have suggested that competency studies might inform recruitment and selection (Goodyear et al., 2001; Williams, 2003; Darabi et al., 2006). Unlike training and professional development, hiring faculty for online teaching remains largely underexplored by scholars (Schnitzer & Crosby, 2003; Patrick & Yick, 2005; Portugal, 2015). Yet effective faculty hiring is essential if institutions are to manage student recruitment, budgets, and program quality. And as more administrators grapple with the impacts of online instruction on their institutions’ futures, it is imperative that faculty hiring be approached thoughtfully and strategically so that institutions can deliver quality online instruction in alignment with their missions, visions, and values.

To address this need, this article illustrates how to use job analysis as the foundation of an institution’s effective, strategic selection of online faculty. Job analysis at one institution, in contrast to competency modeling of an occupation across an industry, provides an accurate prioritization of job tasks and requisite knowledge, skills, and abilities based on how the organization has conceived the job. Particularly as institutions clarify the place and shape of online instruction given their approaches to market differentiation and branding, a future-oriented, strategic approach to job analysis is an asset in faculty hiring. To equip distance learning administrators with the knowledge to contribute to strategic job analysis at their institutions, this article outlines human resources practices in conducting job analysis, illustrates their application to faculty jobs with online teaching responsibilities, and elaborates how job analysis shapes effective, valid, and legally defensible recruitment and selection processes. With these techniques in hand, distance learning administrators can promote effective hiring of faculty for online teaching aligned with their institutions’ missions, visions, and values.

Job Analysis in Human Resources Literature and Practice

Job analysis is a systematic process for gathering and analyzing information about important aspects of a job: work activities or tasks, tools and equipment, elements of the work environment, and the knowledge, skills, and abilities (KSAs) needed to perform the job.

A foundation for sustaining organizational capability, job analysis is a prerequisite for numerous human resources processes including hiring employees, appraising and managing their performance, determining their compensation, developing training to support their growth, and planning how to fulfill an organization’s changing workforce needs (Aguinis, 2013; Bernardin & Russell, 2013; Morgeson et al., 2019). Such processes for faculty employees may be conducted under other names (e.g., tenure and promotion) or overseen outside human resources offices. These human resources processes may be shaped by shared governance, institutional norms for policy development, and unions for faculty and other academic personnel.

Data Sources and Collection Methods

Human resources literature delineates numerous sound methods for conducting job analysis within an organization. Data about a job can be gathered through observation of incumbents, doing the job oneself, questionnaires, activity checklists, interviews, critical incident technique, work diaries, and existing records (Bernardin & Russell, 2013; Morgeson et al., 2019). Sound job analysis typically draws on multiple data sources to identify specific job tasks and broader behaviors, as well as the KSAs needed to perform the job. The KSAs are essential to the use of job analysis for selecting new employees (see Figure 1).


Figure 1. Job analysis produces key information for both the selection of new employees and the evaluation of current employees’ performance.

Three favored data collection methods are questionnaires, interviews, and critical incident technique. Questionnaires and interviews can be used to gather detailed information from incumbents, supervisors, subordinates, clients, and others who have relevant knowledge about the job. Through critical incident technique, individuals in these roles share behavioral descriptions of good and poor job performance, contextualized by descriptions of specific situations and the consequences of those job behaviors.

Job tasks can also be gathered through observations of incumbents or analysis of their work diaries. Observations can be documented in varied formats, including narratives and checklists. Observation periods should be selected carefully to capture representative samples of work behaviors (Marrelli, 2005). In work diaries, incumbents log the frequency, duration, and timing of individual job tasks. These diaries are analyzed for patterns and translated into prioritized duties. Diary formats can range from checklists to open-ended entries (Marrelli, 2007; Mandernach & Holbeck, 2016).

Job analysis can also draw on existing records. Traditionally, these include existing job descriptions, organizational charts, training manuals, and policy and procedure manuals (Bernardin & Russell, 2013). However, information in such records may be outdated or irrelevant. For example, some job-related information in the O*Net database of information on standardized jobs may not apply to a job in a given organization (National Center for O*NET Development, 2019). Therefore, this information must be evaluated for organizational relevance in conjunction with other data collection methods. In future-oriented, strategically aligned job analysis, relevant documents also include an institution’s mission, vision, and values, as well as strategic plans outlining how to achieve that vision (Sanchez & Levine, 2009; Schippmann, 2013).

Distinguishing Job Analysis from Competency Modeling

Job analysis differs in several respects from human resources approaches to competency modeling. While traditional job analysis focuses on a single job, is oriented towards the past, and describes incumbents’ behavior, competency modeling focuses on the organization, is oriented towards the future, and can influence incumbents’ behavior. Competency modeling within an organization can encompass behaviors that are essential to the employer brand and organizational strategy and therefore span all jobs in the organization. Yet competency modeling lacks the specificity and validity necessary for use on its own in developing selection systems (Sanchez & Levine, 2009).

Job analysis and human resources approaches to competency modeling should also be distinguished from other approaches to competency modeling. Academic researchers conducting competency modeling often use data collection methods seen in job analysis. However, they commonly investigate an occupation across many organizations (Patterson et al., 2008; Gardner et al., 2018; Chongwony et al., 2020). These researchers may emphasize the percentage of subject matter experts (SMEs) who acknowledge a skill, task, or competency, rather than how often incumbents in one organization perform it, and they may use a variety of procedures to validate competencies (Darabi et al., 2006).

Specific validation methods are central to employee selection. Procedures used to select employees must be both job-related and valid, where validity refers to the extent to which a selection measure predicts job performance. Common approaches to validation in employee selection include criterion-related validation and content validation, both of which have been shaped in response to U.S. employment law, court rulings, and the Uniform Guidelines on Employee Selection Procedures with particular attention to ensuring equal opportunity in employment. Criterion-related validation uses statistics to determine if selection procedures are accurate predictors of successful job performance. Correlational analysis clarifies statistically significant relationships between two sets of data, such as current employees’ performance appraisal scores and their performance on a test proposed for use in selection. Criterion-related validation can be used when a job attracts numerous applicants and is stable in content. When a job has fewer applicants, content validation may be most appropriate. Content validation determines if selection procedures and criteria address tasks, behaviors, and KSAs that are representative of the job and successful performance in it (Gatewood et al., 2016). Determining representativeness relies on systematic job analysis.
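To make the correlational logic concrete, the brief Python sketch below computes a validity coefficient from hypothetical incumbent data; the test scores, appraisal ratings, and sample size are invented purely for illustration.

```python
# A minimal sketch of criterion-related validation: correlate current
# employees' scores on a proposed selection test with their performance
# appraisal ratings. All values here are hypothetical.
from scipy import stats

test_scores = [72, 85, 64, 90, 78, 69, 88, 75, 81, 93]               # proposed selection test
appraisals = [3.1, 4.2, 2.8, 4.6, 3.5, 3.0, 4.4, 3.3, 3.9, 4.7]      # performance ratings

r, p_value = stats.pearsonr(test_scores, appraisals)
print(f"validity coefficient r = {r:.2f}, p = {p_value:.3f}")
# A statistically significant, positive r supports the test as a predictor
# of job performance. Small applicant pools undermine this approach, which
# is why content validation is often more appropriate for faculty jobs.
```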

Traditional and Strategic Job Analysis

Both traditional and future-oriented, strategic job analysis use a systematic, stepwise process for gathering and analyzing data from institutional SMEs about job tasks and KSAs. Based on the assumption that jobs are stable, traditional job analysis uses data on how a job has previously been performed to inform how employees are selected for the job in the future (Gatewood et al., 2016). This data about job tasks and KSAs is typically gathered from incumbents and their supervisors using data collection methods such as interviews, questionnaires, focus groups, and observations (see Table 1).

But as organizations have faced rapid environmental changes requiring adaptation for success, human resources researchers have called for alternatives that are more strategically oriented and based on future work activities and corresponding KSAs. Techniques for ensuring a strategic orientation include environmental scans and review of strategy documents (Singh, 2008; Schippmann, 2013). Techniques for clarifying future job activities and corresponding KSAs include considering what-if scenarios and using future-oriented questions in questionnaires and interviews (Schneider & Konz, 1989; Sanchez, 1994; Landis et al., 1998; Morgeson et al., 2019). A dual response questionnaire format can gather SME ratings for job aspects in both the past and future (Watkins et al., 2012). A thorough approach begins with traditional job analysis, gathers data about the future, asks SMEs to rerate tasks and KSAs based on anticipated future conditions, and prompts them to identify any new tasks and KSAs needed based on those conditions (Schneider & Konz, 1989). Scholars advocating strategic and future-oriented job analysis stress the importance of establishing the validity of selection systems given the legal context (Schneider & Konz, 1989; Sanchez, 1994; Singh, 2008).
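As a hypothetical illustration of analyzing dual response data, the sketch below compares invented current and future importance ratings to flag tasks whose priority is expected to shift; the tasks, ratings, and scale are assumptions, not drawn from any actual study.

```python
# A hypothetical analysis of dual-response ratings: each SME rates a task's
# importance under current and anticipated future conditions (1-5 scale).
from statistics import mean

# Invented data: task -> (current ratings, future ratings) across three SMEs
dual_ratings = {
    "Facilitate asynchronous discussions": ([4, 5, 4], [5, 5, 5]),
    "Create multimedia learning assets":   ([2, 3, 2], [4, 4, 5]),
    "Proctor on-campus exams":             ([3, 3, 4], [1, 2, 1]),
}

for task, (current, future) in dual_ratings.items():
    shift = mean(future) - mean(current)
    print(f"{task}: current {mean(current):.1f}, future {mean(future):.1f}, shift {shift:+.1f}")
# Large positive shifts signal emerging priorities that a future-oriented
# job analysis should carry forward into the selection plan.
```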

Table 1. Comparison of steps in traditional and strategic, future-oriented job analysis. Shaded steps are particularly important for designing a legally defensible selection system.


Conducting Job Analysis for Online Faculty Jobs

In addition to the legal and strategic contexts, conducting an institution-specific job analysis for online teaching jobs is important because of how the roles and responsibilities of online faculty can vary between institutions (Williams, 2003; Baran et al., 2011; Martin et al., 2019). Such differences may surface in curriculum development, course design, and learning asset creation. For example, those teaching a centralized curriculum typically spend less time on content development and more time on grading, feedback, and facilitation of discussion (Mandernach & Holbeck, 2016). Job responsibilities for online instruction may also be spread among multiple job types, such as faculty, instructional designers, multimedia specialists, and instructional technologists. Because institutions differ in their use and delineation of these jobs, the tasks and KSAs that are prioritized for online faculty jobs will also differ. These institutional differences should be reflected in effective and legally defensible selection processes. Job analysis is applicable whether an institution is seeking external candidates or selecting internal candidates for new online teaching assignments.

Initial Steps

Several initial steps distinguish strategic, future-oriented job analysis from traditional job analysis. These include gathering information about the future by conducting an environmental analysis, reviewing the organization’s future-oriented documents, and choosing an appropriately broad spectrum of SMEs, including those who can view the job from a future perspective. The environmental analysis should include a scan of internal and external opportunities, threats, and likely changes, including markets, technology, and economic and legal conditions. Relevant documents to review include the organization’s mission, vision, values, competitive strategy, and strategic initiatives. The environmental scan and review of the organization’s defining and future-oriented documents will clarify important conditions likely to impact future performance needed in the job and how future-oriented data can best be gathered. It is critical to identify a broad spectrum of SMEs for involvement in that future-oriented data collection. These SMEs should include members from varied organizational areas to ensure sufficient breadth of knowledge and a mixture of visionaries, incumbents, supervisors, and customers (Singh, 2008; Schippmann, 2013). Holding focus groups with SMEs in diverse roles can efficiently gather future-oriented data.

Identifying Job Tasks

In addition to focus groups, individual interviews with these SMEs are a common method of eliciting the tasks and duties involved in the job. In preparation, the roles of online instructors identified in the research literature can help structure interviews to thoroughly identify tasks. Drawing on Martin et al.’s (2019) recent literature review, common roles include:

  • Subject matter expert
  • Pedagogue/andragogue
  • Group process facilitator/Social roles
  • Assessor
  • Instructional designer/course designer
  • Technologist
  • Administrator
  • Advisor
  • Researcher

Within any given program, department, college, or university, online faculty may not play all these roles. Indeed, that is a primary reason why systematic job analysis conducted within an institution is so important. However, these roles can be used in interviews to focus attention on relevant areas when eliciting tasks. Within any given role, tasks can vary from one institution to another, due, for example, to different enterprise software, policies, and strategic priorities such as open educational resources (OER). Examples of appropriate interview questions include “Describe the major tasks in your job” and “What are the tasks that you perform less frequently but are still important for the job?” Future-oriented data can be gathered by changing the tense of such questions or by including others, such as “What tasks will become more important in the future?” and “What tasks in the job support elements of the organization’s (or unit’s) strategic plan?” Online faculty roles can be incorporated in subsequent questions to focus attention on areas in which tasks might have been overlooked. An example is “Are there any tasks where you function in an advisor role?”

Rating Job Tasks

Data gathered through interviews is then typically used to create a questionnaire that prompts SMEs to rate each task (see Table 2). Tasks should be rated on at least the first two of the following factors:

  1. the task’s criticality (importance) to the job
  2. the frequency with which it is performed by an incumbent
  3. task difficulty
  4. time needed to learn the task on the job

“Frequency” refers to how often one person performs the task while carrying out the job duties.

Table 2. Questionnaire format for SMEs to rate job tasks previously elicited through interviews.



Multiplying the average ratings of criticality and frequency for each task enables an efficient quantitative ranking of the tasks and clarifies what constitutes the most important tasks of the job (Aguinis, 2013; Bernardin & Russell, 2013; Gatewood et al., 2016). An illustration of such calculations and their utility for ranking tasks is shown in Table 3.

Table 3. Average ratings of frequency and criticality (importance) are multiplied to rank job tasks.
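The calculation behind Table 3 is simple enough to automate. The following Python sketch multiplies average criticality and frequency ratings and sorts tasks by the product; the task names and ratings are invented for illustration.

```python
# A minimal sketch of the task-ranking calculation: multiply each task's
# average criticality rating by its average frequency rating and sort
# descending. Tasks and ratings are hypothetical.
tasks = {
    "Provide timely feedback on student work": {"criticality": 4.8, "frequency": 4.6},
    "Facilitate online discussions":           {"criticality": 4.5, "frequency": 4.2},
    "Update course content each term":         {"criticality": 3.9, "frequency": 2.1},
}

ranked = sorted(tasks.items(),
                key=lambda item: item[1]["criticality"] * item[1]["frequency"],
                reverse=True)

for task, r in ranked:
    print(f"{r['criticality'] * r['frequency']:5.1f}  {task}")
```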


Deriving KSAs

In the next two steps, SMEs derive and rate KSAs. Given the list of highly ranked tasks created in the previous step, they derive the KSAs that are or will be needed to perform them. The focus is on deriving KSAs for tasks deemed essential or critical. SMEs are guided to derive specific statements, rather than simply appending “knowledge of,” “skill in,” or “ability to” to the beginning of task statements (Gatewood et al., 2016). Knowledge encompasses information that is applied to carry out job tasks. Examples of knowledge relevant to a faculty job in history with online teaching responsibilities could include knowledge of historical events, knowledge of instructional strategies, knowledge of methods of historical research, and knowledge of undergraduate history degree requirements. Skills are typically observable and can be measured, such as through various performance tests (e.g., typing tests). Skills can often be developed through training. Abilities are current, demonstrable capacities to apply multiple bodies of knowledge and skills to carry out a job task. Abilities may encompass attributes that can be acquired without formal instruction. Based on this distinction between skills and abilities, KSAs for a faculty job with online teaching responsibilities might include “skill in using the learning management system” and “the ability to collaborate in problem-solving.”

Individual job tasks commonly require multiple KSAs. For example, the task “provide timely feedback to students using appropriate technologies” may require several KSAs, including skill in time-management, ability to provide written and/or oral feedback on student work, and skill in using the learning management system. When SMEs derive KSAs from job tasks, one KSA may emerge as necessary for multiple tasks. It is appropriate to record the KSA for each relevant job task, rather than only recording it once.

Rating KSAs

KSAs must be accurately prioritized based on the job at one institution to support an effective and legal selection process. To do this, typically 10 to 20 SMEs rate the KSAs along various criteria. Rating scales may address the importance of each KSA, its job-relatedness, and the extent to which individual incumbents who have more of the KSA do a better job than those who have less of it (Gatewood et al., 2016). The questionnaire may also ask whether each KSA is necessary at job entry (see Table 4). This data will help identify which KSAs can be developed later through training and other learning activities, and which must be addressed earlier through selection procedures.

Table 4. Questionnaire format for SMEs to rate KSAs they previously derived from essential and critical job tasks.
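One practical use of the entry-requirement ratings can be sketched as follows. The KSAs, the yes/no votes, and the 50% threshold are all assumptions made for illustration; an institution would set its own decision rule.

```python
# A hypothetical routing of KSAs based on "necessary at job entry?" ratings:
# KSAs most SMEs mark as needed at entry are assessed in selection; the
# rest are developed through training. The 0.5 cutoff is an assumption.
ratings = {  # KSA -> SME yes/no responses to "necessary at entry?"
    "Skill in using the learning management system": [True, True, False, True],
    "Knowledge of departmental procedures":          [False, False, True, False],
}

for ksa, votes in ratings.items():
    share = sum(votes) / len(votes)
    route = "assess in selection" if share >= 0.5 else "develop through training"
    print(f"{ksa}: {share:.0%} -> {route}")
```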



While the question and rating scale regarding importance appear in both Table 2 and Table 4, the rating of KSAs for importance is not a repetition of the rating of tasks on this dimension. Including this rating for KSAs distinguishes the relative importance of the component KSAs needed for any one task. In addition, one KSA may be needed for multiple tasks across the job. For example, the ability to collaborate in solving problems may be necessary for both course design tasks and service tasks. Rating individual KSAs for importance can clarify how a KSA’s cumulative importance may differ from that of the individual tasks to which it is linked.

Linking KSAs to Tasks

The next step clarifies how important KSAs are connected to important tasks by gathering ratings from multiple SMEs. This linkage can be done through a matrix in which all important job tasks intersect with all important KSAs. Each intersection can be filled with a rating question such as “How important is this KSA in successfully performing this job task?” (not at all, slightly, moderately, considerably) (Gatewood et al., 2016, p. 87). This linkage may be oriented to the present or future. Linkage is needed not only to defend selection procedures against charges of discriminatory practices, but also to enable the selection process to focus on the KSAs that are most essential for successful job performance. For a KSA to be essential to the selection plan, it must be linked to at least one important job task.
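The linkage matrix and the at-least-one-task rule might be represented as in the sketch below. The tasks, KSAs, ratings, and threshold are illustrative assumptions; the 0–3 scale is one possible mapping of the four response options quoted above.

```python
# A sketch of a task-KSA linkage matrix. Each cell holds the mean SME rating
# of "How important is this KSA in successfully performing this job task?"
# mapped to 0-3 (not at all .. considerably). All names and values invented.
linkage = {
    # (task, KSA): mean linkage rating
    ("Provide timely feedback", "Skill in using the LMS"):                    2.7,
    ("Provide timely feedback", "Ability to give written feedback"):          3.0,
    ("Facilitate discussions",  "Skill in using the LMS"):                    2.1,
    ("Facilitate discussions",  "Ability to collaborate in problem-solving"): 0.4,
}

THRESHOLD = 1.5  # assumed minimum mean rating to count as a meaningful link

# A KSA stays in the selection plan only if linked to at least one task.
linked_ksas = {ksa for (_task, ksa), rating in linkage.items() if rating >= THRESHOLD}
print(sorted(linked_ksas))
```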

Creating the Selection Plan

Linkage data provides the foundation for the selection plan. After identifying the KSAs to be assessed, a plan is created for using appropriate selection procedures. Common selection procedures include evaluation of samples of job performance, candidates’ oral explanations, and their past educational and work experiences as represented in written materials. A selection plan typically includes the weighting of KSAs and shows which KSAs will be assessed through each selection procedure (see Table 5). Appropriate weighting can be derived through calculations using previously gathered data. For example, relative KSA importance can be derived from previous questionnaires by multiplying KSA importance ratings by the importance ratings for the linked tasks. These products can be treated as points that, when totaled, determine what percentage of the total points each KSA contributes (Gatewood et al., 2016).

Table 5. Format of a selection plan that clarifies the relative weight of KSAs and how each will be assessed through selection procedures.
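As a hypothetical illustration of the weighting calculation described above, the following sketch converts invented importance ratings and linkages into percentage weights; the KSAs, tasks, and numbers are assumptions for demonstration only.

```python
# A sketch of deriving KSA weights: multiply each KSA's importance rating by
# the importance rating of each linked task, sum the products as points, and
# express each KSA's share of total points as a percentage. Values invented.
ksa_importance = {"LMS skill": 4.5, "Written feedback": 4.8, "Collaboration": 3.2}
task_importance = {"Provide feedback": 4.8, "Facilitate discussions": 4.5}
links = [  # (KSA, linked task) pairs from the linkage matrix
    ("LMS skill", "Provide feedback"),
    ("LMS skill", "Facilitate discussions"),
    ("Written feedback", "Provide feedback"),
    ("Collaboration", "Facilitate discussions"),
]

points = {ksa: 0.0 for ksa in ksa_importance}
for ksa, task in links:
    points[ksa] += ksa_importance[ksa] * task_importance[task]

total = sum(points.values())
for ksa, pts in points.items():
    print(f"{ksa}: {pts:.1f} points -> weight {100 * pts / total:.0f}%")
```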




Using Job Analysis in Recruitment and Selection

Job analysis impacts both recruitment and selection. Recruitment motivates people to apply for a position; selection gathers information from those applicants to evaluate their qualifications for the job. In this context, job analysis has two purposes: communicating accurately to applicants what a job is like and supporting systematic selection procedures (Gatewood et al., 2016).

Job Ad Wording

In recruitment, job analysis informs the wording of the job advertisement, shaping the description of responsibilities and qualifications. The description of responsibilities should accurately reflect the prioritization of tasks revealed through the job analysis. While institutions occasionally list KSAs directly in their job advertisements (Northcentral University, 2020), they more commonly list qualifications that reflect the KSAs in the selection plan. These qualifications are often distinguished as two lists: minimum qualifications, which are essential to performing the job at entry, and preferred qualifications, which enable one to perform the job at a higher level. These lists may include a mixture of individual KSAs and education and experiences that are taken as concise proxies for multiple KSAs. Which list a given qualification appears in will vary by job and institution, based on the institution’s job analysis.

Evaluation of Written Application Materials

The evaluation of written application materials should be guided by the qualifications in the job advertisement, which were derived from the highly ranked KSAs in the job analysis. This entails an analytical approach to evaluating these materials, rather than a holistic approach. Applicants’ written application materials are first considered with respect to the minimum qualifications, and then the preferred qualifications, ideally using a rubric. Only those applicants who possess all the minimum qualifications can remain under consideration for the position. Those who possess all the minimum qualifications can still be hired in the event that none of them possess any of the preferred qualifications.
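The screening logic described above can be sketched in a few lines of Python; the qualifications and applicants are invented for illustration, and a real process would score preferred qualifications with a rubric rather than a simple count.

```python
# A minimal sketch of analytical screening: applicants must meet every
# minimum qualification to advance; preferred qualifications then rank
# those who remain. All names and qualifications are hypothetical.
minimum = {"terminal degree", "online teaching experience"}
preferred = {"LMS administration", "OER development"}

applicants = {
    "A": {"terminal degree", "online teaching experience", "OER development"},
    "B": {"terminal degree"},  # missing a minimum qualification
    "C": {"terminal degree", "online teaching experience"},
}

eligible = {name: quals for name, quals in applicants.items() if minimum <= quals}
ranked = sorted(eligible, key=lambda n: len(eligible[n] & preferred), reverse=True)
print(ranked)  # ['A', 'C']; B is screened out despite any preferred strengths
```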

Interview Questions

Job analysis also informs questions asked in structured interviews. Questions in screening interviews and on-site interviews will normally be keyed to qualifications in the job advertisement and based on KSAs in the selection plan. Questions can focus on qualifications for which written application materials provided some evidence, but for which further evidence is needed based on the selection plan. Questions can prompt applicants to discuss their past actions relevant to the qualifications, as in “How have you changed your approach to instruction based on feedback you have received from students in end-of-semester course surveys?” Among selection procedures used as predictors of future job performance, such behavioral questions take past behavior as a predictor of future behavior. Situational questions, by contrast, ask applicants what they would do in a future situation. For example, applicants shown hypothetical student feedback from an end-of-semester course survey could be asked “What steps would you take as an instructor in response to this feedback?” Situational questions take stated intentions of behavior as predictors of future behavior. Both types of questions are regarded as having high validity (Klehe & Latham, 2005; Oostrom et al., 2016).

When critical incident technique is used in job analysis, the resulting incidents can be used to generate interview questions and accompanying rating scales. A panel of SMEs develops critical incidents aligned with the most important tasks and KSAs. Situational information provided in their descriptions of critical incidents can be used in behavioral or situational interview questions (Gatewood et al., 2016). Interview questions based on critical incidents reflective of important tasks and KSAs provide more valuable job-related information than questions generated from generic lists or issues of personal interest to individual search committee members. The most robust type of rating scale for use with these interview questions is a behaviorally anchored rating scale (BARS) (see Table 6).

Table 6. Format of a behaviorally anchored rating scale (BARS) for use with a structured interview question.

Work Sample Tests

Work sample tests require applicants to perform part of the job in a realistic or simulated setting. Examples of work sample tests for varied jobs include roleplaying customer service interactions, writing memos, and troubleshooting a machinery breakdown. Classroom teaching demonstrations and research presentations are examples of work sample tests commonly used when selecting faculty. The work sample test noted in Table 5 could be an online teaching demonstration encompassing both synchronous and asynchronous instruction and designed to provide information relevant to KSAs that address use of the learning management system, facilitation of discussions, provision of feedback to students, and use of presentation and webconferencing software.

Conclusion

Systematic job analysis is the foundation of legally defensible hiring and, when conducted with a future orientation, an effective way to ensure that hiring is aligned with an institution’s distinctive mission, values, and vision for market differentiation as pursued through strategic planning. Job analysis guides sound hiring beyond accreditation requirements and resolves at the institutional level ongoing debates over what kind of knowledge should be emphasized when hiring faculty for online teaching (Magruder & Kumar, 2018). Distance learning program administrators, human resources staff, online faculty, and their supervisors can collaborate in conducting sound institution-specific job analysis. Once established in practice, using job analysis iteratively to update job descriptions enables institutions to address the evolving nature of the roles and competencies of online instructors and the dynamic nature of online learning environments (Bobko et al., 2008).

While the systematic approach to job analysis detailed here has been significantly shaped by U.S. employment law, it can also be used to select online faculty abroad. A broader international context has already shaped studies of online instructor roles and competencies, often to inform training and development (Thach & Murphy, 1995; Goodyear et al., 2001; Aydin, 2005; Bawane & Spector, 2009). Job analysis at the institutional level is a logical and systematic method of creating effective selection procedures that complement training and development initiatives. Whether used in the U.S. or abroad, job analysis can help educational organizations respond to dynamic learning contexts by focusing their online faculty hiring efforts on the abilities and work most aligned with their distinctive missions, visions, and values.



References

Aguinis, H. (2013). Performance management (3rd ed.). Upper Saddle River, NJ: Pearson.

Aydin, C. S. (2005). Turkish mentors’ perception of roles, competencies and resources for online teaching. Turkish Online Journal of Distance Education, 6(3), 58–80.

Baran, E., Correia, A.-P., & Thompson, A. (2011). Transforming online teaching practice: critical analysis of the literature on the roles and competencies of online teachers. Distance Education, 32(3), 421–439. https://doi.org/10.1080/01587919.2011.610293

Bawane, J., & Spector, J. M. (2009). Prioritization of online instructor roles: Implications for competency-based teacher education programs. Distance Education, 30(3), 383–397. https://doi.org/10.1080/01587910903236536

Bernardin, H. J., & Russell, J. E. A. (2013). Human resource management: An experiential approach (6th ed.). New York: McGraw-Hill.

Bobko, P., Roth, P. L., & Buster, M. A. (2008). A systematic approach for assessing the currency (“up-to-dateness”) of job-analytic information. Public Personnel Management, 37(3), 261–277.

Chongwony, L., Gardner, J. L., Conklin, S., & Tope, A. (2020). Instructional design leadership and management competencies: Job description analysis. Online Journal of Distance Learning Administration, 23(1). http://www.westga.edu/~distance/ojdla/spring231/Chongwony_Gardner_Tope231.html

Darabi, A. A., Sikorski, E. G., & Harvey, R. B. (2006). Validated competencies for distance teaching. Distance Education, 27(1), 105–122.

Diehl, W. C. (2016). Online instructor and teaching competencies: Literature review for Quality Matters. https://www.qualitymatters.org//sites/default/files/research-docs-pdfs/QM-Online-Instructor-Teaching-Competencies-2016.pdf

Gardner, J., Chongwony, L., & Washington, T. (2018). Investigating instructional design management and leadership competencies—A Delphi study. Online Journal of Distance Learning Administration, 21(1).
http://www.westga.edu/~distance/ojdla/spring211/gardner_chongwony_washington211.html

Gatewood, R. D., Field, H. S., & Barrick, M. R. (2016). Human resource selection (8th ed.). Boston: Cengage.

Goodyear, P., Salmon, G., Spector, J. M., Steeples, C., & Tickner, S. (2001). Competencies for online teaching: A special report. Educational Technology Research and Development, 49(1), 65–72. https://doi.org/10.1007/BF02504508

Klehe, U.-C., & Latham, G. P. (2005). The predictive and incremental validity of the situational and patterned behavior description interviews for teamplaying behavior. International Journal of Selection and Assessment, 13(2), 108–115.

Landis, R. S., Fogli, L., & Goldberg, E. (1998). Future-oriented job analysis: A description of the process and its organizational implications. International Journal of Selection and Assessment, 6(3), 192–198.

Magruder, O., & Kumar, S. (2018). e-Learning instruction: Identifying and developing the competencies of online instructors. In A. Piña, V. Lowell, & B. Harris (Eds.), Leading and managing e-learning. Cham: Springer.

Mandernach, B. J., & Holbeck, R. (2016). Teaching online: Where do faculty spend their time? Online Journal of Distance Learning Administration, 19(4). https://www.westga.edu/~distance/ojdla/winter194/mandernach_holbeck194.html

Marrelli, A. F. (2005). The performance technologist’s toolbox: Observations. Performance Improvement, 44(2), 39–43. https://doi.org/10.1002/pfi.4140440210

Marrelli, A. F. (2007). The performance technologist’s toolbox: Work diaries. Performance Improvement, 46(5), 44–48. https://doi.org/10.1002/pfi.133

Martin, F., Budhrani, K., Kumar, S., & Ritzhaupt, A. (2019). Award-winning faculty online teaching practices: Roles and competencies. Online Learning, 23(1), 184–205. https://doi.org/10.24059/olj.v23i1.1329

Morgeson, F. P., Brannick, M. T., & Levine, E. L. (2019). Job and work analysis: Methods, research, and applications for human resource management. Los Angeles: SAGE Publications.

National Center for O*NET Development. (2019). History teachers, postsecondary. O*Net Report No. 25-1125.00. https://www.onetonline.org/link/summary/25-1125.00

Northcentral University. (2020). Professor: School of Technology - data science. https://www.ncu.edu/careers?p=job%2FoAqAcfww

Oostrom, J. K., Melchers, K. G., Ingold, P. V., & Kleinmann, M. (2016). Why do situational interviews predict performance? Is it saying how you would behave or knowing how you should behave? Journal of Business and Psychology, 31, 279–291. https://doi.org/10.1007/s10869-015-9410-0

Patrick, P. K. S., & Yick, A. G. (2005). Standardizing the interview process and developing a faculty interview rubric: An effective method to recruit and retain online instructors. Internet and Higher Education, 8(3), 199–212.

Patterson, F., Ferguson, E., & Thomas, S. (2008). Using job analysis to identify core and specific competencies: Implications for selection and recruitment. Medical Education, 42(12), 1195–1204. https://doi.org/10.1111/j.1365-2923.2008.03174.x

Portugal, L. M. (2015). Hiring, training, and supporting online faculty for higher student retention efforts. Journal of Instructional Research, 4, 94–107.

Sanchez, J. I. (1994). From documentation to innovation: Reshaping job analysis to meet emerging business needs. Human Resource Management Review, 4(1), 51–74.

Sanchez, J. I., & Levine, E. L. (2009). What is (or should be) the difference between competency modeling and traditional job analysis? Human Resource Management Review, 19(2), 53–63.

Schippmann, J. S. (2013). Strategic job modeling: Working at the core of integrated human resources. New York: Psychology Press.

Schneider, B., & Konz, A. M. (1989). Strategic job analysis. Human Resource Management, 28(1), 51–63.

Schnitzer, M., & Crosby, L. S. (2003). Recruitment and development of online adjunct instructors. Online Journal of Distance Learning Administration, 6(2). https://www.westga.edu/~distance/ojdla/summer62/crosby_schnitzer62.html

Singh, P. (2008). Job analysis for a changing workplace. Human Resource Management Review, 18(2), 87–99.

Thach, E. C., & Murphy, K. L. (1995). Competencies for distance education professionals. Educational Technology Research and Development, 43(1), 57–79.

University of Northwestern Ohio. (2020). Multiple online adjunct faculty positions. https://www.unoh.edu/careers/index.php?career_id=263

Varvel, V. E. (2007). Master online teacher competencies. Online Journal of Distance Learning Administration, 10, 1–47. http://www.westga.edu/~distance/ojdla/spring101/varvel101.htm

Watkins, R., Meiers, M. W., & Visser, Y. L. (2012). A guide to assessing needs. Washington, D.C.: The World Bank.

Williams, P. E. (2003). Roles and competencies for distance education programs in higher education institutions. The American Journal of Distance Education, 17(1), 45–57.