Community Readiness and its Importance to Departmental Readiness
About the Tri-Ethnic Community Readiness Model (CRM) and Community Readiness Tool (CRT)
The Tri-Ethnic Center for Prevention Research at Colorado State University created the Community Readiness Model (CRM), which identifies community readiness across multiple dimensions and levels. The model also includes an instrument, the Tri-Ethnic Center Community Readiness Tool (CRT), that community members can easily use and score to determine community readiness (Oetting et al., 1995). Because the CRM/CRT defines community readiness as issue- and community-specific, we were able to modify the CRT to define our community as a biomedical department and our issue as “the recruitment and retention of faculty from diverse backgrounds.” We determined that the academic department is a community based on the shared characteristics of its members and/or the strength of the relationships among them. First, departments are typically characterized by a significant degree of face-to-face interaction among their members. Second, beyond the many responsibilities that accompany the basic functions of teaching and research, members are likely to share superordinate goals as a result of their collective responsibility for managing the department’s daily operations (e.g., staffing and scheduling courses, recruiting, mentoring, and admitting new students). Lastly, Tolbert, Simmons, Andrews, et al. (1995) suggest that academic departments often exert a great deal of authority over personnel decisions, including the admission of new members and the promotion of existing employees to higher ranks; these personnel decisions are frequently portrayed as a scarce resource that may be subject to competition (p. 565). Because different divisions or committees within a department may be affected by different stated concerns, intervention efforts are typically fragmented. For departmental change to be long-lasting, members of the department must work together to develop interventions tailored to their specific circumstances (Plested et al., 1998).
Because the goal of this study is to assess readiness for behavior change at the department level, the CRM and CRT are innovative tools for assessing a department’s resources, capacity, and willingness to develop and implement intervention efforts related to an issue.
The CRT has evolved over time, and the current version evaluates six dimensions of readiness. In addition to the six dimensions, the CRT defines nine stages of community readiness, ranging from “no awareness” of the problem to a “high level of community ownership” of the issue. The readiness scores for each dimension are combined to produce the community’s overall readiness score on a specific issue; these scores are useful in learning how prepared the community is to deal with the issue and in planning how to address it. Thus, in addition to an overall stage of readiness, a community can be at a different stage of readiness on each of the six dimensions.
Dimensions of Community Readiness:
A community’s level of preparedness can vary on each of the six dimensions of community readiness. The dimensions are crucial factors that influence a community’s readiness to address the issue. The model identifies six dimensions:
Table 1 Definitions and Descriptions of the Tri-Ethnic CRM Dimensions

| Dimension | Key Question |
| --- | --- |
| A: Efforts | To what extent are there efforts, programs, and policies that address the recruitment and retention of faculty from diverse backgrounds? |
| B: Department Knowledge of Efforts | To what extent do department members know about the department’s efforts and their effectiveness, and are the efforts accessible to all segments of the department? |
| C: Leadership | To what extent are appointed leaders and influential department members supportive of the recruitment and retention of faculty from diverse backgrounds? |
| D: Department Climate | What is the prevailing attitude of the department toward the recruitment and retention of faculty from diverse backgrounds? |
| E: Department Knowledge of the Issue | To what extent do department members know about the issue, its consequences, and how it impacts the department? |
| F: Resources | To what extent are people, time, money, space, etc., available to support these efforts? |
Community Readiness Described on Nine Different Levels
The readiness levels indicate how ready the community is to confront the problem at hand. The overall level of community readiness is determined by how the community scores across all dimensions. Keep in mind that readiness can vary, sometimes significantly, across dimensions; for instance, the levels of community activities and resources can differ greatly. Table 2 presents the complete list of the stages of community readiness with a brief example of each stage.
Table 2 Stages of Community Readiness in the Community Readiness Model (CRM)

| Stage | Name | Example |
| --- | --- | --- |
| 1 | No Awareness | “It is just the way things are” |
| 2 | Denial | “We can’t do anything about it” |
| 3 | Vague Awareness | “Something should be done, but what?” |
| 4 | Pre-planning | “This is important. What can we do?” |
| 5 | Preparation | “We know what we want to do and we are getting ready” |
| 6 | Initiation | “We are starting to do something” |
| 7 | Stabilization | “We have support, we’re leading, and we think it is working” |
| 8 | Expansion | “Our efforts are working; how can we expand?” |
| 9 | High Level of Departmental Ownership | “These efforts are part of the fabric of our department” |
Leaders must grasp the readiness level of their community to organize their activities to begin at that level, progress the community to the next level, and continue to advance the community incrementally. Once leaders have determined their community’s readiness, they can plan the steps necessary to accomplish the desired intervention outcomes.
Definition of the issue and the community
In 2020, the leadership team of the AGEP PROMISE Academy Alliance (APAA) began discussions with our External Advisory Board about our commitment to ensuring that departments are “prepared” to admit Fellows from underrepresented backgrounds. Given the vast body of research demonstrating how bias, climate, and microaggressions, both personal and structural, discourage underrepresented scholars from remaining in the academy, we were asked to implement a method for determining whether departments were safe and supportive environments for our Fellows.
According to Jumper-Thurman et al. (2003), the concept of community depends on the particular problem or situation. Community members can also be organization members; hence, the community readiness concept applies to both communities of interest and communities of place. Consequently, for the purposes of this project, we define the community as a department within a single institution that shares an interest in the biomedical sciences, and we apply the community readiness theory and model (CRM) to examine departmental readiness in the USM.
Discovering the tool
As part of the UMBC Fellowship for Faculty Diversity, departments are required to submit a mentoring plan for their incoming pre-professoriate fellows. The plan is used by the UMBC executive committee to determine the readiness of departments for the UMBC project. Although the plan’s questions explored departments’ attempts to recruit and retain prospective URM pre-professoriate fellows, no quantitative evaluation rubric had been devised. We needed a mechanism that could be used to objectively evaluate a department’s readiness to support any new program involving the recruitment and retention of faculty from diverse backgrounds (Reed, 2018). Over the winter of 2020–2021, our internal evaluator conducted a comprehensive literature search on organizational readiness and community readiness in search of an existing measure, or one that could be modified, to assess what we refer to as “department readiness.” A comprehensive examination of this literature is beyond the scope of this report; however, Castañeda, Holscher, Mumman, et al. (2012) provide a thorough analysis of the research on readiness and capacity to change. Their review provides an overview of the various constructs and models, which emphasize four domains of readiness: (1) a community and organizational climate that supports change, (2) attitudes and ongoing prevention efforts, (3) commitment to change, and (4) the capacity to implement change.
In the spring of 2021, the internal evaluator identified a proven Community Readiness Model (CRM) instrument (Oetting, Donnermeyer, et al., 1995; Oetting et al., 2014) for an internal assessment of five Alliance biomedical departments, to examine whether departments were planning, revising, and committing meaningfully to the recruitment and retention of faculty from diverse backgrounds. Community readiness is defined as the level at which individuals and groups are willing to accept and support the implementation of new programs or activities in the community (Donnermeyer et al., 1997). Oetting et al. (2014) give the most up-to-date and comprehensive review of the theory’s evolution, as well as all the tools (the CRT) necessary for applying the model. The CRM enables researchers to precisely describe a community’s level of development with respect to a particular topic or problem (Jumper-Thurman et al., 2003).
Rather than treating departmental readiness as a binary concept, we were interested in developing and refining a tool for assessing levels of departmental readiness, because many promising strategies for increasing faculty diversity in STEM disciplines within a university system involve a redesign that occurs on many levels, with multiple, concurrent changes to policies, practices, communication, coalition capacity building, and collaboration. Like other researchers in change management who study organizational and community readiness for change, we regard “departmental readiness” as a latent concept with multiple levels (Chilenski, Greenberg & Feinberg, 2007; Rafferty et al., 2013).
Explanation of the Tool and Methods
The CRT consists of 36–40 questions assessing six dimensions of readiness; the key question for each dimension is listed in Table 1 above.
Instrumentation: Tailoring the Interview Guide for Departmental Readiness
The interviews were conducted by an outside consultant using a modified version of the Tri-Ethnic Center Community Readiness Assessment Interview Questions (Oetting et al., 2014). The original version was designed for addressing alcohol and drug abuse and contains 41 open-ended questions across five dimensions: Community Knowledge of the Issue; Community Knowledge of Efforts; Community Climate; Leadership; and Resources (Oetting et al., 2014). Of the 41 questions, some in each dimension are bolded, meaning they must be asked for the interview score to be valid. Non-bolded questions are optional and may be skipped if time is an issue and/or the information they would provide is not crucial for other purposes. The questions are general enough that adapting them to the issue “the recruitment and retention of faculty from diverse backgrounds” required very little editing. For example, in the dimension of Community Knowledge of Efforts, question 2 reads: “Are there efforts in (community) that address (issue)?” The question was easily adapted by filling in the blanks: “Are there efforts in the department that address the recruitment and retention of faculty from diverse backgrounds?” There is also an option to add questions to the instrument, such as the eight optional demographic questions. We did not use the demographic questions, to avoid inadvertently identifying the participants. A copy of the final instrument is provided in Appendix A.
Prior to the interviews, each question was adapted to reflect the issue of the recruitment and retention of faculty from diverse backgrounds in the department. Next, we piloted the instrument with a member of the leadership team to assess its readability and make small improvements. We did this for both the interviewer, who would be reading the questions, and the interviewees, because new questions should be simple to understand and should elicit the information needed to rate each dimension. The evaluator’s pilot interviews indicated that 45 minutes was adequate for conducting an interview.
Steps to Assess Departmental Readiness
The CRT outlines a step-by-step process for assessing community readiness to address an important issue. Our steps were:
- Identify and define the specific issue: We defined the issue as “the recruitment and retention of faculty from diverse backgrounds.”
- Define the community: We defined the “community” as a biomedical department in the USM that employed an APAA fellow.
- Identify key respondents: We received approval in the “exempt” category from the Institutional Review Board of the lead institution. Representatives from each of the five institutions’ leadership teams selected six to eight individuals from each department whom they felt to be the most knowledgeable about the department’s activities. After securing their permission, the team leader provided an email address for each key informant.
- Prepare interview questions: We modified the instrument to reflect our specific issue and our defined community.
- Conduct and record structured interviews with key respondents via Webex or Zoom: We hired an outside consultant to conduct and record interviews with key informants in the USM biomedical departments taking part in the APAA postdoctoral initiative. The consultant scheduled the one-hour interview slots and administered the consent and interview protocol.
- Obtain transcripts of the department readiness interview recordings.
- Score the interviews and calculate overall and dimension-specific readiness scores.
- Create a report describing the department readiness assessment process and presenting the department’s readiness scores.
- Develop strategies consistent with those readiness levels.
Reducing Interviewer Bias: Hiring an outside consultant
The departmental readiness issue, i.e., the recruitment and retention of faculty from diverse backgrounds, involved face-to-face virtual conversations about race in a potentially multiracial setting. Researchers on race and ethnicity concur that racial issues are typically “hot button” issues in the United States, often causing people of color to become vocally angry and white people to become silent, defiant, or disconnected (Singleton & Hays, 2008; Kaplowitz & Griffin, 2019). The project necessitated hiring a seasoned researcher with the ability to effectively facilitate conversations about the lack of racial diversity among the faculty. Our internal evaluator, a race and ethnicity expert, understood the power, racial, and social dynamics involved in interviewing the key informants; from the start, the evaluator recommended that we hire an outsider to interview the 30 key informants. In Race Dialogues: A Facilitator’s Guide to Tackling the Elephant in the Classroom, the authors suggest that “an important part of the role of the facilitator in dialogues is to ensure that societal inequality are not re-enacted within the dialogue space” (p. 47). Thus, to insulate this qualitative research from interviewer bias and reduce the likelihood of collecting biased data, we hired an interviewer whose racial and gender characteristics mirrored those of the majority of participants.
In qualitative research, the presence of the interviewer may influence responses. The interviewer’s role is to ask questions and listen to the interviewees without judgment. In a face-to-face interview, participants may be inclined to answer questions as they believe they are expected to, or in ways they believe will make them appear more likable or cooperative. Nevertheless, face-to-face interviews may provide more information and are therefore preferable: the information may be of higher quality because the interviewer can observe facial expressions and body language that may contradict spoken responses, show discomfort, or reveal other subtleties of emotion that could provide crucial information.
Key Informant Interviews
Because readiness is issue- and community-specific, an a priori definition of the issue and the target community is the starting point for applying the Tri-Ethnic CRT. Conducting interviews with six to eight key informants in the community (for our purposes, the department) is a critical component of the CRM. Key informants are often individuals who are knowledgeable about the department/community but are not necessarily leaders or decision-makers. Key informants for department readiness interviews are department members who are involved in department affairs and are aware of what is taking place in the department. The key informants/respondents were carefully chosen to represent the overall department (tenured and non-tenured faculty, administrators, deans/leaders).
Following the selection of key informants, interviews consist of approximately 20 to 40 questions customized to the community (i.e., the department) and the issue (i.e., the recruitment and retention of faculty from diverse backgrounds). The CRT interview guide is included in this report (see Appendix A). CRT interviews are recorded so that a transcript can be created for the scoring process. Key informant interviews, conducted online by an outside consultant, began in May and were completed in August 2021.
Scoring Department Readiness Interviews Using the CRT
The CRT online manual outlines a straightforward, step-by-step method for determining dimension and overall department readiness scores. Completed interviews are professionally transcribed and independently scored by two individuals using the CRT’s anchored rating scales of readiness; the final scores are then decided by consensus to reduce scoring bias and obtain reliable scores. Based on each key informant interview, each scorer assigned a readiness level ranging from one to nine to each of the six dimensions. The individual scorers then met to compare and agree upon the scores of each dimension for each interview (referred to in the CRT as a “consensus score”). Each question in every dimension is scored according to its own anchored rating scale, and the scores are recorded in a Department Readiness Scoring Sheet, which contains a matrix for individual scores and consensus scores. The final department dimension scores are calculated by averaging each dimension’s scores across all interviews; the overall readiness score is then computed as the average of the six dimension values.
According to explicit instructions in the CRT, when scores fall between two whole numbers, the prescribed procedure for determining the stage of readiness is to round down to the lower stage at which the participants’ answers meet all of the descriptive statements of that stage. In reviewing the consensus scores for each participant on each dimension, this rounding-down procedure is crucial, because low scores on one dimension reduce the readiness score of the entire department to the lower level.
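The scoring arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the CRT itself; the function names and the sample consensus scores are hypothetical, while the stage labels follow Table 2 and the rounding-down rule is applied via integer truncation.

```python
import statistics

# Stage labels from the CRM (Table 2), indexed 1-9.
STAGES = {
    1: "No Awareness", 2: "Denial", 3: "Vague Awareness",
    4: "Pre-planning", 5: "Preparation", 6: "Initiation",
    7: "Stabilization", 8: "Expansion",
    9: "High Level of Departmental Ownership",
}

def dimension_scores(interviews):
    """Average each dimension's consensus scores across all interviews.

    `interviews` is a list of dicts mapping dimension name -> consensus
    score (1-9) for one key-informant interview.
    """
    dims = interviews[0].keys()
    return {d: statistics.mean(i[d] for i in interviews) for d in dims}

def overall_readiness(dim_scores):
    """Overall readiness is the mean of the six dimension averages."""
    return statistics.mean(dim_scores.values())

def stage(score):
    """Per the CRT, a fractional score is rounded DOWN to the lower stage."""
    return STAGES[int(score)]

# Hypothetical consensus scores from three interviews:
interviews = [
    {"Efforts": 4, "Knowledge of Efforts": 3, "Leadership": 5,
     "Climate": 3, "Knowledge of Issue": 2, "Resources": 4},
    {"Efforts": 5, "Knowledge of Efforts": 3, "Leadership": 4,
     "Climate": 4, "Knowledge of Issue": 3, "Resources": 4},
    {"Efforts": 4, "Knowledge of Efforts": 4, "Leadership": 5,
     "Climate": 3, "Knowledge of Issue": 3, "Resources": 5},
]

dims = dimension_scores(interviews)
total = overall_readiness(dims)
print(f"Overall readiness: {total:.2f} -> stage: {stage(total)}")
```

With the sample scores above, the overall average falls between 3 and 4, so the department is placed at stage 3 (“Vague Awareness”) rather than rounded up, illustrating how a few low dimension scores pull the whole department to the lower stage.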
Procedure for reaching consensus
To arrive at the consensus scores, the two scorers met after completing their independent scoring; the final scores were determined by consensus. To reach consensus, the scorers shared their ratings for each interview dimension one at a time. When the scores differed, each scorer explained how he or she had arrived at the score, and the two then talked until a consensus score was established. None of the scores differed by more than two scoring levels, and just a handful differed by more than one. Frequently, the difference was attributable to a slightly different interpretation of a quotation, or to one scorer having relied on one or two different quotations. When scores diverged, it was common during the consensus process for each scorer to turn to the quotes for the participant in question and read aloud the quotations on which their score was based. There was never an attempt to average scores or to agree on a completely different score than either individual had given; consensus was always reached through a comparison of specific data points and discussion.
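The independent-scoring step lends itself to a simple pre-meeting check. The sketch below is our own illustration, not a CRT procedure: it flags every dimension on which the two scorers’ independent ratings differ, ordered by the size of the gap, so the supporting quotations for the largest disagreements can be revisited first.

```python
def discrepancies(scorer_a, scorer_b):
    """Return (dimension, gap) pairs where independent scores differ.

    `scorer_a` and `scorer_b` map dimension name -> independent score
    (1-9) for one interview. Largest gaps are listed first; in our
    experience no gap exceeded two scoring levels.
    """
    diffs = {d: abs(scorer_a[d] - scorer_b[d])
             for d in scorer_a if scorer_a[d] != scorer_b[d]}
    return sorted(diffs.items(), key=lambda kv: -kv[1])

# Hypothetical independent scores for one interview:
a = {"Efforts": 4, "Leadership": 5, "Climate": 3}
b = {"Efforts": 5, "Leadership": 5, "Climate": 2}
print(discrepancies(a, b))  # [('Efforts', 1), ('Climate', 1)]
```

Dimensions with matching scores are omitted, mirroring the consensus meetings, where discussion focused only on the ratings that diverged.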
The department’s overall readiness score was then matched with actions to improve the department’s readiness. The dimensions with the lowest levels of readiness indicate the areas where the greatest outreach and education efforts are required for program success.