Reinhardt University SACSCOC Compliance Certification
Comprehensive Standards
Table of Contents
|3.2||Governance & Admin|
|3.3.1.2||IE: Admin Support Services|
|3.3.1.3||IE: Academic & Student Support Services|
|3.6||Graduate & Post-Bac. Educational Programs|
|3.8||Library & Other Learning Resources|
|3.9||Student Affairs & Services|
|3.14.1||Publication of Accreditation Status|
3.3.1 Institutional Effectiveness:
The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes,
3.3.1.1 Educational programs, to include student learning outcomes
All educational programs at Reinhardt University have identified expected student learning outcomes, assessed the extent to which they achieve these outcomes, and provided evidence of improvement based on analysis of the results. As documented in the 2014-15, 2015-16, and 2016-17 Academic Program Assessment Reports, each academic program uses a variety of methods to assess student learning outcomes within the major, which are linked to university-wide general education student learning outcomes. Within these reports, program coordinators propose curricular and other changes based on assessment results and determine their budgetary implications, which are prioritized by the Deans and considered during the budget process. Assessment methods are continually being developed and refined as determined by assessment results. In addition, each year program coordinators are asked to provide a status report on the improvements they recommended in the previous year.
Overview of Reinhardt's Academic Assessment, Planning, and Budget Process
As evidenced in CR 2.5, Reinhardt University has a comprehensive and continuous academic planning and evaluation process which calls for program review, annual evaluation of instruction by students, course reflections by the faculty, and assessment of student learning outcomes at the program and course levels. This multifaceted approach to institutional effectiveness flows from the University’s mission statement and its strategic plan and results in a comprehensive assessment of academic program learning outcomes.
At the end of each academic year, program coordinators are required to submit to their dean an Academic Program Assessment Report. See compilations of academic program assessment reports for the past three years. Working with program faculty, coordinators report their expected learning outcomes, assessment methods and benchmarks, and results for each method of assessment. The current reporting guidelines prompt faculty to analyze results for each assessment, identify needed changes and their budgetary impact, summarize strengths and weaknesses of the program, and follow up on the implementation of previously recommended improvements. Program coordinators also submit a copy of the report to the Director of Institutional Research and Effectiveness, who uses a rubric to evaluate the appropriateness of the assessment methods and other elements of the report and may suggest ways to improve the quality of assessment efforts for the next assessment cycle. See examples of feedback provided to program coordinators. [3, 4, 5]
The improvements that are proposed in the Academic Program Assessment Report are evaluated and prioritized by each dean who also reviews their programs' strengths and weaknesses and any budgetary implications. The deans summarize program assessment results and proposed changes in the School Annual Report, which includes other measures of performance. An example of a school report is included in the supporting documentation. The summary of program assessments starts on page 13.  The School Annual Reports are submitted to the Provost, who reviews them and seeks advice and counsel from the Deans Assembly, a group comprised of school deans and academic unit directors, in order to prioritize academic initiatives for the coming year. A copy of the School Annual Report is sent to the Director of Institutional Research & Effectiveness.
The Provost submits selected academic initiatives and their budget implications for further prioritization to the Leadership Team, comprised of the President, Provost, Vice Presidents, and other senior administrators. Improvements and budget recommendations are considered by the President and Leadership Team when constructing the budget. All resulting goals are aligned to the University's strategic goals. A draft of the new annual goals and budget is shared by the President and Leadership Team with University constituencies, including faculty and staff, for comments and additional concerns. The budget is then finalized and submitted to the Board of Trustees for approval. The deans are responsible for implementing the identified academic initiatives and managing budget allocations for their programs. The cycle is then repeated each year. The following diagram describes Reinhardt's Academic Assessment, Planning, and Budget process.
Student Learning Outcomes
Reinhardt University has identified expected outcomes for educational programs, to include student learning outcomes. All degree programs have stated student learning outcomes, and they are reported in the annual Academic Program Assessment Report and published in the academic catalogs. [7, 8]
Furthermore, instructors are required to state course objectives in their syllabi and link them to program outcomes or the general education learning outcomes, where appropriate. Similarly, the University requires all proposals for new degree programs to include statements of student learning outcomes. The MFA Proposal provides an example; see page 3.  New undergraduate programs are also required to link their student learning outcomes to the university student learning outcomes. New program proposals are reviewed on campus by the Academic Programs and Curriculum Committee, Faculty Executive Committee, Faculty Senate, Provost, and the President to ensure compliance with institutional requirements.
Through regular assessment workshops and one-on-one consultations, the Director of Institutional Research and Effectiveness works with existing programs and their coordinators to review and refine their student learning outcomes for each annual assessment cycle. Programs are advised to state between three and five learning outcomes, which can address the knowledge, values and skills the students are expected to acquire by completion of the program. They have also been instructed to develop "SMART" outcomes, that is, specific, measurable, attainable, results-focused, and time-focused. In addition, all programs have developed curriculum maps indicating how in each required course, program student learning outcomes are introduced, reinforced, and mastered, and what courses provide opportunities for assessment.  Faculty share the program learning outcomes with their students via the undergraduate and graduate catalogs, program websites, and course syllabi.  An example of a recent syllabus is attached as supporting documentation. 
A compilation of the most recent three cycles of Academic Program Assessment Reports, documenting the student learning outcomes across RU's degree programs, including online programs, is provided as supporting documentation in Table 3.3.1.1-1. Programs that undergo a program review have the option of not completing an annual assessment report during the review year.
Assessment Reports by Location and Delivery Modality
*In both “No Report” cases, the program was in the midst of a leadership transition.
Academic programs at Reinhardt University report their assessment results annually using a Word template. Instructions for completing the report are posted on the OIRE Site in EagleWeb, Reinhardt's Intranet. The Academic Program Assessment Report template requires programs to list their student learning outcomes, specify direct and indirect methods of assessment, identify the location of the assessment results, report and analyze findings, and note any changes planned for the next academic year. Over the past three cycles, the Academic Program Assessment Report template was revised to address needed improvements, as illustrated by Table 3.3.1.1-2 below.
Improvements in assessment reporting
Although assessment reports compiled prior to 2014-15 included information on learning outcomes, methods of assessment, results, and proposed changes, the link between learning outcomes, assessment results, and changes was not always clear. With a few exceptions (e.g., Price School of Education programs), the reports did not specify the benchmark or criterion of success for each learning outcome and used few direct measures. Oftentimes the direct measures were expressed as grades in courses, and the location of the assessment evidence was not identified. Only a few programs were engaging in the analysis of program strengths and weaknesses based on the evidence collected during the assessment cycle.
The current reporting template was designed to address all these shortcomings in order to strengthen the assessment process and provide more robust evidence for academic planning and decision-making. In August of 2014, the Provost conducted an assessment workshop with all program coordinators and provided training on using the new reporting template, placing special emphasis on how to refine learning outcomes and assessment methods and how to establish reasonable benchmarks (or criteria of success) for each assessment. Program coordinators were also asked to use at least one direct method for assessing each student learning outcome. (A direct method is one in which faculty members evaluate a student's work; an indirect method is one in which a student evaluates his or her own learning.) Additional workshops in each subsequent year provided training in rubric design, curriculum maps, and closing the assessment loop.
A review of the reports submitted over the past three years indicates that all programs have made considerable progress in refining their learning outcomes, selecting valid assessment methods that are appropriate for each outcome, and establishing a benchmark for each assessment. The majority of programs make ample use of rubrics for assessing papers and projects, as well as embedded questions, capstone projects, portfolios, and locally developed tests. The documentation includes as an example the rubrics used by the MBA faculty for their program's direct assessments.
The discussion below provides an overview of best practices related to assessment methods as evidenced in the Academic Program Assessment Reports found in the supporting documentation. Although we provide only a few examples here, the 2014-15, 2015-16, and 2016-17 Academic Program Assessment Reports offer a more comprehensive view of the various assessment methods used by Reinhardt's programs.
Direct Methods. Assessment methods at Reinhardt consist of a wide array of direct measures of student learning, including comprehensive examinations, nationally based proficiency tests, locally developed tests, student portfolios, senior theses, research papers, case study analyses, embedded questions, internship supervisor evaluations, observations, ethics certifications, and pre-post tests. Examples of direct methods in selected programs are listed in Table 3.3.1.1-3 and discussed briefly in the text following the table.
Examples of Direct Methods of Assessment
At Reinhardt, students in many programs participate in special-topics capstone courses, comprehensive examinations, and special projects that assure proficiency in the essential content of their programs. These experiences have provided programs the opportunity to develop meaningful, comprehensive, and cumulative assessments of the skills, predispositions, and knowledge acquired in the majors. The implementation of the upcoming Quality Enhancement Plan, which will ensure that each Reinhardt student completes a capstone experience before graduating, will likely increase the number of capstone assessments.
Several programs use a capstone course or evaluate student work in a required advanced course (or set of courses) as a direct assessment measure. Faculty members teaching these courses collect student work (or samples of student work) and dedicate a designated time at least once annually to reviewing these works and evaluating them in relation to student learning outcomes.
Most programs have developed rubrics that faculty can use to evaluate student work. The MPA program faculty, for instance, score the Comprehensive Examination using a rubric linked to the program's learning outcomes. In the Psychology program, students present seminar-style lectures on classic and contemporary theoretical research papers, which are scored with a rubric measuring four domains: Organization, Content Knowledge, Comprehensibility, and Overall Quality.
Also in the MPA program, students must submit a Professional Portfolio: a collection of artifacts generated from course work, internships, work experience, and production materials, prepared to assess achievement of the program SLOs, the program objectives, and the Professional Portfolio purposes. MPA faculty score the Professional Portfolio using a rubric that specifies what is expected in the answers relative to content, program SLOs, and program objectives.
Similarly, the MBA program developed a portfolio evaluation rubric that is aligned to domains reflecting each of the MBA student learning outcomes. Students submit six papers in their portfolio, and each paper addresses one of the six learning objectives; a narrative must be written to show how the paper fits a particular learning objective. In the Studio Art program, students prepare a visual portfolio and install an exhibition of their work in the Fincher Fine Art Gallery. The professional portfolio submitted in Art 492 Thesis Exhibition and Portfolio is the second part of a two-part capstone course. Likewise, the Senior ePortfolio in Communication & Media Studies is a multifaceted and comprehensive assessment project that requires students to demonstrate the ability to use critical organizational skills, to edit and revise their best college writing, and to create explanatory narratives about how they are fulfilling requirements. Senior portfolios also constitute the main method of assessment for History, Interdisciplinary Studies, and Religion, while students in the Music program are assessed through a Senior Recital.
Evaluations of practica and field experiences are a central component of the assessment plans for several programs. In the Price School of Education, for instance, the Director of Field Experiences ensures that MAT candidates have multiple-level field experiences through systematic monitoring and placement. Each stage in the program (Initial Admission to the MAT Program, Admission to Candidacy, Admission to the SMART Block, and Admission to Candidate Teaching) is monitored developmentally through observations by both the candidate's faculty advisor and the Director of Field Experiences. In the Bachelor of Healthcare Administration program, the academic coordinator who supervises the academic aspect of the internship has initiated refinement of the assessment used by the intern's on-site supervisor to evaluate more critically the intern's achievement of the stated course and program learning outcomes. The internship evaluation is becoming a more significant component of the student's assessment. Similarly, the MBA program uses a rubric to evaluate the Practicum presentation.
Several undergraduate programs use national tests, which allow the program coordinator to compare the extent to which student learning outcomes are achieved with peer institutions. Examples of national benchmarks are the ETS Major Field Tests for Biology, Business, Psychology, and Sociology.
Lastly, some programs use pre- and post-tests as direct measures. In Biology, for instance, a ten-question multiple-choice pre-test is given to all students enrolled in Genetics (BIO 320) during the first week of class. The same test is given to all BIO 320 students still enrolled in the course during the last week of class. The gain in the percentage of questions answered correctly between the pre- and post-test is calculated as a direct assessment. A similar pre- and post-test assessment was introduced in the MBA program.
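As a rough illustration (not part of the University's documented procedure), the gain calculation described above can be sketched in a few lines of code. The function names and the sample score data are hypothetical; only the underlying arithmetic, comparing average percent correct on the same ten-question test before and after the course, comes from the description.

```python
# Sketch of a pre/post-test gain calculation. Scores are the number of
# questions (out of ten) each student answered correctly; the sample
# data below is illustrative only.

def percent_correct(scores, total_questions=10):
    """Average percentage of questions answered correctly across students."""
    return 100 * sum(scores) / (len(scores) * total_questions)

def pre_post_gain(pre_scores, post_scores, total_questions=10):
    """Gain in average percent correct between pre- and post-test,
    in percentage points."""
    return (percent_correct(post_scores, total_questions)
            - percent_correct(pre_scores, total_questions))

# Hypothetical class of five students: correct answers out of 10
pre = [3, 4, 5, 2, 6]    # first week of class
post = [7, 8, 9, 6, 8]   # last week of class
gain = pre_post_gain(pre, post)  # average gain in percentage points
```

Note that the sketch compares class averages; a program could instead compute per-student gains and average those, which gives the same result when the same students take both tests.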
Indirect Methods. In addition to direct methods, academic programs at Reinhardt employ a series of indirect methods to assess student learning outcomes. For instance, individual Faculty Course Reflections are collected each term from instructors to help them fine-tune their teaching and improve student learning outcomes. Other examples of indirect methods include course evaluations, program exit surveys, exit interviews, and success rates in placing students in graduate programs (see Table 3.3.1.1-4).
Examples of Indirect Methods of Assessment
Since 2010, the University has been using an online course evaluation system (SmartEvals) managed by GAP Technologies. While the standard questions in this indirect assessment focus on students' perceptions of the quality of teaching, instructors can add supplemental questions that relate to the course learning outcomes. The system generates a MyFocus report, which is automatically emailed to the instructor once the results have been compiled. The report shows areas where the instructor scores significantly lower than the average rating for the University. An example of a MyFocus report is included in the supporting documentation. Feedback is reviewed carefully by instructors and used to improve teaching. In addition, feedback from course evaluations provides the basis for the deans' performance reviews of full-time and adjunct faculty.
In the Course Reflection form, which must be completed at the end of each term, instructors are asked to choose a course they taught that term and reflect on the effectiveness of the course's learning outcomes. The program coordinator reviews the Course Reflection forms and uses the information to assess more accurately the extent to which the program's student learning outcomes are being met. An example of a completed course reflection form has been included in the supporting documentation. While student evaluations and faculty course reflections serve primarily as methods for course-level assessment, they also inform assessment at the program level, particularly in those courses that house key program assessments, such as capstone courses and seminars. In addition, course reflections for courses that are part of the General Education Core are used as an indirect method in assessing the General Education curriculum.
Some programs conduct their own exit surveys, which ask graduates to rate the extent to which they achieved the program's learning outcomes. In the Communication program, exit interviews are conducted with students by the entire faculty prior to graduation. These interviews provide qualitative feedback on satisfaction with the program and on student engagement in the larger community through service and participation in community projects or student organizations.
Program exit surveys are conducted regularly by OIRE for traditional programs, online programs, and programs offered at other sites. Exit surveys for traditional students at the main campus rotate on an alternating schedule, with the NSSE survey conducted in even years and the Noel-Levitz survey in odd years.
Examples of Improvements Based on Assessment Results
The academic year 2014-15 was the first in which the University used a revised assessment template that prompted program coordinators to link proposed changes more tightly to assessment results. Changes were identified at the conclusion of the 2014-15 academic year and implemented beginning in 2015-16. At the same time, it should be noted that programs had been reporting changes based on earlier assessment efforts, and these improvements were documented in the 2012-13 and 2013-14 assessment reports, although the reporting template used for those assessment cycles did not strictly enforce a direct link between assessment results and recommended improvements. Examples of program improvements made in 2012-13 and 2013-14 are presented in Table 3.3.1.1-5, and a few are discussed in greater detail following the table. Examples of planned or implemented changes from the 2014-15, 2015-16, and 2016-17 assessment reports follow this discussion.
Examples of improvements based on 2012-13 and 2013-14 assessment results
Examples of improvements based on assessment results
The following section presents selected examples of improvements made by programs in more recent years:
In 2014-15, the MBA faculty tightly connected the grading rubric used to assess Practicum experiences to student learning outcomes. The faculty examined the grading rubric for one Practicum experience, BUS 635, to make revisions that will ensure more direct measures of student learning. All MBA Practicum experiences culminate with a student presentation. A Practicum presentation is graded on 1) Organization (20%), 2) Topic Knowledge (20%), 3) Creativity (10%), 4) Visual Aids (20%), 5) Summary (10%), and 6) Stage Presence (20%). The MBA faculty decided to change the weight for "Topic Knowledge" from 20% to 50%. More importantly, each Practicum that is connected to an MBA course will be graded according to a "Topic Knowledge" rubric to more accurately assess whether a student meets the stated learning outcome.
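For illustration only, the weighted grading scheme described above can be expressed as a simple calculation. The six criteria and the original 20/20/10/20/10/20 weights come from the report; the per-criterion ratings below are hypothetical, and the report does not state how the remaining weights were rebalanced after "Topic Knowledge" was raised to 50%, so this sketch uses the original weights.

```python
# Sketch of combining per-criterion ratings (0-100) into one weighted
# Practicum presentation grade. Ratings below are illustrative only.

ORIGINAL_WEIGHTS = {
    "Organization": 0.20, "Topic Knowledge": 0.20, "Creativity": 0.10,
    "Visual Aids": 0.20, "Summary": 0.10, "Stage Presence": 0.20,
}

def weighted_score(ratings, weights):
    """Weighted average of per-criterion ratings; weights must total 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(ratings[criterion] * w for criterion, w in weights.items())

# Hypothetical ratings for one student presentation
ratings = {"Organization": 90, "Topic Knowledge": 80, "Creativity": 70,
           "Visual Aids": 85, "Summary": 75, "Stage Presence": 95}
score = weighted_score(ratings, ORIGINAL_WEIGHTS)
```

One design consequence worth noting: raising a single weight (e.g., "Topic Knowledge" to 50%) requires proportionally reducing the others for the weights to still sum to 100%.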
The new online programs also used the results of their first assessment reports to develop plans of action and implement program improvements. In 2013-14, the BCJ program coordinator noted that students did not meet the benchmark for an assessment housed in the Juvenile Justice & Delinquency course, as they did not have sufficient time to prepare an end-of-course research paper explaining delinquent behavior and potential correctional/prevention strategies through the lens of criminological theories. To encourage submission, faculty recommended that students submit building-block assignments prior to the end of the course (e.g., annotated bibliography, outline, rough draft of the paper). As a result, this course was expanded to an eight-week session in 2014-15, which may allow more time for students to prepare the paper. Likewise, based on the first cycle of assessment results, the capstone course in the BHA program was converted from an eight-week course to a full-semester, sixteen-week schedule. This allows more time for students and the instructor to communicate about the progressive stages of the project and is expected to produce a higher-quality final report.
In 2015-16, the Biology program recommended improving the quality of the General Biology and General Chemistry courses in order to meet the benchmark set for the SLO; these improvements depend on providing students with a quality General Chemistry experience. The need for an improved chemistry program was noted in the Biology Senior Exit Surveys completed in April 2015 and 2016. The majors' General Biology course, now called BIO 120, was updated in fall semester 2016. A new textbook was selected that provided a more rigorous focus on genetics and molecular biology and was geared directly toward majors in the biological sciences, whereas the previous text was written for non-majors. The program also changed the laboratory format, introducing updated, more quantitative labs and adding undergraduate teaching assistants alongside the faculty member. In addition, a new Chemistry adjunct professor was hired for spring semester 2017. The impact of these changes will be assessed when students take the Genetics course in Fall 2017 and take the major field test as seniors.
In the Communication program, faculty improved the "Cultural Roots" research project, a direct assessment, by 1) revamping the requirements to include both oral history and family-tree building and dedicating additional class periods to preparing students for the assignment (for example, a lecture and film on oral history methods), 2) linking the assignment to two required online technological platforms, Ancestry.com (for research) and Geni.com (for family-tree building), and 3) tutoring students on technological tools and archival research.
In the Psychology program, faculty teaching the Introductory Psychology course noted that fewer than 50% of students correctly answered test questions related to the integration of knowledge, especially regarding the integration of methodological issues across content topics. In response, the instructor revised lectures to emphasize the interrelationship of topics, especially methodology.
In their 2014-15 assessment report, faculty in the Bachelor of Health Care Administration program recommended increasing requirements for students to use medical terminology in course writing assignments. This was achieved in 2015-16 and, as a result, the percentage of students meeting or exceeding the benchmark for SLO 3 (Graduates will master key clinical and managerial terminology and demonstrate the ability to communicate effectively about healthcare administrative issues in discussions with healthcare administration professionals, faculty, and other students) improved. In addition, two changes were implemented to improve learning for SLO 5 (Students will demonstrate the ability to recognize and analyze issues and problems associated with changes in the healthcare administration field and to subsequently propose or enact constructive responses): HCA 410 was expanded to a full semester, and HCA 490 was augmented with a host/preceptor evaluation of the intern. As a result, benchmarks for this learning outcome were fully met.
Follow-Up on Improvements
To improve monitoring of the implementation of the actions recommended in the previous year, in 2016 the Office of Institutional Research & Effectiveness added a new section to the reporting template. Program coordinators were asked to follow up on the improvements suggested in the previous year, report on their implementation, and assess the impact of the changes on the outcome being measured. This last step was included to ensure that the closing of the assessment loop is achieved in each program. The following provides examples from the 2016-17 assessment reports documenting recent improvements and their impact on the outcome being measured.
In the 2015-16 MPA program assessment, the program coordinator noted that MPA faculty should consider placing greater emphasis on current management challenges within the public sector as compared to the private sector. MPA faculty might also contemplate assigning a different textbook with a greater emphasis on leadership styles and applications, instead of the specified foci on management trends and reforms of public organizations. In Fall 2016, the MPA faculty implemented this recommendation. They placed greater emphasis on current management challenges within the public sector as compared to the private sector. Moreover, MPA faculty used different textbooks, Northouse's Leadership: Theory and Practice, Seventh Edition, along with Miner's Organizational Behavior I. These books and course sessions emphasized leadership styles and applications, instead of the specified foci on management trends and reforms of public organizations. In addition, the Leadership course had a practitioner's focus: seven well-known public, private, and nonprofit leaders spoke to students and led specific, topical discussions. As a result of these improvements, faculty saw a positive impact on students' retention and use of leadership theories with the new readings and the new guest-led discussions. Specifically, 100% of MPA students answered 80% or more of the questions correctly on the final examination in MPA 605: Leadership and Organizational Behavior.
In the B.S. in Business Administration program, the program coordinator recommended in 2015-16 that areas for improvement include an increased global emphasis and a stronger focus on incorporating technology into the classroom. This recommendation was related to SLO 5 (Awareness of Global and Multicultural Issues - demonstrate awareness of, and analyze, global and multicultural issues as they relate to business). During the 2016-17 academic year, an International Accounting course was approved by the Faculty Senate and added to the 2017-2018 schedule. It is expected that this course will increase students' global business awareness.
In the Mathematics program, faculty recommended in the 2015-16 report that more emphasis and time be spent on theoretical problems in academic year 2016-17 and that the measurement tools yield statistically meaningful results. In 2016-17, a pre- and post-test measurement tool was used in high-enrollment courses such as MAT 200, made possible by the use of an external LMS. All indicators showed an increase in performance over the 2015-16 academic year, and the Math program will consider using this tool in the Calculus sequence.
Program Accreditation and External Program Review
Each Reinhardt degree program undergoes a periodic external evaluation, which consists of either achieving accreditation or maintaining reaccreditation status for those programs for which there is an accrediting body, or a program review for those programs that do not have accrediting agencies. Specialized accreditation reviews are conducted for all Education and Music programs by the Georgia Professional Standards Commission (PSC) and the National Association of Schools of Music (NASM), respectively. [23, 24, 25, 26] All other degree programs are part of a seven-year cycle of program reviews. Academic program reviews are designed to assess program viability, quality, and productivity, assess alignment with the University's mission, and facilitate program improvement. The evaluations are based on guidelines established by the Provost's Office and provide an assessment of program strengths and weaknesses, including an evaluation of objectives and learning outcomes, similar to accreditation reviews. Program reviews are conducted on a continuous review cycle. See examples of program review self-studies and external reviewer reports. [29, 30, 31, 32]
Program accreditation reports and program review results are used to inform and improve academic programs. Evidence of improvement based on program accreditation lies in continuous accreditation status. Evidence of improvement based on program review resides in the actions a program must identify and take in response to the reviewer's recommendations. For instance, as a result of the external review conducted in Fall 2016, the Communication and Media Studies faculty implemented the following curricular improvements:
Reinhardt University has developed a sustainable assessment process at the academic program level, which includes online programs and programs offered at off-site locations. Course-level assessment is also robust as instructors examine each term their teaching effectiveness through Student Evaluations and Course Reflections. Results from both student evaluations and faculty course reflections often feed into program-level assessments as indirect methods, particularly in those courses that house major program assessments, such as capstone courses and seminars.
The recent improvements to the Academic Program Assessment Report template, including a prompt to follow-up on the status of recommended improvements, along with ongoing assessment workshops offered to program coordinators and faculty are expected to close the loop in the assessment process and yield higher quality assessment evidence and improvements in student learning.