IMPLEMENTING OUTCOME ASSESSMENT FOR PROGRAM ACCREDITATION
Outcome assessment has become an official component of the American Council for Construction Education's requirements for accreditation of undergraduate programs. This paper was designed to explore a systematic approach to implementing outcome assessment for accreditation and total program evaluation. Several issues have been identified by the author as key considerations. A well organized and positive approach to quantifying the clients' abilities and attitudes toward their learning experience can provide valuable feedback, and permit programs to analyze their strengths and shortcomings for future planning.

KEY WORDS: Accountability; Accreditation; Outcome Assessment; Program Planning.
In the summer of 1990, the American Council for Construction Education (ACCE) adopted a revised standard which specifically requires member universities and colleges to include the evaluation of program outcomes as a component of regular program assessment. This action followed more than three years of analysis and discussion.
Traditionally, accreditation standards focused on inputs, such as certain elements of the curriculum, qualifications of the faculty, adequacy of facilities and equipment, administrative support, and the academic quality of students. A new emphasis surfaced in 1985 when the Council on Postsecondary Accreditation (COPA) requested that accrediting bodies begin considering outcomes as well as curricular inputs for the programs under their jurisdictions. Shortly thereafter, the ACCE formed a special Outcome Assessment Committee charged with investigating how an outcomes‑based approach would impact the existing accreditation standards and, more importantly, how this approach could be integrated effectively into current procedures.
During the initial discussions, a significant number of questions arose regarding the exact nature and intent of outcome assessment. These questions were addressed in a white paper entitled Outcome Assessment, authored by G. Arlan Toy and Robert Segner. This document, published in August 1990, was accepted by ACCE as an official document of the organization at its annual meeting in 1991.
The ACCE's Outcome Assessment Committee recommended a five‑part standard which was approved by the Board of Directors for implementation in academic year 1990‑1991. The revised standard requires each program requesting initial accreditation, or re-accreditation, to develop and implement a planning system which includes the following:
1. A program mission statement and goals.
2. Measurable objectives aligned with those goals.
3. A mechanism for gathering data related to the objectives.
4. A clearly defined analysis and review process.
5. Implementation of program modifications based on the findings, with dissemination of those findings to program constituencies.
Additionally, ACCE stated that program constituents, including students, graduates, employers of graduates, industry benefactors, faculty, and administration must be a part of the outcome assessment effort. In keeping with the overall philosophy of ACCE, the specific methods and characteristics of program planning and outcome assessment remain with the individual institution and program.
It is the intent of this paper to address the implementation of an outcome‑based component of a construction education unit's overall program evaluation system. In order to adequately address the implementation process, it is important to clarify the basic elements to be implemented and assumptions as well. The elements to be discussed in this paper are as follows:
1. The perceived purpose of the planning and assessment effort.
2. The program mission statement and goals.
3. Measurable objectives (demographic, attitudinal, and performance).
4. Methods of data collection.
5. Analysis and review of the findings.
6. Reporting, verification, and implementation of program modifications.
The first assumption is that the planning and outcome assessment program will meet the developers' perceived purpose. Perceived purposes generally fall into one or more of these categories:
1. Fulfilling an accreditation requirement, such as the ACCE standard.
2. Responding to a mandate from a governing or funding authority.
3. Supporting an internal commitment to program improvement.
If the impetus for outcome assessment lies outside the program, as in the first two categories above, then it is important to know what form the product, i.e. outcome assessment report, must have in order to meet the requirement of the sponsoring body. Key issues include: the intended use of the findings, the desired quantity of data, the scheduled submittal dates, and the format. As elementary as these items appear, they set the tone for every action which follows.
In some cases the desired format of the outcome assessment report is not specified. This could be viewed as troubling, but assessment planners have also used this opportunity to
design an outcome‑based planning and reporting system that more closely fits their specific organizational needs and, in turn, satisfies the ACCE requirements.
The impetus for an outcome assessment program can also come from within the program. A commitment to improve marketing and any form of a quality improvement program, such as TQM, are examples of this type of motivation. Although not directly related to accreditation, these forces are compatible with program outcome assessment.
For ACCE purposes, periodic assessment reports should be structured in two parts: a summary of the assessment of the measurable objectives, and a description of the program modifications which were implemented based on the results of the assessment. The latter will be a point of emphasis for academic year 1992‑1993.
ACCE's standards outline a planning process which begins with a program mission statement intended to delineate the driving force behind the organization's actions. The development and dissemination of corporate mission statements has been a common endeavor for businesses and other organizations during the 1980s. A common trap that firms encounter is to state that they are committed to being the "best company in the United States."
Charles Garfield, author of Peak Performers, reveals that for a mission statement to accomplish its intended purpose it must translate a common vision into language that inspires each member of the team to effective action. A vision is defined as something desired so strongly that it attracts a wholehearted commitment.
If desiring to be the "best construction education program in the nation" inspires a program to genuine excellence, then this type of mission statement will suffice; however, if it serves only to complete the task of developing a mission statement, then the true purpose of the exercise is left unfulfilled. The genuine benefits are the discussion and consensus regarding the nature of the program, i.e. what is its purpose and what should it strive to achieve.
It follows that an effective mission statement should include some reference to the learning environment that each program creates through its curriculum, faculty, physical facilities, research activities, and projects. Secondly, it should relate to each of the participants, i.e. students, employers, faculty, university, and the construction industry. Finally, the mission statement should provide direction for the endeavors of the program as well as energize the faculty and staff. Since the mission statement is intended to apply specifically to the role, purpose, and goals of the organization, it must be generated by the members, or representatives of members, of each program. Corporate executives and program managers report that the exercise of developing a mission statement has two major benefits: clarity of the purpose and principles of the organization, and a unifying bond for future endeavors.
To obtain a coordinated outcome assessment effort, it is critical that a program periodically revisit the mission statement and review the goals developed from that mission statement. ACCE has found that programs generally address the area of goals well, but objectives‑‑specifically measurable objectives‑‑continue to pose challenges.
These challenges include: a tendency to develop large numbers of behavioral tasks, which can quickly overwhelm the faculty and produce an unwieldy amount of data; a loss of focus on the desired general capabilities of graduates; difficulty in translating attitudinal results into measurable terms; the complexity of breaking a concept into basic building blocks; and fear of how the results may be used.
For the purposes of this paper, objectives will be viewed in three categories: demographic, attitudinal, and performance. Demographic objectives relate to qualitative and quantitative characteristics of the participants, individually or as a group. Attitudinal objectives are designed to describe how individuals feel, or what they believe, about specific issues related to the program. Finally, performance objectives are those which target a student's ability to demonstrate knowledge or skills.
Depending on the factors that a program chooses to analyze, demographic objectives may pertain to the number of students enrolled, the number of graduates each semester, the program's retention rate, the average number of semesters for successful completion of each degree, the number of research articles produced by the faculty, or the amount of grant funding obtained during an academic year. The program's mission statement and program goals should suggest the type and scope for a set of demographic objectives.
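As a simple illustration, and not an ACCE‑prescribed tool, the following Python sketch tallies two such measures from entirely hypothetical student records, using a deliberately simplified definition of retention (the share of entrants who completed the degree):

    # Hypothetical student records; field names are illustrative assumptions.
    students = [
        {"graduated": True,  "semesters": 9},
        {"graduated": True,  "semesters": 8},
        {"graduated": False, "semesters": None},
        {"graduated": True,  "semesters": 10},
    ]

    graduates = [s for s in students if s["graduated"]]
    retention_rate = len(graduates) / len(students)   # simplified: entrants who finished the degree
    avg_semesters = sum(s["semesters"] for s in graduates) / len(graduates)

    print(f"Retention rate: {retention_rate:.0%}")
    print(f"Average semesters to degree: {avg_semesters:.1f}")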
Attitudinal objectives are prime elements of questionnaires to alumni, employers, program benefactors, and current students. Course evaluations which ask students for their opinion regarding the course content, instructor's presentation ability, clarity of grading procedures, and quantity of work required, are also good examples of instruments
which address attitudinal objectives. Establishing objectives in this area usually involves setting an acceptable level of positive responses. For example: the percentage of students who would recommend this program to a friend or relative is greater than 80%.
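A criterion of this kind is straightforward to check once responses are tallied. The short Python sketch below, using hypothetical survey answers, computes the percentage of positive responses and compares it with the 80% objective stated above:

    # Hypothetical responses to "Would you recommend this program?"
    responses = ["yes", "yes", "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes"]

    positive = sum(1 for r in responses if r == "yes")
    percent_positive = positive / len(responses) * 100
    objective_met = percent_positive > 80    # criterion from the example objective

    print(f"{percent_positive:.0f}% positive; objective met: {objective_met}")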
Performance objectives establish expectations regarding students' ability to demonstrate knowledge of key concepts and skills. Some programs have elected to use the results of a standardized examination. A performance objective such as "Seventy percent of the students graduating from our program pass the Engineer‑in‑Training (EIT) examination within three (3) years" may be appropriate. The performance examination can be standardized nationally, locally, or within the department. A key issue remains: does the instrument reflect the knowledge and skills that each participating program emphasizes?
An alternate method for developing performance objectives is for the program faculty to identify one or more concepts in various courses that are fundamental to the long term goals of the program. Questions can then be designed to explore a level of comprehension for each concept or the ability to apply it. In this manner, key objectives can be assessed regularly, providing an opportunity to make modifications to individual courses and the program relatively quickly. Overall performance on outcomes can be judged from the combined results of course objectives and total program objectives.
Demographic information can be readily summarized and reported, once the database is identified and compiled. The key factor is to identify and implement a method to build the database as the year proceeds. For instance, if one of the demographic objectives is for each faculty member to complete twenty (20) industry contacts per year, then an expedient method of documenting and filing these contacts should be established before the assessment period begins.
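One expedient approach, sketched below in Python with an assumed file name and record layout rather than any prescribed format, is to append each contact to a running log at the time it occurs, so that year‑end counts require no reconstruction:

    import csv
    from datetime import date

    # Append one industry contact to a simple CSV log (file name is an assumption).
    def log_contact(faculty, company, purpose, path="industry_contacts.csv"):
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), faculty, company, purpose])

    # Recorded at the time the contact is made:
    log_contact("J. Smith", "Acme Constructors", "guest lecture arrangement")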
Attitudinal objectives require that subjects be selected randomly; otherwise, the sampling could produce invalid conclusions based on data skewed by the selection process. In addition to the questionnaire method, student interviews and anecdotal summaries of telephone conversations with alumni and employers can provide useful information for program evaluation.
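Where a full census is impractical, a simple random draw helps guard against that selection bias. The sketch below, using a hypothetical alumni list, selects interview subjects at random without replacement:

    import random

    alumni = [f"alumnus_{i}" for i in range(1, 201)]   # hypothetical mailing list of 200 graduates
    interview_sample = random.sample(alumni, 25)       # 25 subjects drawn at random, without replacement
    print(interview_sample[:5])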
Data relating to performance objectives can be collected from a single evaluation event, such as a capstone examination, or by individual faculty members during regular course examinations. If a cumulative, program‑wide examination is used, then it can be constructed and administered as a separate activity, or as a part of a required senior‑level course.
In either case, the specified questions can then be evaluated by the individual administering the assessment, or by a designated team. A summary of the results is developed and submitted by each instructor, or the team. An outcomes assessment coordinator, or task force, can then compile a comprehensive profile which includes all individual summaries and any program‑wide findings.
One of the major barriers expressed to ACCE regarding implementing an outcome assessment program is the belief that it is an additional requirement with heavy time demands placed on people who already have a full set of professional expectations. Data analysis and program modification are often viewed as prime culprits in this belief. When outcome assessment is built into the naturally occurring academic process, the perceived demand on individual faculty and staff for data collection and analysis is reduced.
Although not specifically required by ACCE, often a formal body is charged with reviewing the data, conclusions, and program modifications. It is important that this group have representation from a wide range of constituents to demonstrate a knowledgeable, interested, and unbiased perspective on the task. One suggestion is to make this a rotating assignment for a minimum of one two‑year term. This approach provides continuity while encouraging varied viewpoints in the assessment process.
This body will examine the summary reports, actual data, analyses and conclusions, and proposed modifications submitted by either individual faculty or an assessment team for program‑wide instruments, such as exit interviews, portfolios, or alumni surveys. If data is initially submitted in summary form, then the analysis and review group can decide which base data it requires to complete its evaluation.
The summary information can take various forms, but should include a statement of the objectives, the number of individuals attempting each objective, the criterion for successful completion of each objective, and the number and percent of successful results. The method and form of reporting results should fit the individual program.
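By way of illustration only, the Python sketch below compiles these summary elements for two hypothetical objectives; the objective statements, counts, and criteria are assumptions rather than a required format:

    # Hypothetical per-objective results.
    objectives = [
        {"statement": "Apply critical path scheduling", "attempted": 42, "successful": 35,
         "criterion": "70% correct on embedded examination questions"},
        {"statement": "Prepare a quantity takeoff", "attempted": 40, "successful": 33,
         "criterion": "passing score on the capstone problem"},
    ]

    for obj in objectives:
        pct = obj["successful"] / obj["attempted"] * 100
        print(f"{obj['statement']}: {obj['successful']}/{obj['attempted']} successful "
              f"({pct:.0f}%); criterion: {obj['criterion']}")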
This highlights another basic assumption for outcome assessment: improvement is on‑going and should be interpreted as an indicator of program well‑being. It demonstrates interest in client feedback as well as commitment to improved quality. Change and improvement are the anticipated impacts of an outcome assessment system.
It is commonly agreed that outcome assessment is a form of accountability, hence, a clear effort should be made to build in quality control and verification. The first step is the involvement of representatives of the program's constituencies in the development stage. Outside participation on the analysis and review group is also important. Faculty members or administrators from other departments, interested alumni, members of the industry advisory council, or graduate students, often make viable candidates.
Base data for each objective should be retained in a secure location for a pre‑determined period. This will allow an in‑depth review should one be desirable or necessary. Summary reports should be kept in the program's permanent file storage.
Requirements in this area vary from one university to another. At a minimum, an ACCE team will review the results of the outcome assessment system on its initial accreditation campus visit. Re-accreditation occurs on a six-year cycle.
Several programs report that they are required to complete internal audits on a regular schedule, i.e. three‑year or five-year cycles. In other cases, funding authorities have mandated that outcome assessment progress reports be submitted annually. Outcome assessment findings are currently, or have the potential to become, a component of university and program funding decisions.
Whether or not the distribution of an annual report is mandatory, a summary of findings which is made available to students and interested individuals can generate a sense of openness regarding feedback and the program's efforts in self‑improvement. If construction educators are gathering data regularly, reviewing the findings, and making prudent modifications, then informing their constituencies can become a natural element of the system.
Developing and implementing a well‑devised, comprehensive outcome assessment process cannot guarantee its success. A key factor is follow‑through. As with other management programs, success can be enhanced by designating an individual to be responsible for monitoring regular and timely data collection, analysis of results, and identification of program modifications. Along with this responsibility should go appropriate authority, or support from the individual who does have that authority.
When resulting program modifications are perceived as beneficial to all parties, and the recipients of progress reports are sincerely interested in the results, then outcome assessment becomes an integral part of the normal operations of the construction education program.
Formal outcome assessment is relatively new territory in construction education. Change is normally accompanied by anxiety and uncertainty. It is fundamentally important that outcome assessment be pursued with a positive attitude, care, and persistence.
Outcome assessment planners have encountered a number of pitfalls, among them a tendency to develop an overwhelming number of behavioral objectives, the temptation to treat assessment as an add‑on rather than a part of the naturally occurring academic process, and apprehension about how the results may be used.
Outcome assessment is now a requirement for ACCE accreditation. Used effectively, an annual outcome assessment system can provide the basis for on‑going program planning, as well as implementation and evaluation of modifications.
Programs seeking accreditation from ACCE must include the following elements in an outcome assessment system: mission statement and goals, measurable objectives aligned with these goals, a mechanism for gathering data related to these objectives, a clearly defined analysis and review process, implementation of program modifications, and dissemination of the findings to program constituencies.
Successful development and implementation starts with knowledge of the concept and its purpose, continued emphasis on the potential benefits for the program, keeping participants informed, a commitment to simplicity, where possible, and persistence.
ACCE, in consonance with COPA, is now focusing on the program modifications resulting from the outcome assessment and review process.
REFERENCES
American Council for Construction Education. (1990). Annual Report. Monroe, LA: Author.
Belasco, James A. (1986). The Excellence Choice: How Vision Creates Excellence. San Diego: Management Development Associates.
Garfield, Charles. (1986). Peak Performers. New York: William Morrow and Co.
Liska, Roger W. (1989). Outcome Assessment. Notes for a training program presented at the ACCE Annual Meeting, July 1989.
Toy, G. Arlan, and Segner, Robert. (1990). Outcome Assessment. A white paper developed for the American Council for Construction Education.