Electrical Construction Management Specialization Program: A Formative Evaluation

Kirk Alter
Purdue University
West Lafayette, Indiana
Introduction

A survey of the Proceedings of the Annual Conferences of the Associated Schools of Construction from 1987-1997 shows that, of all the papers published, approximately 18% focused on curriculum development, 3% on program development, 2% on curriculum evaluation, and 2% on program evaluation. As an organization committed to promoting excellence in construction management and technology education, the Associated Schools of Construction (ASC) seems unusually silent on the matter of careful and continuous evaluation of the courses and programs currently offered. Certainly, it is fair to assume that calls for development of new curriculum signal that, at any point in time, someone is looking at specific parts of the curriculum of personal interest. While the combined 7% devoted to program development and to curriculum and program evaluation indicates an occasional interest in taking a broader look at the direction of construction education, it does not signal that this is a significant concern for construction educators.

It seems problematic that ASC faculty charged with providing the highest quality construction education possible rarely evaluate their curricula and programs publicly. The only recent articles published, Shahbodaghlou & Rebholz (1994) and Yoakum (1994), focus broadly on program design and outcome assessments. While both program design and outcome assessments are important, they fail to direct an introspective examination of the curricula being taught every day in the ASC schools. Outcome assessments are particularly suspect in a boom construction economy, where demand for construction program graduates is so high that ASC schools may develop an inflated opinion of the quality of their curricula and programs. A business maxim adhered to by successful firms is that the time for careful introspection, and for creating plans to improve and change, is when you are at the "top of your game." The notion that a course, curriculum, or program is the "best," and therefore not subject to review or change, should be anathema to all educators. It may be prudent for ASC schools to consider performing closer and more frequent self-examinations, and making them public, to ensure that they are practicing continuous improvement in their organizations.

The absence of published evaluations of curricula and programs may simply result from a lack of clear models or examples. The purpose of this paper is to examine a fledgling program in electrical construction management, in order to provide meaningful input to that program and, perhaps, an evaluation model for others to use.
Electrical Construction Management Specialization (ECM) – Program History

An ELECTRI21 study of 160 electrical contractors and 90 U.S. colleges and universities, funded by the National Electrical Contractors Association (NECA) and entitled "Developing a Curriculum for Electrical Building Construction and Contracting" (Lew & Achor, 1994), found that 96% of the contractor respondents felt that a college educational program in electrical construction management was needed. The intended effort would focus on developing a specialization program in electrical construction management. Some of the tasks identified as important in creating such a specialization program included development of courses, development of student recruitment programs, student internship and placement programs, faculty recruitment, development of student organizations, and the use and equipping of laboratories. Additionally, the findings of the study indicated that "major discussions with industry representatives and other universities concerning the content and goals of such a specialization had to be accomplished initially to give the program direction" (ibid.).

Summary – The precursor to developing a specialization program in electrical construction management was an industry needs assessment (1994), providing some direction on how to collect more data on program design.

Next, a proposal was developed and submitted in June 1995 to NECA for funding of the "Implementation of an Electrical Construction Management Curriculum" (Alter & Lew, 1995). The purpose of the proposal was to seek initial funding for the development of an electrical construction management specialization program. The project was designed to focus on the development of electrical construction management courses, development of a marketing plan, student recruitment, placement of graduates, and a national effort to begin the process of offering similar programs at other universities. Specifically, the grant proposal was to deliver work product in the form of 1) formulating and convening an "Electrical/Mechanical Construction Management Conference," 2) developing courses designed to provide instruction in subject areas identified in the ELECTRI21 report, and 3) developing and implementing a marketing plan. The grant was fully funded by NECA, and work began on designing a specialization program that would meet the stated goals and objectives of the grant proposal.

Summary – The NECA grant (1995) provided the resources to begin work on the development of the specialization.

Upon award of the grant in June 1995, work began to develop and implement an electrical construction management program. A new faculty member, whose primary area of responsibility was electrical construction management, was hired for the school year beginning August 1995. The following is a condensed chronology of the evolution of the specialization program:

Fall Semester 1995 – The new faculty member teaches a core introductory course in electrical materials & methods. The existing text is not particularly relevant, the existing course manual is disorganized, and the faculty member experiments freely with both curriculum and instructional methodology in a search for effective instruction.

Spring Semester 1996 – The faculty member abandons the existing text and compiles a rudimentary new course manual.
Consideration is given to altering the delivery format of the class from three 50-minute lectures weekly to two 50-minute lectures and one 2-hour lab. The core course is refocused from a survey of the entire field of electrical construction management to an introduction to electrical materials and methods. This shift more closely resembles the approach used in providing general construction management education.

Fall Semester 1996 – The faculty member continues to refine and develop the course manual, and introduces the National Electrical Code (NEC) as a primary source for the class. A lab component is added for the first time, accomplishing a shift in the instructional delivery method. The faculty member solicits the assistance of the local NECA chapter, and the first student chapter of NECA is formed; 10% of the department student population joins (60 students). A first meeting of electrical, mechanical, and university representatives is convened to discuss the formation of an alliance to determine, develop, and promote electrical and mechanical construction management programs at universities throughout the country. The faculty member develops a course in electrical estimating.

Spring Semester 1997 – The faculty member continues to refine the electrical materials and methods course (the core course) and offers, for the first time, a course in electrical estimating. The course is provided to 12 students, and the faculty member works closely with industry representatives in designing and delivering the course. The course is experimental in nature, and its success is difficult to define. The faculty member proposes a new course for Fall 1997 in the area of electrical and mechanical design/build. The faculty member formally proposes to the department curriculum committee an electrical specialization program consisting of courses in materials and methods (core), electrical construction estimating, electrical construction management, design/build for MEP contractors, and one elective from a proposed list of acceptable electives. The proposed curriculum changes are accepted by the department committee and forwarded to the school curriculum committee for approval. Department faculty members publish a journal article entitled "Development and Implementation of Electrical and Mechanical Specialization Programs in the Undergraduate Curriculum" (Alter, Koontz, & Lew, 1997), outlining the goals and objectives of the specialization program, as a result of their 1996 "Contractors' Survey for Electrical/Mechanical Building Construction Education."

Summer 1997 – The faculty member writes a grant proposal to NECA to develop a national student scholarship program in electrical construction management. The grant proposal is accepted by NECA, to be funded Summer 1998. The faculty member continues development of all courses approved by the department curriculum committee. Student interns work for electrical contractors in Indiana, Illinois, California, Oregon, Washington, Colorado, Arizona, and Kentucky.

Fall 1997 – Electrical materials & methods and Design/Build for MEP Contractors are offered and fully subscribed. NECA student chapter members and the faculty member attend the NECA annual convention in Miami Beach in October, presenting on program progress and attending an intensive seminar on fiber optics. Annual ECM specialization course sequencing begins, with fall semester offerings of materials & methods and design/build, and spring semester offerings of materials & methods and estimating.
Electrical construction management is offered both by other faculty members in the department and as an independent study, and electives are taught either by the primary faculty member or outside the department. The first two graduates of the program exited the department in December 1997.
ECM Program Evaluation

Evaluation Rationale

The electrical construction management specialization is still in its infancy. The stated program design and purpose have been well received by both industry and academia. The program is funded by both the university and industry. Student interest is building, with a rapid influx of students participating in the NECA student chapter. Why, then, evaluate the program now, when everything seems to "be coming up roses"? The answer is that now, with the program still in its infancy, we should look at the program to assess whether we have adequately defined it, and to determine whether the means of installation are congruent with the design. This is especially important because this program is intended to be a model, first for the consortium of schools in the alliance developed as part of the NECA grant, and later for other interested ASC schools.

Evaluation Model

Program evaluation is a well-developed field of study in the educational arena, yet there are no apparent references to the use of formal evaluation techniques in the records of the proceedings of the ASC over the past ten years. The purpose of program evaluation is to provide information to two fundamental groups of decision makers. The first target audience is those who make decisions to improve and/or stabilize specific programs; this group includes the faculty member(s) delivering the instruction, the curriculum committee, and the department head. The second target audience is those who make decisions to retain or terminate programs; this group could include the curriculum committee, department head, and dean. There are two basic schools of thought on program evaluation, the utilitarian perspective and the intuitionist/pluralist perspective, and both have well-developed positions and techniques. Further, the tools of evaluation are extensive and include methods of obtaining group input, indicators of academic achievement, alternative methods of assessment (including performance assessment and portfolio assessment), assessment of affective characteristics, and qualitative methods of inquiry (Gredler, 1996). Before undertaking any program evaluation, the evaluator must consider the following:
The Provus Discrepancy Evaluation Model

The Provus Discrepancy Evaluation Model, designed by Malcolm Provus in 1969, is a well-tested and commonly accepted utilitarian model for evaluating academic programs. Provus defined evaluation as the process of agreeing upon program standards, determining whether a discrepancy exists between some aspect of the program and the standards governing that aspect, and using discrepancy information to identify weaknesses of the program. His stated purpose of evaluation is to determine whether to improve, maintain, or terminate a program (Gredler, 1996). His model is primarily a problem-solving set of procedures that seeks to identify weaknesses (according to selected standards) and to take corrective actions, with termination as the option of last resort. With this model, the process of evaluation involves moving through stages and content categories in such a way as to facilitate a comparison of program performance with standards, while at the same time identifying standards to be used for future comparisons. This seems to be a particularly useful technique to employ when evaluating a fledgling program like the ECM specialization. The Provus method identifies four specific stages of all programs:

Stage 1: Program Definition – The purpose of the evaluation is to assess the program design by first defining the necessary inputs, processes, and outputs, and then evaluating the comprehensiveness and internal consistency of the design. Stage 1 asks the question, "Is the program adequately defined?"

Stage 2: Program Installation – The purpose of the evaluation is to assess the degree of program installation against the Stage 1 program standards. Stage 2 asks, "Is the program installed as defined in Stage 1?"

Stage 3: Program Process – The purpose of the evaluation is to assess the relationship between the variables to be changed and the process used to effect the change. Stage 3 asks, "Are the resources and techniques being used congruent with the goals of the program?"

Stage 4: Program Product – The purpose of the evaluation is to assess whether the design of the program achieved its major objectives. Stage 4 asks, "Are the program objectives achieved in the implementation?"
At each of the four stages the defined standard is compared to actual program performance to determine whether any discrepancy exists. The use of discrepancy information always leads to one of four choices: 1) proceeding to the next stage, 2) recycling through the current stage after the standards or the program operations have been altered, 3) recycling back to Stage 1 to redefine the program, or 4) terminating the program.
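To make the mechanics of the model concrete, the sketch below (in Python; all names are illustrative choices of this paper's editor, not drawn from Provus's own materials) models a single stage's standard-versus-performance comparison and the four possible responses to discrepancy information. Note that selecting among the four responses remains the evaluator's judgment; the model only surfaces the discrepancies.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    """The four possible responses to discrepancy information."""
    PROCEED_TO_NEXT_STAGE = auto()
    RECYCLE_STAGE = auto()       # alter standards or operations, re-evaluate
    RECYCLE_TO_STAGE_1 = auto()  # redefine the program
    TERMINATE = auto()           # the option of last resort


@dataclass
class Stage:
    number: int
    name: str
    question: str


STAGES = [
    Stage(1, "Program Definition", "Is the program adequately defined?"),
    Stage(2, "Program Installation", "Is the program installed as defined in Stage 1?"),
    Stage(3, "Program Process", "Are the resources and techniques congruent with the goals?"),
    Stage(4, "Program Product", "Are the program objectives achieved in the implementation?"),
]


def find_discrepancies(standard: dict, performance: dict) -> list:
    """Compare actual program performance against the defined standard;
    every element of the standard that performance fails to match is a
    discrepancy."""
    return [element for element, expected in standard.items()
            if performance.get(element) != expected]


# A toy Stage 1 comparison (hypothetical elements, for illustration only).
standard = {"core course defined": True, "electives defined": True}
performance = {"core course defined": True, "electives defined": False}
print(find_discrepancies(standard, performance))  # ['electives defined']
```

The point of the sketch is that each stage reuses the same comparison; what changes from stage to stage is the standard being compared against.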
Use of the Provus Discrepancy Model

The Provus model is most effective under the following circumstances:
Provus Terminology Defined

The following definitions will be useful in understanding the evaluation which follows:

Inputs – a) the things the program is attempting to change, and b) things that are prerequisite to program operation but which remain constant.

Process – those activities which change inputs into desired outputs.

Outputs – the changes that have come about, including a) enabling objectives, b) terminal outcomes, and c) other benefits.

Enabling Objectives – intervening behaviors/tasks which students must complete as a necessary basis for terminal outcomes.

Terminal Outcomes – the behaviors the clients are expected to demonstrate upon completion of the program.

Design Criteria – a comprehensive list of program elements (input, process, output) that becomes the standard of performance in Stage 1.
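Read together, these definitions describe a simple data model. The sketch below (continuing the illustrative Python used above; the field names are this sketch's assumptions, not Provus's) records them as a Stage 1 design-criteria structure:

```python
from dataclasses import dataclass, field


@dataclass
class DesignCriteria:
    """The Stage 1 standard of performance: a comprehensive list of
    program elements, following the definitions above. Field names
    are illustrative."""
    inputs: list = field(default_factory=list)                # things to change, plus fixed prerequisites
    processes: list = field(default_factory=list)             # activities turning inputs into outputs
    enabling_objectives: list = field(default_factory=list)   # intervening student tasks
    terminal_outcomes: list = field(default_factory=list)     # behaviors expected at completion
    other_benefits: list = field(default_factory=list)


# A fragment of what ECM design criteria might record (hypothetical).
ecm = DesignCriteria(
    processes=["two 50-minute lectures and one 2-hour lab weekly"],
    terminal_outcomes=["graduate can prepare an electrical estimate"],
)
```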
Stage 1 Provus Evaluation of the ECM Specialization Program

Because this program evaluation is being conducted in the very formative stages of the program, and because the strengths of the Provus model focus closely on program definition and program installation, it was decided to use the Provus methodology in examining the electrical specialization program. One limitation of this evaluation is that it applies only the first stage of the Provus model, dealing with program definition, and leaves out the Provus examination of program installation, program process, and program product. The choice to examine only the first stage is a simple result of the current status of the program. The Provus model is explicit in its prohibition of moving on to subsequent stages until discrepancies in the current stage are remedied. The program has only recently been defined and is still in the early moments of program installation. According to the Provus model, the most important thing we can do right now is to examine carefully whether the program is adequately defined. Stages 2-4 would be carried out only after a thorough examination of Stage 1 to determine whether any discrepancies exist.

Stage 1 – Program Definition

The basic question posed in this stage is, "Is the program adequately defined?" Specific information needs to be gathered regarding the program purpose, required conditions, and transactions. Definitions of student entry behaviors, staff qualifications and training, curriculum and instructional methods and materials, facilities requirements, and administrative conditions must be derived. Further, the program definition must be analyzed for clarity, internal consistency, and comprehensiveness. Why do all of this? To provide justification for going on to the next developmental stage, program installation. The ECM specialization program could justifiably be terminated if the following conditions are determined at this stage:
Electrical Construction Management Specialization Program Definition Template

This template has been designed according to the Provus model of program evaluation for Stage 1. At various points in the template there are enumerated blank lines. These occur where no written program definition currently exists, and each represents a clear area of discrepancy within the Provus model. Correction of these discrepancies would need to occur before moving on to Stage 2 (Program Installation) under this model.
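In effect, the template behaves like a checklist in which every blank line is itself a Stage 1 discrepancy. A minimal sketch of that reading, in the same illustrative Python as above (the field names are hypothetical stand-ins suggested by the discrepancy comments that follow, not the template's actual entries):

```python
# Hypothetical Stage 1 template fields; None marks an enumerated blank
# line where no written program definition currently exists.
ecm_template = {
    "program rationale": "stated goals and objectives (nine items)",
    "scope: student population served": None,
    "scope: participating school selection criteria": None,
    "scope: faculty qualifications": None,
    "outcomes: objectives classified by type": None,
    "antecedents: students, staff, and support": None,
    "process: student activities and staff functions": None,
}


def stage_1_discrepancies(template: dict) -> list:
    """Every undefined element of the program design is a discrepancy
    that must be corrected before moving on to Stage 2."""
    return [name for name, definition in template.items() if definition is None]


print(stage_1_discrepancies(ecm_template))
```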
I. Program Goals & Objectives
Discrepancy Evaluation Comments. The statements of program rationale do provide some insight into the raison d'être of the program, but they are neither clear nor succinct. Recommendation – Consult with consortium members to clarify the statements and make them more succinct. The statements of objectives seem to be very thorough and complete. Recommendation – See the Program Outcomes recommendations under B.

II. Scope of the Program
Discrepancy Evaluation Comments. The intended scope of the program is not clearly defined. No information is provided which describes the student, school, or industry populations. No contextual information is provided. In spite of a major objective of "hiring new faculty," no description of existing faculty or faculty qualifications exists. A "consortium" was created to "spread the gospel," yet no criteria are defined for how participating schools were selected. Recommendation – Compile data on the proposed student population impacted by the program, including demographic statistics, critical mass requirements, and contextual background of the target audience. Define the characteristics of schools which may be affected by, or which may become participants in, the program (or a similarly transportable program). Create general job descriptions and preferred qualifications for faculty and staff participating in the program.

B. Program Outcomes
Discrepancy Evaluation Comments Regarding Outcomes. While there are nine "Goals and Objectives" cited under the overall statement of program objectives, they are not clearly classified or identified in the categories of major objectives, enabling objectives, or other benefits. It is difficult to determine which stated program objectives have priority or significance as currently presented in "laundry list" form. Recommendation – Provide consortium members with clear direction on how to classify objectives using this model, and classify the original nine stated program objectives. Where necessary or appropriate, add or delete objectives to clarify the intended outcomes of the program.

C. Program Antecedents – identify contextual information, participant characteristics, qualifications, and required program support.
Discrepancy Evaluation of Antecedents. The existing program definition is built upon a needs assessment survey. Little information exists describing antecedents in the areas of students, staff, and support. Recommendation – Confer with the consortium to identify and define program antecedents.

D. Program Process – clearly identify the processes which will be implemented to install the program.
Discrepancy Evaluation of Process. The course syllabi provide a description of student requirements for each course, but could be improved by more carefully and precisely identifying student activities, staff functions, and intra-staff communication and coordination. Experimental courses, or courses being taught for the first few times, need to be much more carefully defined and monitored so that discrepant outcomes can be recognized. While a specialization program has been identified and approved by the department curriculum committee, the specialization program as a whole does not have a clearly explained process or rationale. There have also been intra-staff communication and coordination problems, especially in the areas of internships and placement. These appear to be a result of the lack of clear program definition and communication.
Conclusions Regarding the ECM Specialization Program

The ECM specialization program has been well received by both students and industry, but according to the evaluation template applied in this paper it demonstrates some opportunities for improvement in the Stage 1 phase of program definition. Since the program has now moved into the installation stage (Stage 2), it is important that all discrepancies discovered in Stage 1 be corrected as soon as possible. Specific recommendations, drawn from the discrepancy comments above, include: consulting with consortium members to clarify and sharpen the program rationale; compiling data on the student, school, and industry populations the program is intended to serve; classifying the nine stated program objectives as major objectives, enabling objectives, or other benefits; identifying and defining program antecedents for students, staff, and support; and more precisely defining student activities, staff functions, and intra-staff communication and coordination, both in course syllabi and for the specialization program as a whole.
Conclusions Regarding Program Evaluation in ASC Programs in General

While agreeing with Aldous Huxley that "There's only one corner of the universe that you can be certain of improving, and that's your own self," this author believes that program administrators need to better understand that the installation and continuance of construction management curricula and programs, whether innovative or not, involve a high risk of failure and erosion over time. Programs need to be evaluated more carefully, and evaluators and curriculum committees need to better understand the kind of information administrators require if the cost of these risks is to be minimized (Provus, 1969). Both administrators and faculty must see evaluation as a continuous information-management process which serves program-improvement as well as program-assessment purposes. The complexity and concomitant high cost of effective evaluation must be recognized as a necessary management and time expense. Everyone concerned with construction management education must be willing to spend larger sums on evaluation if we are to maintain the highest quality of construction management education among ASC schools. Those involved in curricula review and program development must recognize:
References