ASC Proceedings of the 41st Annual Conference
University of Cincinnati - Cincinnati, Ohio
April 6 - 9, 2005         

 

Ranking Construction Programs

“The Academic Debate Begins”

 

Dr. William W. Badger

Arizona State University

 

Dr. James C. Smith

Texas A&M University

 

The authors propose a methodology for ranking Construction programs and suggest ASC as the forum for the debate.  Professional organizations such as ACCE, AGC, NAHB, and ENR are interested in ranking construction programs.  U.S. News ranks colleges, but not construction programs.  Construction programs are changing into more traditional academic programs with graduate education, research, fund raising, and faculty member production.  The discipline needs to discuss these changes, and creating a ranking system may be one means of discussing common issues.  Participating in a ranking process will provide benchmark data for self-analysis, identification of shortcomings, and information to justify change.  A ranking system must be complicated enough to seem scientific, and its results must match, more or less, people's nonscientific prejudices.  The ranking process also needs a reputational ranking element, supplied by program leaders, to balance the process.  First, a theoretical World Class goal should be established and programs benchmarked against that standard before rankings are compared between programs.  A series of "strawman" rankings should be conducted before a set of metrics is established, and the ranking model must be validated by a number of test runs.  Papers about controversial topics provide new ideas and concepts that need to be discussed and may drive change.

 

Key Words: Construction Education, Program Rankings, Performance Metrics

 

 

Introduction

 

The purpose of this paper is to propose methodologies for the ranking of Construction programs [This paper uses the term “Construction” programs to include all programs of construction higher education, including Construction Management, Construction Science, Building Construction, Construction Engineering, etc.].  The authors realize this is a very controversial topic and anticipate that this paper will be a vehicle to start and to conduct many academic debates.  The Associated Schools of Construction (ASC) may be the forum to have these debates on rankings.  A number of professional organizations such as the Associated General Contractors (AGC) and the National Association of Home Builders (NAHB) are studying the process of ranking of Construction programs.  The Engineering News Record (ENR) magazine started to rank Construction programs in 2001, but compromised by publishing a magazine article “profiling” Construction programs. 

 

In 2000, the leaders of Construction programs from Arizona State, Colorado State, Purdue, and Texas A&M conducted a workshop in Estes Park to create criteria for a theoretical World Class Construction Program.  These program leaders created their “vision” of World Class programs.  Using this information as a starting point, a model with several elements was developed and is shown at Attachment A.  The workshop did not develop metrics to measure World Class, but it was recommended that, once metrics were established, Construction programs could benchmark themselves against these standards for self-evaluation purposes, not rankings.  It was anticipated that self-evaluations would lead to improved performance.

 

In 2001, ENR held a workshop in New York City at their offices to develop the criteria for ranking Construction Programs.  The attendees represented academia, the construction industry, and the editorial staff of ENR.  The ranking criteria were discussed and developed, but the ranking concept was changed to profiling.  It was clear that the academic representatives were not prepared to discuss ranking, nor did they desire to have their Construction programs ranked.

 

During 2001 at the ASC annual conference, some attending faculty members were briefed on the proposed profiling process and provided their input.  An ENR editor took the comments and modified the data to be collected in preparation for publication of the profiling article.  ENR published the construction program profiles in the 29 October 2001 “C-Schools” edition. 

 

In 2004, the AGC's Higher Education Committee held a meeting in Scottsdale, Arizona to discuss the possibility of AGC sponsoring a Construction Program ranking system.  The resulting recommendation was to establish criteria, identify target programs and develop a system to profile Construction programs.  AGC would produce this profiling deliverable annually for use of their member companies and the construction industry.

 

In Badger (2001), the widespread nature of the American Council for Construction Education (ACCE) visiting team findings indicates the breadth of the problem within Construction higher education; the clear message, based on ACCE observations, is that “universities do not support the CM discipline” in faculty lines, salaries, or budgets.  The deficiencies listed below are samples of ACCE findings regarding weaknesses in personnel, funding, and developmental funding for faculty members.  These sample factors should be considered when metrics are developed to rank programs.  Construction programs will not achieve World Class status without university (institutional) investment and support.  Consequently, this support is a criterion and should receive significant weighting in any ranking process.

  1. Lack of sufficient number of faculty to meet teaching demands and research expectations

  2. Lack of support from the University to hire qualified faculty

  3. The Department Head’s salary is not equivalent to others on campus

  4. The Department is underfunded and has to utilize soft money for operations

  5. Salaries should be reviewed in terms of on-campus and national peer group programs

  6. There is an inequitable salary distribution among faculty

  7. Funds allocated for faculty travel and professional development are inadequate

  8. There is little support from administration for faculty development 

 

The faculty members who are employed in Construction programs reflect both the strengths and weaknesses of the current system.  They suffer from many of the same problems that every academic department suffers from, but also have a relatively unique set of difficulties.  These problems are discussed below and need to be included in any new ranking system.

 

Tufte and Hannestad (1988) found that construction management faculty turnover is an ongoing problem at many universities.  They cite the following as possible reasons for faculty turnover:  difficulties in new faculty hires, alternative opportunities for qualified faculty, poor economic considerations and lack of opportunities for advancement, excessive workload and poor morale.

 

Ciesielski (1997) investigated tenure policies and criteria and promotion considerations for full professor in CM and Civil Engineering schools.  Principally, research holds a more prominent place in CE schools than Construction schools, and in CE schools, research is much more important when making tenure and promotion (T&P) decisions. Conversely, teaching holds a more important place in Construction schools; it is more important than research in making tenure decisions.  Service is ranked last in both.

 

In Badger (2001), based on U.S. averages rather than location, the Engineering colleges seem to pay 6-8% better than Architecture colleges, and Architecture faculty salaries seem to be 6-8% higher than those in Technology colleges.  Depending on the job position, Engineering colleges’ salaries are 12-26% higher than those of faculty members employed at Technology-dominated colleges.

 

The above comments suggest a significant disparity in almost all areas of Construction faculty hiring, promotion, and salaries.  The dramatic decrease in applicants for vacant faculty positions, whether straight out of graduate programs or after some years of industry experience, demonstrates the effects of morale problems and the lack of a competitive salary structure.  Additionally, the declining number of faculty hires from US programs indicates that US schools are not meeting domestic needs.  The absence of a pool of young, PhD-holding applicants indicates a severe problem for the future.  While international hires and hires from outside the discipline can fill some of the positions, the inability to hire PhDs in the specific field debilitates programs drastically, which negatively affects future recruitment of students and faculty.

 

Construction as an academic discipline has not matured to the point of being able to produce its own PhDs in Construction.  Research funding is the fuel that pays for PhD candidates, and the movement into research is evolving slowly.  Outside pressure on universities to raise private funding is also changing the funding dynamics of higher education and needs to be addressed.

 

Reisberg (1997), in a report commissioned by U.S. News & World Report to analyze its methods for ranking colleges, concluded that the rankings lack substance: "The principal weakness of the approach was that the weights used to combine the various measures into an overall rating lack any defensible empirical or theoretical basis."  Mr. Thompson writes that the rankings are valuable because they help high-school students learn more about the colleges they are considering, and that they standardize the data that institutions make public.  The report noted that colleges and universities have long criticized the rankings for an apparently arbitrary weighting scheme that seems to change from year to year.  The criticism, the report said, "does not mean that the present weighting system is necessarily wrong, but it does mean that it is difficult to defend on any grounds other than the U.S. News staff's best judgment on how to combine the measures."  As for adding measures that weight academic rigor and student satisfaction, Mr. Cary said, "We do have some measures that we feel go some distance toward getting at academic rigor," such as the freshman academic profile and the percentage of faculty members with Ph.D.'s.  The academic rigor issue may be addressed by ACCE.

 

U.S. News & World Report (2002): This year, according to the U.S. News & World Report ranking, Princeton is the best university in the country and Caltech is No. 4.  This represents a pretty big switcheroo—last year, Caltech was the best and Princeton the fourth.  Of course, it's not as though Caltech degenerated or Princeton improved over the past 12 months.  As Bruce Gottlieb explained last year in Slate, changes like this come about mainly because U.S. News fiddles with the rules.  But the credibility of rankings like these depends on two semi-conflicting rules.  First, the system must be complicated enough to seem scientific.  And second, the results must match, more or less, people's nonscientific prejudices.

 

Ranking Engineering Colleges

In ranking engineering colleges, college deans do a subjective ranking of the other engineering colleges using their own personal knowledge.  This is a subjective rating based on the reputation of the individual engineering colleges, much like the coaches' poll for ranking football teams.  Creating new knowledge in terms of funded research ranks high, with a 25% weighting, while teaching receives little weighting.  Faculty resources focus on the status, reputation, and pedigree of the individual faculty members.  Having National Academy of Engineering members on the faculty is critical for good ratings.  “Merit Scholar” designated students are critical for the quality-students rating; the colleges need quality students to do the research.  It is recognized that the factors that make engineering colleges World Class will be different from those needed to make Construction programs World Class.  The weights are as follows:

 

Reputation (College): 40%
Research: 25%
Quality Students: 10%
Faculty Resources: 25%

 

 

World Class Model for Construction Programs

The 2000 Estes Park workshop created the criteria for a theoretical World Class Construction Program and produced the following “vision”.

 

  1. Professional Faculty

    Quality faculty members are determined by their academic credentials, their ability to educate future leaders and the quality and quantity of research they undertake.  The research that they conduct, and the issues they grapple with, must be current, timely and well regarded.  Demonstrating scholarly work by publishing is a significant element of the successful faculty member.

  2. Quality Students and committed Alumni

    World Class programs offer exceptionally high and well-demonstrated academic quality.  World Class organizations prepare students to be owners and presidents of companies.  Graduates from World Class organizations get spectacular first jobs. Alumni of these organizations give at a significantly higher rate and commit personal money as well as corporate dollars to their Construction programs.             

  3. School – World Class

    A World-Class School has a large stock of reputation capital that brings a competitive edge.  World Class programs are at the school level or higher.  Organizational position brings prestige.  World Class programs are endowed programs with dynamic leaders that form alliances internally and externally to make a vision a reality.  A World Class program is diverse, open and has recognition externally by others.

  4. State and Institutional Support

    A program cannot be successful without significant support from the parent institution.  Support not only comes in the form of funding, but also in the approval of adequate facilities, promotion of exceptional faculty and the autonomy to make strategic decisions. 

  5. International Engagement  

    Organizations that are World Class are members of the world community through industry partnerships, formalized relationships and joint degrees with other global educational institutions.

  6. Social Embeddedness (Outreach)

    At the core of social embeddedness is striking a balance between the esoteric and the practical to serve the needs of the local community as well as the larger national and international communities.  World class programs must have mechanisms that think through current and future issues and easily disseminate solutions back and forth through society.

  7. Interdisciplinary

    World Class organizations by default are interdisciplinary.  They reach out to other professional groups to form a network that further strengthens all other functions within the organization and creates new ventures in research and community support.

 

The challenge in ranking is that professionals, faculty members, and program leaders do not want to be graded, ranked, or profiled; however, profiling appears to be more acceptable than ranking.  Faculty members have accepted the fact that to become tenured in the university system they must meet certain academic criteria in teaching, research, and service.  Engineering faculty members seem to place more emphasis on research, and Construction faculty members place more emphasis on teaching.  

 

The above discussion presents the challenges in improving the image of the Construction academic discipline.  The authors believe that the development of a ranking system will probably highlight the current problems/challenges in Construction education and this recognition may help improve Construction higher education.  Ranking should create data, information, and trends about Construction programs and will most likely drive universities and the industry to help improve those academic units.

 

 

Ranking Construction Programs

 

It is appreciated that ranking construction programs would be different from rating universities or engineering programs because of the weighting distribution of the different categories.  In creating a construction program ranking system, the authors propose that program leaders do an annual rating of peer programs [the Engineering College model] that will be weighted 25% of the total score.  Additionally, the Construction programs would be grouped and ranked within those groups.  Other techniques to consider would be:

 

Determine what World Class metrics are and measure programs against that benchmark.  This is basically a self-evaluation study with data shared publicly, similar to an ACCE accreditation self-study, wherein a program evaluates its performance against ACCE standards.    

Once the process is developed, programs would have the option not to participate in the rankings. 

 

As a ranking process is developed and gains credibility, it is expected that more ASC programs would participate.  The debate and the development of the process must begin.

 

Ranking Methodology

 

Any attempt to evaluate, compare, and rank programs requires the use of criteria and metrics that drive valid, reproducible results. For a given element, a numerical value should be assigned to all criteria and metrics.  An initial indication and measurement of each of the categories (critical principles of faculty, students, program, and scholarly work) is obtained through a simple summation of the numerical results for each metric on an individual basis. 
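
To make the summation concrete, the following sketch (in Python; all category names, metric names, and scores are hypothetical illustrations, not data from any actual program) totals the metric scores within each category and then sums the category subtotals into an overall program score.

# Illustrative sketch of the proposed scoring arithmetic: each metric carries a
# numerical score, a category subtotal is the simple sum of its metric scores,
# and the program total is the sum of the category subtotals.
# All names and numbers here are hypothetical examples.

from typing import Dict

def category_subtotal(metric_scores: Dict[str, float]) -> float:
    """Sum the numerical results for the metrics in one category."""
    return sum(metric_scores.values())

def program_total(categories: Dict[str, Dict[str, float]]) -> float:
    """Sum the category subtotals into an overall program score."""
    return sum(category_subtotal(metrics) for metrics in categories.values())

example_program = {
    "Faculty": {"Teaching evaluations": 15, "Publications": 40},
    "Students": {"High school rank": 10, "Placement": 10},
}
print(program_total(example_program))  # 75 for this hypothetical example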

 

This ranking or measurement concept can be refined and may provide any detail or measurement desired.  Rankings with a high degree of confidence may be able to raise the standards of Construction education, improve the image of the industry, draw new practitioners and young people into the industry, and propel the recognition process beyond the Architecture, Engineering, and Construction industries.

 

 

The Strawman Ranking System
 
The authors propose that Construction programs be divided into five groups because there appear to be significant differences in size and maturity between programs.
Group 1 CM: Construction programs that have 300 to 700 student enrollments, 15-25 faculty members, accreditation, a graduate program, a funded research program in the $200K to $3M range annually, and are well published.  It is estimated that six to eight ASC programs nationally may be in this group.

Group 2 CE: Construction Engineering programs that have 50 to 200 student enrollments, 4-10 faculty members, accreditation, a graduate program, a funded research program in the $200K to $3M range annually, and are well published.  It is estimated that six to eight programs nationally may be in this group.

Group 3 CM: Construction programs that have 200 to 300 student enrollments, 6 to 15 faculty members, accreditation, a graduate program, and little research or publishing.  It is estimated that eight to 15 ASC programs nationally may be in this group.

Group 4 CM: Construction programs that have 100 to 200 student enrollments, 3 to 8 faculty members, accreditation, with or without graduate programs, and little research and/or publishing.  It is estimated that 15 to 20 ASC programs nationally may be in this group.

Group 5 CM: Construction programs that have under 100 student enrollments, 1 to 3 faculty members, no graduate program, accredited or not, and little research and/or publishing.  It is estimated that 15 to 20 ASC programs nationally may be in this group.

The authors established a strawman ranking with categories, factors, and metric details in Appendix A.  This Appendix has nine categories, 50 factors, a set of World Class metrics, the program metrics and related scoring.  If World Class is achieving a perfect score on each factor, the total points would be 1000. 

 

Category Weighting

 

The authors chose to use nine categories with relative weights as follows:

  1. Peer Ranking [250 points]
  2. Faculty [150 points]
  3. Students [150 points]
  4. Funding [100 points]
  5. Industry Support [100 points]
  6. Programs [100 points]
  7. Facilities [50 points]
  8. Globalism [50 points]
  9. Alumni [50 points]
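
The sketch below (Python) simply records these nine category weights and confirms that they add up to the 1,000-point strawman total.

# The nine strawman categories and their weights, as listed above.
CATEGORY_WEIGHTS = {
    "Peer Ranking": 250,
    "Faculty": 150,
    "Students": 150,
    "Funding": 100,
    "Industry Support": 100,
    "Programs": 100,
    "Facilities": 50,
    "Globalism": 50,
    "Alumni": 50,
}

# A perfect World Class score in every category would total 1,000 points.
assert sum(CATEGORY_WEIGHTS.values()) == 1000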

Metrics for each Category
 
Peer Ranking.  Within each group of construction programs, the program leaders would be asked to force rank all other programs.  Individual force rankings would be averaged and an overall force ranking determined.  The program ranked number one would receive 250 points; the program ranked number two would receive 225 points; etc.
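
A minimal sketch of this peer-ranking arithmetic follows (Python).  It assumes each responding program leader submits a forced ranking of the programs in the group; the rank positions are averaged, the programs are ordered by average rank, and points are assigned starting at 250 and dropping by 25 per position.  Tie-breaking and non-responses would still need rules; the function and data below are hypothetical.

from statistics import mean

def peer_ranking_points(ballots, max_points=250, step=25):
    """ballots: one dict per program leader mapping program name to rank (1 = best).
    Returns a dict mapping each program to its awarded peer-ranking points."""
    programs = list(ballots[0].keys())
    # Average the forced-rank positions submitted by the program leaders.
    average_rank = {p: mean(b[p] for b in ballots) for p in programs}
    # Order programs by average rank and award 250, 225, 200, ... points.
    ordered = sorted(programs, key=lambda p: average_rank[p])
    return {p: max(max_points - step * i, 0) for i, p in enumerate(ordered)}

ballots = [  # hypothetical forced rankings from three program leaders
    {"Program A": 1, "Program B": 2, "Program C": 3},
    {"Program A": 2, "Program B": 1, "Program C": 3},
    {"Program A": 1, "Program B": 3, "Program C": 2},
]
print(peer_ranking_points(ballots))  # {'Program A': 250, 'Program B': 225, 'Program C': 200}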

Faculty.  The authors feel that the true measure of any program is the quality of its individual faculty members.  World Class instruction is not driven solely by curriculum, but by the quality of the individual faculty member.  Great faculty members have outstanding classes, and their students become outstanding alumni.  The challenge is that faculty members generally do not want to be evaluated and compared with their peers or peer programs.  The metrics of student teaching evaluations, chairs’ exit interviews, publication records, research dollar expenditures, and results of CPC exam assessments could be used to rank faculty members and programs, but few programs have the desire, capability, or data to do this type of evaluation and ranking, or can stand the heat if they do.

 

For this paper, the authors have chosen to suggest seven faculty metrics (a scoring sketch follows the list):

  1. Student teaching evaluations [25 points].  Most universities require a student evaluation of every course every semester.  The metric used would be the average score for all courses for the most recent academic year.  This average score would be benchmarked against a perfect score and points awarded as follows:
    1. Average score is 90-100% of benchmark—25 points;
    2. 80-90% of benchmark—20 points;
    3. 70-80% of benchmark—15 points; etc.
  2. Faculty teaching awards [25 points].  Awards earned in the most recent academic year as follows:
    1. National awards—15 points each award
    2. College/University awards—10 points each award
    3. Program awards—5 points each award
    Points are awarded for each award up to a maximum of 25.
  3. Research Expenditures [40 points].  Average research expenditures for the most recent academic year for all tenured and tenure-track faculty.  The benchmark would be $200,000 average per faculty member per year.  Points would be awarded as follows:
    1. Average more than $200,000—40 points
    2. Average more than $150,000—30 points
    3. Average more than $100,000—20 points
    4. Average more than $50,000—10 points
  4. Publications [40 points].  Average number of refereed publications [journals and conference proceedings] for the most recent academic year for all tenured and tenure-track faculty.  The benchmark would be five publications average per faculty member per year.  Points would be awarded as follows:
    1. Average 5 or more—40 points
    2. Average 4 or more—30 points
    3. Average 3 or more—20 points
    4. Average 2 or more—10 points
  5. National leadership [20 points].  Points would be awarded for faculty holding national leadership positions in the most recent academic year as follows:
    1. President of a national organization—10 points
    2. Officer, Board member or Committee Chair of a national organization—5 points
    3. Journal Editor—5 points
    4. National Academy Membership--5 points
    Points are awarded for each position held up to a maximum of 20 points.
  6. Diversity [30 points].  Each university has definitions as to the meaning and intent of faculty diversity.  Usually faculty diversity is intended to increase the hiring of minorities [as defined by the university] and females.  Points would be awarded for programs improving their faculty diversity during the most recent academic year as follows:
    1. One new hire minority/female, tenured or tenure-track faculty member added during the most recent academic year—30 points
  7. Longevity [20 points].  Faculty longevity is a measure of the program’s success in recruiting, retaining and promoting faculty.  Points would be awarded based on the average longevity of all tenured and tenure-track faculty as follows:
    1. Average 10 years or more—20 points
    2. Average 8 years or more—15 points
    3. Average 6 years or more—10 points
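
The seven faculty metrics above share a common tiered pattern: a measured value is compared against descending thresholds, and the first threshold met determines the points.  The helper below (Python) is a hypothetical illustration of that pattern, applied to the research-expenditure and publication tiers listed above; the same approach would carry over to most of the student, funding, and industry metrics that follow.

def tiered_points(value, tiers):
    """tiers: list of (threshold, points) pairs, highest threshold first.
    Returns the points for the first threshold the value meets, otherwise 0."""
    for threshold, points in tiers:
        if value >= threshold:
            return points
    return 0

# Tiers taken from the faculty metrics above (averages per tenured or
# tenure-track faculty member for the most recent academic year).
RESEARCH_EXPENDITURE_TIERS = [(200_000, 40), (150_000, 30), (100_000, 20), (50_000, 10)]
PUBLICATION_TIERS = [(5, 40), (4, 30), (3, 20), (2, 10)]

print(tiered_points(175_000, RESEARCH_EXPENDITURE_TIERS))  # 30 points
print(tiered_points(3.4, PUBLICATION_TIERS))               # 20 points
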
Students.  Like quality faculty, quality students are essential for a quality program.  The authors have chosen 10 metrics to define the ranking for students (a scoring sketch for the student/faculty ratio metric follows the list).
  1. High School Rank [20 points].  Most universities track the average high school rank of entering freshmen.  The metric chosen is the percentage of entering freshmen in the last class who are in the top 10% of their high school class. The benchmark chosen is 50% of entering freshmen in the top 10% of their high school class.  Points would be awarded as follows:
    1. More than 50% in the top 10%—20 points
    2. More than 25% in the top 10%—10 points
    3. More than 10% in the top 10%—5 points

  2. National Merit Scholars [10 points].  Another measure of the quality of high school graduates is the number of National Merit Scholars.  The benchmark is set at 10.  Points would be allowed as follows:
    1. More than 10—10 points
    2. More than 5—5 points

  3. SAT/ACT scores [20 points].  Most entering freshmen are required to take the SAT and/or ACT test and report their test scores.  Programs can determine the average for all students in the program for the most recent fall semester/quarter.  The benchmark is set at 1200 SAT or equivalent ACT score.  Points would be awarded as follows:
    1. Average SAT score greater than 1200—20 points
    2. Average SAT score greater than 1100—15 points
    3. Average SAT score greater than 1000—10 points
    4. Average SAT score less than 1000—0 points

  4. Student/faculty ratio [50 points].  Student/faculty ratios are a widely used metric and tend to provide a measure of class sizes and personal attention given to students by faculty.  The student/faculty ratio metric is computed for the most recent fall term or quarter and includes all faculty [FTE; tenured, tenure-track and adjunct] and all students [FTE] in the program.  The benchmark is set at 20:1.  Points would be awarded as follows:
    1. Student/faculty ratio 20:1 or less—50 points
    2. Student/faculty ratio 25:1 or less—40 points
    3. Student/faculty ratio 30:1 or less—30 points
    4. Student/faculty ratio 35:1 or less—20 points
    5. Student/faculty ratio more than 35:1—0 points

  5. Student funding [40 points].  An important metric is the amount of institutional funding [state funding] provided for the students in the program.  The funding per student is measured at the most recent fall semester or quarter and the metric is obtained by dividing the total institutional funding provided to the program by the number of students [FTE] in the program.  The benchmark is set at $8000/student FTE.  Points would be awarded as follows:
    1. More than $8000/student—40 points
    2. More than $7000/student—30 points
    3. More than $6000/student—20 points
    4. More than $5000/student—10 points
    5. Less than $5000/student—0 points

  6. Diversity [20 points].  Diversity of the student body is a goal of every university.  It is a special challenge for construction programs to recruit female students given the white, male dominance of the management ranks of the construction industry.  The authors struggled with a single metric for diversity and chose to focus on females.  The metric selected is the percentage of females in the entire program student body.  The benchmark chosen is 20%; points would be allowed as follows:
    1. More than 20% female—20 points
    2. More than 10% female—10 points
    3. Less than 10% female—0 points

  7. Placement [10 points].  The placement of program graduates is considered an important metric.  Construction programs have enjoyed strong placement for many years, dictating an aggressive benchmark.  The metric is the percentage of graduates who had accepted a position with industry at graduation for the most recent year.  Students committed to graduate school or going into the military are considered to have placement at graduation.  The benchmark is set at 90%.  Points would be allowed as follows:
    1. More than 90% placement—10 points
    2. Less than 90% placement—0 points

  8. Starting Salary [10 points].  Starting salary for new graduates is an indicator of the value that the industry places on the program.  However, the authors had a difficult time choosing a metric measure.  It was decided to use the average starting salary of Civil Engineering [CE] graduates as the benchmark.  The average starting salaries are compared for the most recent academic year.  Points would be awarded as follows:
    1. Starting salaries equal to or greater than CE starting salaries at the same institution—10 points
    2. Starting salaries less than CE starting salaries—0 points

  9. Retention [10 points].  Student attrition is a measure of the quality of the program. The metric used is the percentage of student losses [students lost/total student FTE] for the previous academic year.    The benchmark is set at 5%.  Points would be awarded as follows:
    1. Retention 95% or greater—10 points
    2. Retention less than 95%—0 points

  10. Associate Constructor Exam [10 points].  Many construction programs are now requiring graduating seniors to sit for the Associate Constructor examination in their last semester.  This is a day-long examination and can lead to the designation of Certified Professional Constructor.  The authors chose to include this metric since it is a strong indicator of the strength of the program.  The metric is the percentage of students who pass the examination; the national pass rate is about 55%.  The benchmark is set at a 65% pass rate.  The program pass rate is for the most recent academic year.  Points would be allowed as follows:
    1. Programs with a pass rate of 65% or more—10 points
    2. Programs with a pass rate equal to or exceeding the national average pass rate—5 points
    3. All others—0 points
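
The student/faculty ratio metric runs in the opposite direction from most of the tiers above: a lower ratio earns more points.  The short sketch below (Python; the function name and example numbers are hypothetical) computes the ratio from FTE counts and awards points using the cutoffs listed under metric 4.

def student_faculty_ratio_points(student_fte, faculty_fte):
    """Award points per the student/faculty ratio cutoffs: lower ratios score higher."""
    ratio = student_fte / faculty_fte
    # Cutoffs from metric 4 above: 20:1 or better earns the full 50 points.
    for cutoff, points in [(20, 50), (25, 40), (30, 30), (35, 20)]:
        if ratio <= cutoff:
            return points
    return 0  # ratios worse than 35:1 earn no points

print(student_faculty_ratio_points(student_fte=320, faculty_fte=12))  # ratio 26.7, so 30 points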

     
Funding.  Construction programs are funded from multiple sources; the dominant source is usually institutional funding [state funding], although universities depend more and more on non-institutional funds.  Private donations are becoming increasingly important, and earned income from activities like continuing education offerings is often necessary to provide funding for student enrichment and faculty development.  The authors have chosen four metrics for this category.
  1. Endowments [50 points].  Endowments are becoming the financial foundation of many programs.  The benchmark set here is based on the normal endowments set up by universities when a large donor favors the program.  Points would be awarded as follows:
    1. For a program endowment—25 points
    2. For an endowed Chair—10 points
    3. For an endowed Professorship—5 points
    4. For endowed scholarships with a corpus in excess of $1 million—5 points
    Total points awarded may not exceed 50.

  2. Earned Income [10 points].  Income may be earned from many sources and must accrue to the program for its use.  A dollar metric is set for the awarding of points.  The dollar amount is for the most recent academic year.
    1. Earned income in excess of $100,000—10 points
    2. Earned income in excess of $50,000—5 points
    3. Earned income less than $50,000—0 points

  3. Donations [10 points].  There are many donors and many forms of donations: cash, equipment, and in-kind services.  Once again the authors have chosen to use a dollar metric.  The dollar amount is for the most recent academic year and does not include donations that are to be endowments.  Points would be awarded as follows:
    1. Donations in excess of $100,000—10 points
    2. Donations in excess of $50,000—5 points
    3. Donations less than $50,000—0 points

  4. Institutional funding [30 points].  Institutional funding has already been measured under the Student category.  However, the authors feel another metric is appropriate here and that relates to overall program funding for faculty, staff and operating funds.  Most universities maintain statistics on the cost to produce one semester credit hour [$/SCH].  The metric used here will measure the program $/SCH as compared to comparable figures for other academic programs at the same university.  The measure is $/SCH for the most recent academic year.  Points would be awarded as follows:
    1. Program $/SCH 10% lower than other college/university programs—30 points
    2. Program $/SCH within +/- 10% of other college/university programs—15 points
    3. Program $/SCH more than 10% higher than other college/university programs—0 points

 


Industry support.  Industry support is crucial for construction programs of higher education.  Industry employs program graduates and interns, helps with program quality improvement, and provides targeted funding support.

  1. Industry Advisory Committee [50 points].  Most accrediting bodies require that construction programs have industry advisory committees [IAC].  The metric seeks to establish the quality of IACs. Exemplary IACs are well organized with current by-laws, meet at least twice a year to conduct business, have robust membership representing a cross section of industry sectors, provide discretionary funding to the program, and actively involve the membership in matters of program curriculum, facilities, research, and guest speakers.  Strong IACs  participate in at least half of the activities outlined above for Exemplary IACs.  Points would be awarded as follows:
    1. Exemplary IAC—50 points
    2. Strong IAC—25 points
    3. All other—0 points

  2. Continuing education [25 points].  Strong construction programs offer continuing education opportunities for the construction industry.  The metric chosen by the authors is the number of continuing education courses offered in the most recent academic year.  The benchmark is set at five.  Courses are defined as at least a three-day syllabus.  Points would be awarded as follows:
    1. Five or more courses offered—25 points
    2. Three or more courses offered—15 points
    3. All other—0 points

  3. Student Chapters of professional organizations [25 points].  Student chapters of professional organizations offer leadership opportunities and foster stronger relationships between programs and industry.  The metric chosen is the number of active student chapters in the most recent academic year.  The benchmark is set at five.  Points would be awarded as follows:
    1. Five or more active student chapters—25 points
    2. Three or more active student chapters—15 points
    3. All other—0 points

 
Programs.  Construction programs are found in many different places in the academic arena.  The authors feel that some metrics should be offered to measure the program’s place in the academic community and its stature in that community.  Four metrics are used.
  1. Program title [30 points].  The title of the program and its place within the academic hierarchy is important.  Points would be awarded as follows:
    1. Title “College”—30 points
    2. Title “School”—20 points
    3. Title “Department”—10 points
    4. All other—0 points

  2. Degrees offered [30 points].  It is important for construction programs to evolve to offer graduate programs.  Points would be awarded as follows:
    1. PhD offered—30 points
    2. Masters offered—15 points
    3. All other—0 points

  3. Accreditation [20 points].  Accreditation is an essential peer review process that promotes excellence and continuous improvement.  Points would be awarded as follows:
    1. ACCE/ABET/NAIT accreditation with no weaknesses—10 points
    2. Outside peer review—5 points
    3. Other accreditation—5 points

  4. Internships/co-op [20 points].  Strong programs have mandatory internship/co-op requirements.  Points would be awarded as follows:
    1. Program has a mandatory semester internship program—20 points
    2. Program has a mandatory summer internship—10 points
    3. All other—0 points

Facilities.  Excellent programs need excellent facilities.  The authors propose four metrics in this area.
  1. Dedicated facility [20 points].  Points would be awarded as follows:
    1. Separate program building or dedicated space in a quality facility with conspicuous “front door” identity—20 points
    2. All other—0 points

  2. Faculty offices [10 points].  Faculty offices should provide adequate space, be appropriately furnished and provide privacy for counseling students.  Points would be awarded as follows:
    1. Quality, private space of at least 150 sf for all faculty—10 points
    2. All other—0 points

  3. Laboratories [10 points].  Some program coursework requires laboratory space for excellent teaching.  Points would be awarded as follows:
    1. Excellent laboratory facilities—10 points
    2. All other—0 points

  4. Quality classrooms [10 points].  Quality teaching requires quality classrooms—classrooms with appropriate lighting, environmental systems, audio-video equipment, and seating.  Points would be awarded as follows:
    1. Quality classrooms available—10 points
    2. All other—0 points

 


Globalism.  Construction is a global business.  Programs must foster globalism through study abroad, international exchanges, and courses that teach the global nature of our shrinking world.  The authors propose four metrics.

  1. Study Abroad [20 points].  A study abroad program has multiple opportunities for students to study outside the U.S.  Points would be awarded as follows:
    1. Study abroad program available and utilized—20 points
    2. All other—0 points

  2. International student exchange program [10 points].  Agreements with international universities to provide for student exchanges enrich both universities.  Points would be awarded as follows:
    1. Exchange agreement in place and utilized—10 points
    2. All other—0 points

  3. International faculty exchange [10 points].  Just as student exchanges can enrich programs, faculty exchanges are excellent vehicles for students to see other cultures.  Points would be awarded as follows:
    1. Faculty exchange agreement in place and exercised in the most recent academic year—10 points
    2. All other—0 points

  4. Cooperative degree program [10 points].  A joint degree offering is one of the best means of international collaboration.  Points would be awarded as follows:
    1. Cooperative degree program in place and exercised—10 points
    2. All other—0 points

 


Alumni.  Active and committed alumni are a strong indication of program strength.  The authors have selected three metrics to quantify this support.

  1. Membership in alumni association [20 points].  Every university has an alumni association.  The metric chosen is the percentage of alumni from the program who are members of the association.  The chosen benchmark is 25% from the most recent available census.  Points would be awarded as follows:
    1. Programs with more than 25% alumni membership—20 points
    2. Programs with more than 15% alumni membership—10 points
    3. All other—0 points

  2. Alumni giving [20 points].  Strong programs receive financial support from their alumni.  Choosing a measure for this metric is a challenge, but the authors have chosen the percentage of alumni from the program who made a financial contribution to the program in the most recent academic year.  The benchmark chosen is 15%.  Points would be awarded as follows:
    1. More than 15% of alumni contributing—20 points
    2. More than 10% of alumni contributing—10 points
    3. All other—0 points

  3. Alumni company leadership and ownership [10 points].   Program alumni move up rapidly in the construction industry to positions of responsibility.  These alumni are in a position to influence the success of the program.  The metric chosen here is the number of alumni who have risen to the top ranks of their companies.  The measure is the number of CEOs and company owners in the most recent academic year.  The benchmark is set at 20.  Points would be awarded as follows:
    1. More than 20 CEOs/owners—10 points
    2. More than 10 CEOs/owners—5 points
    3. All other—0 points

     
 
Who Should Do the Ranking
 

The question of who should do the ranking raises another whole set of issues.  The Associated Schools of Construction is the largest organization of construction higher education programs, yet a significant majority of the membership is expected to oppose any ranking system.  Accreditation bodies such as ACCE and ABET have not historically ranked programs and are not expected to embrace ranking concepts.  The American Institute of Constructors is a national organization focused on the professionalism of the individual constructor and probably would not consider a program ranking system appropriate.

 

For colleges and universities, the most often quoted ranking system is by U.S. News and World Report—the media.  ENR has shown interest in a ranking system and may evolve the current profiling system into a ranking system using its own criteria.

 

Professional associations have shown interest in a ranking system from time to time.  Most recently, the Associated General Contractors and the National Association of Homebuilders have initiated ranking system discussions.  The fact that these two very independent professional associations are considering ranking systems lends even more complexity to the issue; they can be expected to come up with very different sets of ranking metrics.

 

There is no clear answer to this issue.  However, the authors feel confident that in the not too distant future, someone will decide to rank programs of construction higher education; virtually every other academic program in existence is ranked by someone, and construction programs have matured to the point where their time has come.

 

 

Conclusion
 

These conclusions are offered to open the debate and to engage the construction higher education community.

 
  1. Construction programs are going through a transformational change, from programs focused solely on undergraduate teaching to more traditional academic programs with graduate education, research, fund raising, and well-rounded, aggressive expectations of faculty members.

  2. The academic discipline of Construction needs to discuss the changes and the impact on programs.  Creating a ranking system may be an appropriate means to discuss these issues.

  3. Participating in a ranking process will provide program leaders with benchmark data for self-analysis, identification of shortcomings, and information to promote and justify change.

  4. Construction programs are different from engineering programs and need different metrics.  The differences need to be discussed, understood, and reconciled.

  5. The program ranking system must be complicated enough to seem [or “appear”] scientific and the results must match, more or less, people's nonscientific prejudices.

  6. The ranking process needs a reputational ranking element from program leaders to balance the process.

  7. The first step is to establish a theoretical World Class goal as the benchmark standard before comparing rankings between programs.

  8. A series of strawman rankings needs to be conducted before establishing a set of factors and metrics.  The ranking model must be validated by a number of test runs.

  9. Design a rating system that some non-traditional leaders can adopt first, so that others will follow.

  10. Papers about controversial topics provide new ideas and concepts that need to be discussed and may drive change.

 

 

References

 

Reisberg, L. (1997). Independent report in 1997 assailed substance of 'U.S. News' college rankings.

 

Badger, W.W. (2001). “The CM Faculty Pipeline Needs Renovating.” ASC Proceedings paper.

 

Badger, W.W. (2001). Letter to the incoming president of ACCE, Bill Barnes.

 

Badger, W.W. and Robson, K. (2000). Raising expectations in construction education. ASCE Construction Congress VI, USA, 1151-1164

 

Christensen, K. and Rogers, L. (1992).  Teaching, service, and research, in evaluation of construction management faculty for tenure and promotion. ASC Proceedings of the 28th Annual Conference, Auburn, Alabama, 79-83

 

Ciesielski, C.A. (1997). Tenure and promotion: A comparison between Construction Management and Civil Engineering. ASC Proceedings of the 33rd Annual Conference, University of Washington, Seattle, WA, 21- 31

 

 

Appendix

 

APPENDIX A. PROGRAM EVALUATION WORKSHEET

CATEGORY               WEIGHT    METRIC                                     WEIGHT   CASE I SCORE   CASE II SCORE

A  Peer Ranking           250                                                             200             200

B  Faculty                150    1. Student Teaching Evaluations               25          15              15
                                 2. Faculty Teaching Awards                    25          30              20
                                 3. Research Expenditures                      40          20              10
                                 4. Publications                               40          40              20
                                 5. National Leadership                        20          20              15
                                 6. Diversity                                  30           0               0
                                 7. Longevity                                  20          15              15

C  Students               150    1. High School Rank                           20          10              10
                                 2. National Merit Scholars                    10           0               0
                                 3. SAT/ACT Scores                             20          10              15
                                 4. Student/Faculty Ratio                      50          20              20
                                 5. Student Funding                            40           0               0
                                 6. Diversity                                  20          10              10
                                 7. Placement                                  10          10              10
                                 8. Starting Salary                            10          10              10
                                 9. Retention                                  10          10              10
                                 10. Associate Constructor Exam                10          10              10

D  Funding                100    1. Endowments                                 50          35              30
                                 2. Earned Income                              10          10              10
                                 3. Donations                                  10          10              10
                                 4. Institutional Funding                      30           0               0

E  Industry Support       100    1. Industry Advisory Committees               50          50              50
                                 2. Continuing Education                       25          25              15
                                 3. Student Chapters                           25          15              25

F  Programs               100    1. Program Title                              30          20              10
                                 2. Degrees Offered                            30          15              15
                                 3. Accreditation                              20          15              15
                                 4. Internship/Co-op                           20          20              20

G  Facilities              50    1. Dedicated Facility                         20          20               0
                                 2. Faculty Offices                            10          10              10
                                 3. Laboratories                               10          10               0
                                 4. Quality Classrooms                         10          10              10

H  Globalism               50    1. Study Abroad                               20          20              20
                                 2. International Student Exchange             10          10              10
                                 3. International Faculty Exchange             10          10               0
                                 4. Cooperative Degree Program                 10          10               0

I  Alumni                  50    1. Membership in Alumni Association           20          10              10
                                 2. Alumni Giving                              20          10              10
                                 3. Alumni Company Leadership/Ownership        10          10              10

   Total                 1000                                                             775             670
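
As a cross-check on the worksheet arithmetic, the sketch below (Python) sums the Case I metric scores by category as read from the table above and confirms the 775-point total; the Case II column can be verified the same way.

# Case I category subtotals, summed from the metric scores in the worksheet above.
case_i_subtotals = {
    "A Peer Ranking": 200,
    "B Faculty": 15 + 30 + 20 + 40 + 20 + 0 + 15,                    # 140
    "C Students": 10 + 0 + 10 + 20 + 0 + 10 + 10 + 10 + 10 + 10,     # 90
    "D Funding": 35 + 10 + 10 + 0,                                   # 55
    "E Industry Support": 50 + 25 + 15,                              # 90
    "F Programs": 20 + 15 + 15 + 20,                                 # 70
    "G Facilities": 20 + 10 + 10 + 10,                               # 50
    "H Globalism": 20 + 10 + 10 + 10,                                # 50
    "I Alumni": 10 + 10 + 10,                                        # 30
}

assert sum(case_i_subtotals.values()) == 775  # matches the worksheet total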