ASC Proceedings of the 42nd Annual Conference
Colorado State University Fort Collins, Colorado
April 20 - 22, 2006                 

 

Continuing the Ranking Game: Using ASC Publication as One Criterion for the Ranking of C-schools

 

Kenneth C. Williamson III, Ph.D. and Richard A. Burt, MRICS, Ph.D.

Texas A&M University

College Station, Texas

 

This research profiles, measures, and ranks ASC publications as one criterion in a more systematic and scientific C-school top-ranking system. Although there are three potential end users of such a ranking, i.e., students, programs, and employers, this criterion will mainly assist programs in cross C-school evaluation. The results provide two levels of ASC publication rankings, 1) long-term and 2) current, calculated with equity weights selected to negate the effects of program size and maturity.

 

Key Words: Profiling, Ranking, C-schools, Publications

 

 

Rationale

 

The public construction program (C-school) ranking game can trace its meager beginnings to Engineering News Record (ENR) and the efforts of a select group of peer construction program leaders beginning in 2000. ENR attended the Program Leaders Meeting at the 2001 Annual Conference of the Associated Schools of Construction (ASC) in a vain attempt to garner appropriate C-school top-ranking criteria from the attending program representatives. It is the opinion of the authors of this paper that ENR failed in the attempt, and that they discovered that construction programs, as a general rule, do not desire a national C-school top-ranking system. In October of that year ENR published the results of their effort, which did not afford a systematic top-ranking of C-schools, but did report student enrollment as a ranking of the 20 largest C-schools, along with profiles of fourteen other data points across 88 programs (Rosenbaum, Rubin, & Powers, 2001).

 

At the 2005 Annual ASC Conference held at the University of Cincinnati, the debate on top-ranking construction programs began again (Badger & Smith, 2005). These authors proposed a straw-man ranking system, developed in a non-public workshop at Estes Park, Colorado, to rank construction programs across five groups over nine categories, thereby creating their “vision” of a World Class C-school. This article’s authors find it interesting that the private Estes Park workshop recommended that, once a standard metric was established, ASC programs would be able to benchmark themselves against these standards; further, that this visionary information would be used for self-evaluation purposes, not rankings, and would theoretically lead to improved program performance. Counter to the workshop’s recommendation not to rank, their article is a ranking article with overtones of competition by selected program size.

 

Profiling Data

 

The ASC is the only academic organization exclusively representing construction higher education programs. There are others who dabble in construction education, e.g., the American Society for Engineering Education (ASEE), but these organizations have agendas different from those of the member programs of the ASC, which is the reason the ASC exists. The ASC began profiling its membership and publishing the results early in its history through annual institutional program, curriculum, student, textbook and salary surveys. The earliest profiler identified is a 1974 Course and Curriculum Information publication covering the sixteen member programs (ASC, 1974). In 1982 James Rodger, Memphis State University, published the Minutes of the 18th General Meeting, which profiled student enrollments of the fifty members (Rodger, 1982). In 1986 Dr. Kent Davis, John Brown University, conducted and published the first identified ASC Textbook Survey (Davis, 1988). Data from 1988-89 was published from the ASC Institutional Survey, including student enrollment at both undergraduate and graduate levels and program faculty counts (ASC, 1989). In 1989 the second of two topics identified by the ASC Board as an essential organizational function was to serve as an information source to members, industry and the public (Badger, 1989). Steve Schuette, Purdue University, conducted the first published faculty salary survey (Schuette, 1995). In 1995 the ASC went electronic through the ASC Web Site; forms were created, and data for the ASC annual surveys were called for, collected, archived and reported until 2002. At the current time only the salary survey has been maintained. One could easily say that the ASC has lost the mission of the early 1980s in sharing program profiles. Without question, it is the responsibility of the ASC to profile its membership and publish that data. Without cross-program data it will be almost impossible for the ASC to advance the mission of its membership. To that end it has failed its membership.

 

Any profile data collected may be incorrect, and as such any ranking system is precarious because the underlying data may not be reliable. Much of the data used is supplied by the program leaders themselves and is often a reflection not of reality but of what they want to report. Different programs may have different ways of counting courses, faculty, or students, or it is entirely possible that the agent reporting the data simply made a mistake that was not caught. Quantitative data also strips away information about the qualitative differences between programs. In some cases quantitative profile data would make programs look more similar than they really are, but in most cases it makes programs look more different in quality than they really are. Programs probably vary more in character and tradition than they do in quantity and size. The problem with a totally quantitative ranking system is that many closely ranked programs will have very similar total profile criteria scores, and even programs with quite different rankings can have similar profiles. With an insignificant change in the profile data, such as two additional students passing the AIC exam, a program could leap over competitors with similar total scores. So, top-rankings are treacherous because they mask the fact that the differences between C-schools are negligible on many profile criteria. In reality, for C-schools with close profile scores and ranks, whether the university has a good football team is as important as the difference in rank.

 

Ranking System

 

Profiling is integral to program advancement and any systematic approach to C-school ranking, but the C-school ranking game is a very dangerous adventure. This discussion on ranking is drawn from the works of Jeffrey E. Stake, Indiana University School of Law.

 

“There are lies, damned lies, statistics, and rankings. Rankings create hidden problems for people who are choosing a school, people who are affiliated with a school, and for society at large. They can discourage applicants, students, and teachers. They can cause applicants to choose the wrong school and schools to admit the wrong applicants. Rankings can even cause schools to teach less well by deflecting a curriculum from what matters to what will improve rankings” (Stake, 2005).

 

Three principal end users are generally identified within the ranking literature: 1) students, 2) programs, and 3) employers. Rankings are highly dependent on the choice of factors and the equity weights applied to those factors. There are no correct weights that fit all users, and ranking systems with arbitrarily fixed weights will potentially mislead students, programs and employers. The nine categories and embedded sub-categories proposed by Badger & Smith are quite extensive and may be appropriate to their expectations. However, their assignment of a value system is considered by this paper’s authors to be inappropriate, and it exemplifies their program-leader and engineering bias. Each ranking user is looking for a completely different rank based on a unique value system, problematically from the same data set. Students and their parents are most likely interested in searching for and finding a program that fits their unique expectations. These expectations might include the quality of teaching, the accessibility of faculty, the availability of internship programs, the potential for employment during and after their educational experience, the strength of the alumni network, as well as housing, educational costs, and local leisure opportunities. Not found within these search expectations might be faculty research and scholarly publication, donations, institutional funding, advisory boards, continuing education (which they will not need until after they graduate) and, above all, faculty offices. One could probably effectively argue that a potential student or even a potential employer would find no value in the ranking criteria proposed by this paper; construction programs, however, should value the criteria.

 

Badger and Smith identify five program groups separated by significant differences in size and maturity. They propose a stratification of the ASC in the manner of the controversial “peer-six” collaboration. A better solution would be to normalize the data prior to any ranking to adjust for, or negate, the effects of program size and maturity. For some criteria higher numbers are better, while for other criteria, such as faculty-to-student ratios and tuition prices, lower scores are better. These reversed-order scores can be normalized by subtracting them from a constant to reverse their order, making the best value the higher score within the ranking. Following this iteration, all scores on one criterion can be divided by the standard deviation of those scores. The end result is that the criteria are equally distributed and therefore carry equal weight irrespective of the criterion factors. The size and maturity of a program has little to do with the quality of teaching or program content.
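
A minimal sketch of this normalization in Python, using hypothetical school names and ratios (illustrative only, not data from this study): lower-is-better scores are reversed by subtracting from a constant, then the criterion is divided by its standard deviation so it carries equal weight.

```python
from statistics import stdev

# Hypothetical faculty-to-student ratios for four programs (lower is better).
ratios = {"School A": 12.0, "School B": 18.0, "School C": 25.0, "School D": 30.0}

# Reverse the order by subtracting from a constant (here, the maximum ratio),
# so the best (lowest) ratio becomes the highest score.
constant = max(ratios.values())
reversed_scores = {school: constant - r for school, r in ratios.items()}

# Divide every score on this criterion by the criterion's standard deviation,
# giving the criterion the same spread (and hence weight) as any other.
sd = stdev(reversed_scores.values())
equalized = {school: s / sd for school, s in reversed_scores.items()}

print(equalized)  # School A now holds the highest equalized score
```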

 

Rankings may erroneously make some programs look bad. Users of the rankings may not know much about construction education and may be led to believe that the programs at the bottom of the list are inferior, when they are in fact excellent academic programs that produce superior results. A rank’s meaning is totally dependent upon the values of the end user, and any ranking system must provide for adjustable value weights. Ranking equity weighting will most likely be defined by minute differences in composite C-school scores. As with all games there are winners and losers, and there will be bottom-ranked programs as well as top-ranked programs. Construction education, program profiling and even top-ranking systems have many potential winners. The negative side effect of ranking programs, as we do other competitions, may be to cause users to forget that, unlike games, one program’s loss does not give another program meaning; educational programs do not need to defeat other programs to be winners.

 

Problem Statement

 

None of the nine categories proposed in the Badger and Smith paper mentions research in the categorical title. Research is included in the model under the category Faculty, which accounted for 150 points out of a total of 1,000 points; of these 150 points, 40 are allotted to Publications. This paper’s authors maintain that publications within this academic construction organization are one of the most visible indicators of construction research activity, and to some extent the most easily and factually quantified, and as such they demand significant weight in any proposed ranking or assessment scheme. We consider the Badger and Smith model’s publication criterion flawed in two respects. First, it is an annual snapshot of faculty publications; as we all know, there is significant turnover in faculty within construction programs, and the push for publication is related to tenure-seeking faculty. There is a significant probability that programs with the youngest, most tenure-seeking faculty will skew the criterion scoring positively in favor of those programs, and it is probably safe to say that such young faculty are not yet as representative of a program’s academic quality as their more mature colleagues. Second, we are also aware of faculty who are not aligned with construction while teaching within construction programs. Most engineers and architects teaching within construction programs, as a general rule, do not make use of the ASC publication opportunities; they support the publication efforts of the disciplines from whence they came. To these, construction remains a second-hand resource.

 

This paper continues the ranking debate by proposing ranking criteria for C-schools based on research output profiles exemplified by “long-term” and “current” ASC publication. This first attempt at ranking research output uses the publications database of the ASC. This research profiles and evaluates the publication record of construction education programs in the (International) Journal of Construction Education (JCE) and the Annual Proceedings of the ASC in order to propose an effective research output criterion set and measurements to be included in future models for C-school top-ranking.

 

 

Method

 

Sample Description

 

The populations under analysis are the publications of the ASC: the Annual Proceedings and the (International) Journal of Construction Education. The ASC website at http://www.ascweb.org/ holds an archive of the proceedings dating back to the 23rd conference held at Purdue in 1987; the 22nd conference proceedings, from the 1986 conference held at the University of Florida, represent the first published proceedings but are not archived by the ASC. All Journal articles dating back to volume one, published in 1996, are also in the archive. The total number of proceedings papers is 601 and the total number of journal articles is 137.

 

Procedure

 

This paper will rank the research output of construction programs by analyzing four samples from the population:

 

• Long-term (All)
  • The (International) Journal of Construction Education since 1996
  • The Annual Proceedings of the ASC since 1986
• Current (Last 5 Years)
  • The (International) Journal of Construction Education from 2001-2005
  • The Annual Proceedings of the ASC from 2001-2005

 

Scoring

 

Each article or paper in the population is assigned one point. If the authors are all from the same university, the whole point goes to that university. If there are authors from different universities, the one point is apportioned among the universities in proportion to their author counts. For example, the paper entitled Distance Education with Internet2 Audio/Video Technology published in Volume 1.1 (Summer 2004) of the International Journal of Construction Education has three authors from the University of Nebraska, Lincoln and one author from Colorado State University; therefore 0.75 points would be assigned to the University of Nebraska, Lincoln and 0.25 points to Colorado State University. C-schools are initially ranked by publication counts within the sampled grouping of the total population.
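
A short sketch of this scoring rule in Python (the abbreviated school names are illustrative): each paper contributes one point, split by each university’s share of the paper’s authors.

```python
from collections import defaultdict

def score_paper(points, author_affiliations):
    """Add one point for a paper, apportioned by each university's share of authors."""
    share = 1.0 / len(author_affiliations)
    for university in author_affiliations:
        points[university] += share

# Example from the text: three University of Nebraska, Lincoln authors and
# one Colorado State University author on a single paper.
points = defaultdict(float)
score_paper(points, ["UNL", "UNL", "UNL", "CSU"])
print(dict(points))  # {'UNL': 0.75, 'CSU': 0.25}
```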

 

There are three threats identified that could have a direct effect upon the calculation of the final ranking. These threats’ scores are treated as cumulative steps in achieving a final score and rank; each weighting step utilizes an assigned value and a calculated score leading to the next threat category. The first is membership: start and stop dates were gathered from the 2001 ASC Database and from the current programs listed on the ASC Website, and length of membership was then adjusted to the length of the period being investigated. If a program is not an active member of the ASC its score is zero, and if it has not been a member for at least 75% of the evaluation period its count is multiplied by 0.75. This iteration assures that all active member programs’ articles or papers, given an established publication track record, are equally measured by increasing the count to match that of full-period members, e.g., IF(Membership Years=0, 0, IF(Membership Years<75% of Sample Period, Publication Count*0.75, ((1-(Membership Years/Years of Sample Period))+1)*Publication Count)).
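
A direct Python rendering of that spreadsheet logic, with hypothetical argument names, may make the step easier to follow:

```python
def membership_adjusted_count(pub_count, member_years, period_years):
    """Membership-threat adjustment, following the spreadsheet formula above."""
    if member_years == 0:
        return 0.0  # not an active ASC member: score is zero
    if member_years < 0.75 * period_years:
        return pub_count * 0.75  # member for under 75% of the period
    # Scale partial-period members up so their counts match full-period members.
    return ((1 - member_years / period_years) + 1) * pub_count

# A member for 16 of the 20 sampled years has its count scaled up by 1.2;
# a full-period member's count is unchanged.
print(membership_adjusted_count(10.0, 16, 20))  # 12.0
print(membership_adjusted_count(10.0, 20, 20))  # 10.0
```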

 

The Carnegie classifications were taken from the ASC Graduate Education web site. It is the opinion of these authors that a Tier 1 institution’s faculty are expected to publish, that Tier 5 faculty are not necessarily required to publish, but that all faculty should be publishing. If Tier 5 faculty publish, they are exceeding their employment requirements and should therefore be rewarded above Tier 1 faculty. A multiplier was therefore applied to the article or paper count to effect this reward. There are seven tiers within the Carnegie classification system; Tier 7 was given a 1.0 multiplier, and the multiplier was reduced 0.05 per tier thereafter, giving a Tier 1 school a 0.75 multiplier, e.g., (Carnegie Classification Value*Membership Score).
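
A sketch of that multiplier in Python. Note that a strict 0.05-per-tier reduction from Tier 7 would give Tier 1 a 0.70 multiplier, so this sketch assumes a 0.75 floor to reproduce the Tier 1 value stated above and the values appearing in the tables:

```python
def carnegie_multiplier(tier):
    """Tier 7 earns 1.0 and each lower tier loses 0.05; the 0.75 floor is an
    assumption made to match the Tier 1 value reported in the text."""
    return max(0.75, 1.0 - 0.05 * (7 - tier))

def carnegie_score(membership_score, tier):
    # (Carnegie Classification Value * Membership Score), per the text.
    return carnegie_multiplier(tier) * membership_score

print(carnegie_multiplier(7))  # 1.0
print(carnegie_multiplier(1))  # 0.75
```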

 

The faculty count data was gathered from the 1988-1989 ASC Institutional Survey and the 1995 (Williamson, 1995) and 2001 (Williamson, 2001) ASC Databases. To establish the 2005 tenured and tenure-track faculty counts, an Internet search of all represented programs was conducted; because the ASC web site does not provide program faculty rank data, the Internet search was necessary. Prior ASC surveys included only tenured and tenure-track faculty counts, so the search provided consistency in faculty counts. Faculty counts were averaged over the time period being investigated. This equally weights publication production against program faculty counts, e.g., (Carnegie Score/Average Faculty Count for Sample Period).

 

Because there is evidence of marked non-normality, we remedy this problem by applying a suitable data transformation: smoothing the criteria with a statistical weighting scheme to calculate the final equalized score. An equal-weight adjustment factor for article or paper productivity can be calculated directly by dividing estimated article or paper production by the criterion’s standard deviation (NIST/SEMATECH, 2006), e.g., (Faculty Score/STDEV(Total Score)). This method is similar to the Daniell window, a weighted moving average transformation used to smooth criteria values. That transformation amounts to a simple (equal-weight) moving average of the criteria values; that is, each criterion estimate is computed as the mean of the preceding and subsequent criteria values, as suggested by Bloomfield (2000).
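
Putting the last two steps together, a sketch with hypothetical Carnegie scores and faculty counts (the school names and values are illustrative, not taken from the tables):

```python
from statistics import stdev

# Hypothetical Carnegie scores and average faculty counts for three programs.
carnegie_scores = {"School A": 18.0, "School B": 9.0, "School C": 4.5}
avg_faculty = {"School A": 15.0, "School B": 9.0, "School C": 3.5}

# Faculty step: publication production per faculty member over the period.
faculty_scores = {s: carnegie_scores[s] / avg_faculty[s] for s in carnegie_scores}

# Final equalized score: divide each faculty score by the standard deviation
# of the scores, e.g. (Faculty Score / STDEV(Total Score)).
sd = stdev(faculty_scores.values())
final_scores = {s: v / sd for s, v in faculty_scores.items()}

# Rank the schools by their equalized scores, highest first.
for rank, (school, score) in enumerate(
        sorted(final_scores.items(), key=lambda kv: -kv[1]), start=1):
    print(rank, school, round(score, 3))
```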

 

 

Results

 

The (International) Journal of Construction Education since 1996

 

Since its inception in 1996 there have been 137 articles published in the (International) Journal of Construction Education. The 137 articles are accounted for by 38 institutions, 33 of which are ASC members. The current ASC web site identifies 105 member institutions, so 31% of the ASC membership is represented in the Journal. Table 1 shows member institutions by article count, years of membership, Carnegie classification, average faculty count, total equalized points scored and rank for the JCE since 1996.

 

Table 1

 

Long-term Ranking of C-schools within the (International) Journal of Construction Education since 1996

C-school | Articles: Count, Rank | Membership: Value, Score | Carnegie: Value, Score | Faculty: Value, Score | Totals: Score, Rank
University of North Florida | 6.00, 7 | 10, 6.00 | 0.80, 4.80 | 3.67, 1.31 | 4.085, 1
Texas A&M University | 24.00, 1 | 10, 24.00 | 0.75, 18.00 | 15.33, 1.17 | 3.667, 2
Colorado State University | 13.00, 2 | 10, 13.00 | 0.75, 9.75 | 9.33, 1.05 | 3.264, 3
Virginia Institute of Technology | 6.80, 6 | 10, 6.80 | 0.75, 5.10 | 5.67, 0.90 | 2.809, 4
University of Nebraska, Lincoln | 5.80, 8 | 10, 5.80 | 0.75, 4.35 | 5.33, 0.82 | 2.549, 5
Illinois State University | 3.00, 12 | 10, 3.00 | 0.75, 2.25 | 3.00, 0.75 | 2.342, 6
University of Oklahoma | 3.50, 11 | 10, 3.50 | 0.75, 2.63 | 4.00, 0.66 | 2.050, 7
Brigham Young University | 5.00, 10 | 10, 5.00 | 0.75, 3.75 | 6.67, 0.56 | 1.756, 8
Arizona State University | 7.50, 4 | 10, 7.50 | 0.75, 5.63 | 10.33, 0.54 | 1.701, 9
Bowling Green State University | 2.00, 15 | 10, 2.00 | 0.75, 1.50 | 3.00, 0.50 | 1.562, 10
Purdue University - BCM | 9.50, 3 | 10, 9.50 | 0.75, 7.13 | 14.67, 0.49 | 1.517, 11
Bradley University | 5.50, 9 | 10, 5.50 | 0.80, 4.40 | 9.67, 0.46 | 1.421, 12
Southern Polytechnic State University | 2.70, 14 | 10, 2.70 | 0.95, 2.57 | 5.67, 0.45 | 1.413, 13
University of Florida | 7.00, 5 | 10, 7.00 | 0.75, 5.25 | 12.33, 0.43 | 1.330, 14
University of Nevada, Las Vegas | 2.00, 15 | 10, 2.00 | 0.75, 1.50 | 3.67, 0.41 | 1.277, 15
Northern Arizona University | 2.00, 15 | 10, 2.00 | 0.75, 1.50 | 4.33, 0.35 | 1.082, 16
Florida International University | 2.80, 13 | 10, 2.80 | 0.75, 2.10 | 7.33, 0.29 | 0.895, 17
University of Louisiana at Monroe | 2.00, 15 | 10, 2.00 | 0.80, 1.60 | 5.67, 0.28 | 0.881, 18
University of Cincinnati | 3.00, 12 | 10, 3.00 | 0.75, 2.25 | 9.00, 0.25 | 0.781, 19
University of Washington | 2.00, 15 | 10, 2.00 | 0.75, 1.50 | 7.33, 0.20 | 0.639, 20
Auburn University | 3.00, 12 | 10, 3.00 | 0.75, 2.25 | 14.33, 0.16 | 0.490, 21

 

The (International) Journal of Construction Education from 2001-2005

 

Over the past five years, 42 articles were published in the (International) Journal of Construction Education. The 42 articles are accounted for by 22 institutions, 19 of which are currently ASC members; therefore 18% of the ASC membership is represented in the Journal. Table 2 shows member institutions by article count, years of membership, Carnegie classification, average faculty count, total equalized points scored and rank for the JCE from 2001-2005.

 

Table 2

 

Current Ranking of C-schools within the (International) Journal of Construction Education 2001- 2005

C-school | Articles: Count, Rank | Membership: Value, Score | Carnegie: Value, Score | Faculty: Value, Score | Totals: Score, Rank
University of North Florida | 4.00, 2 | 5, 4.00 | 0.80, 3.20 | 4.00, 0.80 | 3.643, 1
Texas A&M University | 13.00, 1 | 5, 13.00 | 0.75, 9.75 | 15.50, 0.63 | 2.864, 2
University of Alaska | 1.00, 6 | 1, 0.75 | 0.80, 0.60 | 1.00, 0.60 | 2.732, 3
Brigham Young University | 3.00, 3 | 5, 3.00 | 0.75, 2.25 | 7.50, 0.30 | 1.366, 4
Bowling Green State University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 3.00, 0.25 | 1.138, 5
Illinois State University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 3.50, 0.21 | 0.976, 6
University of Florida | 2.50, 4 | 5, 2.50 | 0.75, 1.88 | 12.50, 0.15 | 0.683, 7
Auburn University | 3.00, 3 | 5, 3.00 | 0.75, 2.25 | 15.50, 0.15 | 0.661, 8
Michigan State University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 5.50, 0.14 | 0.621, 9
Northern Arizona University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 6.00, 0.13 | 0.569, 10
Indiana State University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 6.50, 0.12 | 0.525, 11
Virginia Institute of Technology | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 6.50, 0.12 | 0.525, 11
Louisiana State University | 1.00, 6 | 1, 0.75 | 0.75, 0.56 | 5.50, 0.10 | 0.466, 12
University of Washington | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 8.00, 0.09 | 0.427, 13
Colorado State University | 1.25, 5 | 5, 1.25 | 0.75, 0.94 | 10.50, 0.09 | 0.407, 14
Bradley University | 1.00, 6 | 5, 1.00 | 0.80, 0.80 | 11.00, 0.07 | 0.331, 15
Arizona State University | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 12.00, 0.06 | 0.285, 16
Purdue University - BCM | 1.00, 6 | 5, 1.00 | 0.75, 0.75 | 14.50, 0.05 | 0.236, 17
Ohio State University - Wooster | 1.00, 6 | 0, 0.00 | 0.75, 0.00 | 1.00, 0.00 | 0.000, 18
University of Singapore | 1.00, 6 | 0, 0.00 | 0.75, 0.00 | 1.00, 0.00 | 0.000, 18

 

The Proceedings of the Annual Conference of the Associated Schools of Construction from 1986-2005

 

The Proceedings Archive on the ASC website contains all the proceedings from the 23rd Annual Conference onwards; the authors also obtained a paper copy of the proceedings of the 22nd Conference. A total of 601 papers were presented at these 20 conferences. The 601 papers are accounted for by 76 institutions, of which 60 are current ASC members; therefore 57% of the ASC membership is represented in the Proceedings. Table 3 shows member institutions by paper count, years of membership, Carnegie classification, average faculty count, total equalized points scored and rank for the Proceedings of the Annual Conference since 1986.

 

Table 3

 

Long-term Ranking of C-schools within the Proceedings of the Annual Conference of the Associated Schools of Construction from 1986-2005

C-school | Papers: Count, Rank | Membership: Value, Score | Carnegie: Value, Score | Faculty: Value, Score | Totals: Score, Rank
Arizona State University | 46.00, 2 | 20, 46.00 | 0.75, 34.50 | 9.50, 3.63 | 4.275, 1
Brigham Young University | 27.00, 6 | 20, 27.00 | 0.75, 20.25 | 6.00, 3.38 | 3.973, 2
University of Nevada, Las Vegas | 14.00, 12 | 18, 15.40 | 0.75, 11.55 | 3.50, 3.30 | 3.885, 3
Texas A&M University | 62.00, 1 | 20, 62.00 | 0.75, 46.50 | 14.50, 3.21 | 3.775, 4
Southern Illinois University, Edwardsville | 15.00, 11 | 20, 15.00 | 0.80, 12.00 | 4.00, 3.00 | 3.532, 5
University of Oklahoma | 15.00, 11 | 20, 15.00 | 0.75, 11.25 | 3.75, 3.00 | 3.532, 5
Virginia Institute of Technology | 17.00, 9 | 17, 19.55 | 0.75, 14.66 | 5.00, 2.93 | 3.452, 6
Colorado State University | 31.00, 5 | 20, 31.00 | 0.75, 23.25 | 8.75, 2.66 | 3.128, 7
University of Nebraska, Lincoln | 17.00, 9 | 20, 17.00 | 0.75, 12.75 | 5.50, 2.32 | 2.729, 8
Purdue University - BCM | 43.00, 3 | 20, 43.00 | 0.75, 32.25 | 14.75, 2.19 | 2.574, 9
Auburn University | 40.00, 4 | 20, 40.00 | 0.75, 30.00 | 14.25, 2.11 | 2.478, 10
Bradley University | 21.00, 8 | 20, 21.00 | 0.80, 16.80 | 8.00, 2.10 | 2.472, 11
Southern Polytechnic State University | 9.00, 14 | 17, 10.35 | 0.95, 9.83 | 5.00, 1.97 | 2.315, 12
Clemson University | 16.00, 10 | 20, 16.00 | 0.75, 12.00 | 7.50, 1.60 | 1.884, 13
University of Florida | 23.00, 7 | 20, 23.00 | 0.75, 17.25 | 11.00, 1.57 | 1.846, 14
University of North Florida | 9.00, 14 | 11, 6.75 | 0.80, 5.40 | 3.67, 1.47 | 1.732, 15
University of Cincinnati | 17.00, 9 | 20, 17.00 | 0.75, 12.75 | 8.75, 1.46 | 1.715, 16
University of Southern Mississippi | 9.00, 14 | 20, 9.00 | 0.75, 6.75 | 5.25, 1.29 | 1.514, 17
California Polytechnic State, San Luis Obispo | 10.00, 13 | 17, 11.50 | 0.80, 9.20 | 7.25, 1.27 | 1.494, 18
Florida International University | 9.00, 14 | 20, 9.00 | 0.75, 6.75 | 7.00, 0.96 | 1.135, 19
East Carolina University | 10.00, 13 | 20, 10.00 | 0.75, 7.50 | 8.00, 0.94 | 1.104, 20

 

The Proceedings of the Annual Conference of the Associated Schools of Construction from 2001-2005

 

A total of 191 papers were presented at the last five ASC conferences. The 191 papers are accounted for by 47 institutions, of which 40 are current ASC members; therefore 38% of the ASC membership is represented in the Proceedings for the last five conferences. Table 4 shows member institutions by paper count, years of membership, Carnegie classification, average faculty count, total equalized points scored and rank for the Proceedings of the Annual Conference for the last five conferences.

 

Table 4

 

Current Ranking of C-schools within the Proceedings of the Annual Conference of the Associated Schools of Construction from 2001-2005

C-school | Papers: Count, Rank | Membership: Value, Score | Carnegie: Value, Score | Faculty: Value, Score | Totals: Score, Rank
Brigham Young University | 13.00, 3 | 5.0, 13.00 | 0.75, 9.75 | 7.50, 1.30 | 3.804, 1
Arizona State University | 20.00, 2 | 5.0, 20.00 | 0.75, 15.00 | 12.00, 1.25 | 3.658, 2
Southern Illinois University, Edwardsville | 6.00, 6 | 5.0, 6.00 | 0.80, 4.80 | 4.00, 1.20 | 3.512, 3
Texas A&M University | 22.00, 1 | 5.0, 22.00 | 0.75, 16.50 | 15.50, 1.06 | 3.115, 4
Auburn University | 20.00, 2 | 5.0, 20.00 | 0.75, 15.00 | 15.50, 0.97 | 2.832, 5
University of North Florida | 4.00, 8 | 5.0, 4.00 | 0.80, 3.20 | 4.00, 0.80 | 2.341, 6
Clemson University | 8.00, 4 | 5.0, 8.00 | 0.75, 6.00 | 8.00, 0.75 | 2.195, 7
University of Nevada, Las Vegas | 4.00, 8 | 5.0, 4.00 | 0.75, 3.00 | 4.00, 0.75 | 2.195, 7
Southern Polytechnic State University | 5.00, 7 | 5.0, 5.00 | 0.95, 4.75 | 6.50, 0.73 | 2.139, 8
Virginia Institute of Technology | 6.00, 6 | 5.0, 6.00 | 0.75, 4.50 | 6.50, 0.69 | 2.026, 9
University of Southern Mississippi | 4.00, 8 | 5.0, 4.00 | 0.75, 3.00 | 5.00, 0.60 | 1.756, 10
California Polytechnic State, San Luis Obispo | 7.00, 5 | 5.0, 7.00 | 0.80, 5.60 | 9.50, 0.59 | 1.725, 11
University of Oklahoma | 3.00, 9 | 5.0, 3.00 | 0.75, 2.25 | 4.00, 0.56 | 1.646, 12
University of Arkansas, Little Rock | 3.00, 9 | 5.0, 3.00 | 0.75, 2.25 | 4.50, 0.50 | 1.463, 13
University of Maryland Eastern Shore | 3.00, 9 | 5.0, 3.00 | 0.80, 2.40 | 5.00, 0.48 | 1.405, 14
Bradley University | 6.00, 6 | 5.0, 6.00 | 0.80, 4.80 | 11.00, 0.44 | 1.277, 15
Colorado State University | 6.00, 6 | 5.0, 6.00 | 0.75, 4.50 | 10.50, 0.43 | 1.254, 16
Northern Arizona University | 3.00, 9 | 5.0, 3.00 | 0.75, 2.25 | 6.00, 0.38 | 1.097, 17
Indiana State University | 3.00, 9 | 5.0, 3.00 | 0.75, 2.25 | 6.50, 0.35 | 1.013, 18
University of Cincinnati | 4.00, 8 | 5.0, 4.00 | 0.75, 3.00 | 9.00, 0.33 | 0.975, 19
Florida International University | 3.00, 9 | 5.0, 3.00 | 0.75, 2.25 | 7.50, 0.30 | 0.878, 20
Purdue University - BCM | 5.00, 7 | 5.0, 5.00 | 0.75, 3.75 | 14.50, 0.26 | 0.757, 21
University of Florida | 4.00, 8 | 5.0, 4.00 | 0.75, 3.00 | 12.50, 0.24 | 0.702, 22
Florida State University | 3.00, 9 | 0.0, 0.00 | 0.75, 0.00 | 16.00, 0.00 | 0.000, 23

 

 

Discussion

 

The authors have proposed a more scientific and statistically valid method of measuring research output using the publications of the ASC, unlike the conclusion drawn by Badger and Smith: “The program ranking system must be complicated enough to seem [or “appear”] scientific and the results must match, more or less, people's nonscientific prejudices.” We believe that a top-ranking system can be designed scientifically to eliminate end-user prejudices. The initial article and paper count results show that Texas A&M University is ranked number one in all four samples: it would clearly represent the top-ranked institution in the Journal both since its inception and over the past five years, and it would be the highest ranked in the Proceedings since 1986. However, when the equity weights are systematically applied in reaction to the validity threats, a different picture arises. We found that in all samples we were splitting hairs between programs, as illustrated in Table 5. Across all samples, differences totaled only 1.68 (SD 0.22) articles or papers between the top five programs, only 3.32 (SD 0.43) articles or papers separated the top and bottom programs, and the standard deviations were exceptionally small.

 

Table 5

 

Summary of Equally Weighted Scoring of ASC C-schools’ Publication Efforts

Scores | Journal Long-term: Score, SD | Journal Current: Score, SD | Proceedings Long-term: Score, SD | Proceedings Current: Score, SD | Total Difference: Score, SD
Top-Five | 1.54, 0.20 | 2.96, 0.24 | 0.74, 0.23 | 1.46, 0.19 | 1.68, 0.22
All | 3.59, 0.32 | 3.41, 0.22 | 3.17, 0.85 | 3.10, 0.32 | 3.32, 0.43

 

The University of North Florida (average rank 1) is the highest ranked long-term Journal C-school, and Arizona State University (average rank 1.5) is the highest ranked long-term Proceedings C-school. However, the long-term publication winner is Texas A&M University (average rank 3), in that it is the only C-school represented in both the Journal and Proceedings top-five rankings, which indicates consistency in long-term Journal and Proceedings publication effort.

 

The University of North Florida is the highest ranked current Journal C-school, and Brigham Young University is the highest ranked current Proceedings C-school. However, the current publication winners are Brigham Young University (average rank 2.5) and Texas A&M University (average rank 3), in that they are the only C-schools represented in both the Journal and Proceedings top-five rankings, which indicates consistency in current Journal and Proceedings publication effort.

 

Which C-school deserves the highest ranking across both the long-term and current sample groups? Texas A&M University (average rank 3) ranked the highest by consistently appearing in all of the sample groupings; Brigham Young University (average rank 2) is the only other C-school ranked in two of the sampling groups. If the analysis holds true, then Texas A&M University has achieved the merit of being the top-ranked C-school. But remember, this is a ranking game.

 

The use of the ASC publications as the source of data for ranking the research output of C-schools has arguments for and against. The authors believe the main argument for using ASC publications is that there would be an increase in faculty submitting articles and papers to both the Journal and the Conference Proceedings if there were a formal C-school ranking system sponsored by the ASC. Those institutions that are competitive and seek to improve or maintain their ranking would encourage their faculty to publish in ASC publications, which would surely benefit the ASC and construction education as a whole. The main argument against is that the ASC publications are not the only venue of publication for ASC members, and it is possible that other publications could and should be included. The authors maintain that if “Construction” is to continue to solidify its discipline, it needs a core set of publications anchored in that discipline and not in the associated disciplines of Engineering and Architecture.

 

The authors also acknowledge that there are limitations to this method of measuring research output and that the proposed publication system would be of interest primarily to programs. That is not to say that students or employers would never find this publication measure interesting, but we believe it will not be of much significance to their unique value systems. Other or additional criteria could and should be utilized. For example, no score is awarded for the number of times a paper is cited; this could be included if the archive were fully searchable, and it could be used to give extra weight to papers that have made a significant contribution to the discipline, making it a positive weight criterion. An additional criterion could be whether the construction program has a true construction Master’s and/or Ph.D. as defined by the ASC; the assumption would be that faculty within these programs have improved opportunities for publication, so the equality weight for this measure would naturally need to be negative to normalize all programs.

 

The authors believe that the academic debate regarding C-school top-ranking is good and healthy for a construction discipline such as ours. This paper continues that debate and will hopefully promote similar scholarship. As with any profile or rank schema there will have to be a champion; we believe that the ASC is up to the task and that its membership can be that champion, but it must redefine its mission to re-include profiling its members.

 

 

References

 

ASC (1974). Construction education programs: Course and curriculum information. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

ASC (1989). ASC institutional survey. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Badger, W. W. (1989). ASC future direction paper. Unpublished manuscript, Associated Schools of Construction, University of Nebraska – Lincoln. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Badger, W. W., & Smith, J. C. (2005). Ranking construction programs: The academic debate begins. Proceedings of the Associated Schools of Construction's Annual Conference, 41, Cincinnati, Ohio. Retrieved December 27, 2005, from the ASC Proceedings web site: http://www.asceditor.usm.edu/archives/2005/CEGT16_5800_Badger05.htm

 

Bloomfield, P. (2000). Fourier analysis of time series: an introduction (2nd ed.). New York: John Wiley & Sons, Inc.

 

Davis, K. (1988). ASC textbook survey. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

NIST/SEMATECH (2006). e-Handbook of statistical methods. Retrieved December 27, 2005, from the NIST web site: http://www.itl.nist.gov/div898/handbook/

 

Rodger, J. A. (1982). Minutes of the 18th general meeting of the Associated Schools of Construction. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Rosenbaum, D. B., Rubin, D. K., & Powers, M. B. (2001). The nation’s C-schools. Engineering News Record, New York: McGraw-Hill, 26-37.

 

Rounds, J. L. (1986). Proceedings of the 22nd Annual Conference of the Associated Schools of Construction. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Schuette, S. D. (1995). ASC salary survey. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Stake, J. E. (2005). How to protect from ranking-mania. Retrieved December 29, 2005, from Indiana University School of Law - Bloomington’s web site: http://monoborg.law.indiana.edu/LawRank/rankingmania.shtml

 

Williamson III, K. C. (1995). [ASC electronic membership database]. Unpublished raw data. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)

 

Williamson III, K. C. (2001). [ASC electronic membership database]. Unpublished raw data. (Available from K. C. Williamson III, Ph.D., Department of Construction Science, Texas A&M University, College Station, Texas, 77845-3137)