
SCHEDULED OVERTIME AND LABOR PRODUCTIVITY: QUANTITATIVE ANALYSIS

By H. Randolph Thomas1 and Karl A. Raynar2. Note. Discussions open until November 1, 1997. To extend the closing date one month, a written request must be filed with the ASCE Manager of Journals. The manuscript for this paper was submitted for review and possible publication on April 29, 1996. This paper is part of the Journal of Construction Engineering and Management, Vol. 123, No. 2, June, 1997. ©ASCE, ISSN 0733-9364/97/0002-0181-0188/$4.00 + $.50 per page. Paper No. 13135.
1Prof. of Civ. Engrg., Pennsylvania Transp. Inst., Pennsylvania State Univ., 203 Res. Ofc. Build., University Park, PA 16802.

2Res. Assoc., Pennsylvania Transp. Inst., Pennsylvania State Univ., 106 Res. Ofc. Build., University Park, PA.

Abstract

This paper describes a study of 121 weeks of labor productivity data from four industrial projects. The objective is to quantify the effects of scheduled overtime. First, it describes how the data were collected, processed, and analyzed. The results show losses of efficiency of 10-15% for 50- and 60-h work weeks. The results compare favorably to other published data, including the Business Roundtable (BRT) curves. Therefore, it was concluded that the BRT curve is a reasonable estimate of losses that may occur on average industrial projects. Second, this paper addresses the reasons for efficiency losses. For this analysis, disruptions in three categories (resource deficiencies, rework, and management deficiencies) were analyzed. The analyses showed that the disruption frequency, which is the number of disruptions per 100 work hours, worsened as more days per week were worked. This led to the conclusion that losses of efficiency are caused by the inability to provide materials, tools, equipment, and information at an accelerated rate.

INTRODUCTION

Scheduled overtime has been the subject of controversy since the Business Roundtable (BRT) published its overtime study in the early 1970s. The study was reissued in 1983 as part of the Construction Industry Cost Effectiveness project ("Scheduled" 1980). Some argue that scheduled overtime can be used without losing labor efficiency [Construction Industry Institute (CII) 1988], and others argue that when an overtime schedule is applied, labor efficiency automatically suffers. There are numerous disagreements about the extent of inefficiencies and misunderstandings regarding how overtime schedules affect labor output.

OBJECTIVES

The objectives of the present paper are to detail the results of a comprehensive study to measure the effects of scheduled overtime on construction labor efficiency and to define the relationship between scheduled overtime and various types of disruptions. The objectives are, in other words, to document how much loss of productivity one can expect and to examine why inefficiencies occur. The emphasis in this paper is on labor work hours rather than costs.

DEFINITIONS

In this paper, the term "scheduled overtime" refers to a planned decision by project management to accelerate the progress of the work by scheduling more than 40 work hours per week, for an extended period of time, for much of the craft work force. This term is in contrast to "spot overtime," which is applied sporadically for a limited number of workers. A "disruption" is an event that is known, or has been reported in the literature, to adversely affect labor productivity. Examples include lack of materials, lack of tools or equipment, congestion, and accidents. "Efficiency" is the relative loss of productivity compared to some baseline period. A value less than unity means performance is worse than during the baseline period. "Labor productivity" is the work hours expended during a specified time frame divided by the quantities installed. The time frame can be daily, weekly, or the entire project (cumulative). This measure is commonly called the unit rate.

BACKGROUND

A comprehensive review of the literature related to scheduled overtime has been published by Thomas (1992). That review found the literature to be sparse, dated to the late 1960s and earlier, based on small sample sizes, and largely developed from questionable data sources. While there appear to be a number of sources, this is an illusion because many of the articles and publications quote other sources while providing no new data or insight. Where the data source is known, other pertinent information, such as the environmental conditions, the quality of management and supervision, and the labor situation, is unknown. The various graphs and data that have been published serve to suggest an upper bound on the losses of efficiency that might be expected. The literature offers no guidance as to what circumstances may lead to losses of efficiency. With respect to loss of efficiency as a function of time, very few articles or reports show how efficiency deteriorates over long periods of time.

HOW OVERTIME AFFECTS LABOR PRODUCTIVITY

A detailed representation of the factor model is shown in Fig. 1. The model shows that the conversion of inputs (work hours) to outputs (quantities) is a function of the work method or conversion technology. Various factors affect the efficiency with which inputs are converted to outputs. These impediments are divided into two categories: the work to be done and the work environment. The work to be done refers to the physical components of the work. The work environment portion shows 10 variables that can be influential. These are the root causes of loss of efficiency. While there can be many other factors, these 10 are the most common. These factors impede or enhance the efficiency with which inputs (work hours) are converted to outputs (quantities).

Overtime is an indirect factor that causes disruptions in the work environment. In extreme cases overtime can contribute to ripple effects. The view taken here is that overtime itself (exclusive of fatigue) does not lead to productivity losses. If it did, the losses would be automatic, which most professionals agree is not the case. Instead, a scheduled overtime situation causes other variables to be activated. Consider a situation where project management decides to go from a work week consisting of four 10-h days to six 10-h days. The labor component is thus increased by 50%. What else happens? Does the work get finished 50% faster? To function efficiently, the entire system must respond to the increase in work hours. Materials must be made available 50% faster; equipment will be used 50% more; and the project staff must respond to 50% more questions. Everything is accelerated. If a project is behind schedule because of one or more of the work environment factors in Fig. 1, an overtime schedule will only make matters worse. It is this theory, the causal link between overtime and disruptions, that is examined in this paper.

OVERALL ASPECTS OF STUDY

The study had several unique aspects that differ from most previous studies (CII 1994). These differences are summarized in the following. The smallest manpower unit that produces completed output is the crew; therefore, the focus is on an average crew. The study includes crews of electricians and/or pipe fitters from four active construction projects. The work of the crews involves bulk installations only, such as cable, conduit, and piping. Since the stage of construction can affect labor productivity, the study specifically excluded the early phase of the work and the startup phase. Most previous studies have relied on cumulative productivity data. In this study, unit data are summarized daily and weekly; cumulative data are also used.

DATA COLLECTION PHASE

Define Study Parameters

In this paper, only the electrical and piping crafts are studied. The rationale is that these crafts represent the majority of the work that is most likely to be affected by scheduled overtime. The work performed by these crafts was further narrowed to crews performing production-related work. For electricians, the production-related work studied was the installation of conduit, cable and wire, terminations and splices, and junction boxes. For piping, the work studied was pipe erection and the installation of supports and valves. Crews performing other kinds of work were not considered for study. Project selection is also an important element for removing other potential influences. The labor environment should be tranquil, and there should not be an inordinate number of changes. Experimental, unique, or poorly managed projects should be avoided. In this study, each of these criteria was met. None of the projects in the study experienced labor problems, jurisdictional disputes, labor shortages, or other factors that may have influenced the results. The study duration was sufficient to include a straight-time and an overtime schedule. The performance of a crew on an overtime schedule was compared to the same crew on a straight-time schedule. A target duration of 14 weeks was planned; the actual durations on the four projects studied ranged from eight to 16 weeks.

Project Descriptions

In this study, productivity data were collected from four active construction projects as shown in Table 1.
There were a total of 151 weeks of data. The projects were constructed in the 1989-92 time frame. Each was constructed in a tranquil labor environment and was well managed. None experienced any unusual difficulties that would have caused progress to fall behind schedule. Each project was completed in a timely fashion. The overtime schedule was used to maintain schedule, not to attract labor. The manufacturing and paper mill

projects were existing facilities where old systems and equipment were removed and new ones installed. Congestion was a concern in each facility. The process plant was a spacious, outdoor, grassroots facility; and the refinery involved the rebuilding of parts of an existing facility. With respect to owner involvement, design, and construction management, the four projects were considered average industrial projects.

Procedures

The data collection effort was independent of the cost reporting system. A procedures manual was developed for this purpose (Thomas and Rounds 1991). Site personnel collected the data. The philosophy and evolution of the procedures manual are explained elsewhere (Thomas et al. 1989). The data collection effort was organized around the completion of eight forms. Seven forms were completed daily. The forms solicited information about the work hours, crew size, absenteeism, the quantities installed, and the conditions in which the work was done. Selected information requested on each form is as follows:

1. Form number 1, manpower/labor pool: crew size, crew composition (skilled and unskilled), and absenteeism
2. Form number 2, quantity measurement: measured units completed for each subtask
3. Form number 3, design features/work content: work type and design details
4. Form number 4, environmental/site conditions: temperature, humidity, and weather events
5. Form number 5, management practices: delays, material and equipment availability, congestion, sequencing, and rework
6. Form number 6, construction methods: length of work day, overtime schedule, and working foreperson
7. Form number 7, project organization: size of project work force, other site support personnel, and number of forepersons
8. Form number 8, project features: type of project, approximate cost, and approximate planned duration

The type of data recorded was continuous, integer, and binary. An example of continuous data is the quantity of conduit, i.e., 22.8 m (74.6 ft). Integer data included the crew size, i.e., nine tradespeople. Binary variables take on values of 0 or 1, depending on whether a particular condition is present. For example, if a measurable portion of the work hours was affected by the lack of materials, a 1 would be recorded; if not, that variable would be recorded as 0. Although the data forms are more detailed than described here, every effort was made to streamline the data collection process. Following an initial familiarization period, data collection typically took about 30 min per crew per day.

DATA PROCESSING PHASE

The purpose of the data processing phase was to normalize the productivity data to the estimated daily

productivity had the crews been installing the same item of work, to screen the data for unusual peculiarities, and to relate performance to a baseline productivity measured when a straight-time schedule was being worked.

Calculate Conversion Factors

It is known that the installation of different sized components requires different labor resources. For example, a 101.6-mm (4-in.) conduit requires more work hours per foot to install than a 19.1-mm (0.75-in.) conduit. Differences such as these exist for all items included in the study. These differences are accounted for by using conversion factors. The logic is explained elsewhere and is summarized as follows (Thomas and Napolitan 1995). The first step is to define a standard item. In theory the choice of the item is irrelevant; in practice, it is usually an item that occurs frequently. In this study, the standard item for electrical work was 50.8-mm (2-in.) galvanized rigid steel (GRS) conduit, and for piping it was 63.5-mm (2 1/2-in.), schedule 40, butt-welded, carbon steel spools. In this investigation the estimate of conversion factors was based on unfactored unit rates that were obtained from standard estimating manuals. For electrical work the Means and Richardson manuals and a manual from a construction company were consulted. For piping work the Means, Richardson, and Page & Nations manuals and the manual from the same construction company were used. The use of multiple estimating manuals precludes the factors from being influenced by one source. Using the data from a single estimating manual, conversion factors for each item are calculated as
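The printed form of (1) is not legible in this copy; consistent with the surrounding definitions and the Table 2 example, it presumably takes the form

\[
CF_{ij} = \frac{UR_{ij}}{UR_{sj}}
\]

in which UR_{ij} is the unfactored unit rate for item i in manual j and UR_{sj} is the unit rate for the standard item s in the same manual,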

where i = item number; and j = manual number. Once conversion factor values have been calculated for all manuals and items, multiple regression techniques can be used to develop a mathematical relationship for each grouping of like items. Groups are for conduit, cable, pipe, valves, and so on. The group regression equation was used to estimate the conversion factor for each item in the group. In practice, the conversion factor shows how much more or less difficult an item is to install compared to the standard item (Sanders and Thomas 1990). The theory behind conversion factors is that of earned value. It can be easily verified that, irrespective of the mix of quantities installed, the conversion factor does not alter the hours earned in a given time frame. Conversion factors are analogous to monetary exchange rates. For example, a mix of marks, yen, and pounds can be exchanged for an equivalent amount (or value) of pounds or another currency such as dollars. The utility of the conversion factor approach is that crews doing a variety of work can have their output expressed as an equivalent output of a single standard item. Thus, the productivity of all crews can be calculated for the same standard item during each time period regardless of the work performed. Likewise, crews from different projects can have their productivities calculated for the standard item, meaning that the data from multiple projects can be combined into a single database because all the productivity values represent installing the same item of work. To illustrate how the conversion factors are calculated, consider the items listed in Table 2. The standard item is 50.8-mm (2-in.) GRS conduit. The conversion factors in the last column are calculated using (1), where the unit rate for the standard item is 0.584 work hours/m (0.178 work hours/ft).
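As an illustration of the mechanics only, the short sketch below computes per-manual conversion factors in the presumed form of (1) and then pools them with a simple average; the unit rates and manual names are invented, and the study itself pooled the factors with group regression equations rather than a plain average.

```python
# Hypothetical illustration of the presumed form of (1).
# Unit rates (work hours per metre) are invented; the study used unfactored
# rates from the Means, Richardson, and other estimating manuals.
unit_rates = {
    "manual_A": {"GRS 50.8 mm": 0.584, "GRS 19.1 mm": 0.30, "GRS 101.6 mm": 1.15},
    "manual_B": {"GRS 50.8 mm": 0.600, "GRS 19.1 mm": 0.28, "GRS 101.6 mm": 1.20},
}
STANDARD = "GRS 50.8 mm"  # standard item for electrical work

# Per-manual conversion factors: item unit rate / standard-item unit rate
conversion_factors = {
    manual: {item: rate / rates[STANDARD] for item, rate in rates.items()}
    for manual, rates in unit_rates.items()
}

# The study fit a regression for each group of like items; a plain average
# across manuals is used here only as a placeholder for that step.
pooled = {
    item: sum(conversion_factors[m][item] for m in unit_rates) / len(unit_rates)
    for item in unit_rates["manual_A"]
}
print(pooled)  # the standard item pools to 1.0 by construction
```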

Calculate Equivalent Quantities

The equivalent quantities are the number of units of the standard item that will yield the same number of earned hours as was actually earned by installing nonstandard items. Practically speaking, it is the most likely estimate of the quantity of the standard item that would have been completed for the same set of work conditions. The equivalent quantity is calculated using (2)
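The printed form of (2) is likewise not legible; based on the definition above and the earned-value check that follows, it presumably takes the form

\[
EQ = \sum_{i=1}^{k} CF_i \, Q_i
\]

in which Q_i is the actual quantity of item i installed and CF_i is its conversion factor,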

where i = the item being installed; and k = total number of items installed during the work day. Suppose on a given day a crew installs the quantities listed in the first two columns of Table 3. The conversion factors in Table 3 are used in (2) to calculate the equivalent quantities. As shown, the crew did the equivalent of 61.0 m (200 ft) of 50.8-mm (2-in.) GRS conduit. The work hours earned are determined by multiplying the quantities installed by the unit rates from Table 2 (Thomas and Kramer 1987). For the actual installed quantities in Table 3, the crew earned 35.6 work hours. If the earned hours are calculated based on the equivalent quantity of 61.0 m (199.95 ft), the unit rate of 0.584 work hours/m (0.178 work hours/ft) for the standard item [50.8-mm (2-in.) GRS conduit] from Table 2 is used, and the earned work hours are again 35.6. Therefore, the value of the work in terms of earned hours is the same; it is simply expressed in a different way. If a different standard item is chosen, the earned work hours will still be 35.6.
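The invariance claimed above, namely that the earned hours do not depend on whether they are computed item by item or from the equivalent quantity of the standard item, can be checked with a small sketch; the quantities, unit rates, and conversion factors below are invented for illustration and are not the Table 3 values.

```python
# Hypothetical check of the presumed form of (2) and the earned-value property.
installed = [  # (item, quantity in metres, unit rate in wh/m, conversion factor)
    ("GRS 50.8 mm", 30.0, 0.584, 1.00),
    ("GRS 19.1 mm", 60.0, 0.292, 0.50),
    ("GRS 101.6 mm", 5.0, 1.168, 2.00),
]
STANDARD_UNIT_RATE = 0.584  # wh/m for the 50.8-mm (2-in.) GRS conduit standard item

# Earned hours computed item by item from each item's own unit rate
earned_direct = sum(qty * rate for _, qty, rate, _ in installed)

# Presumed Eq. (2): equivalent quantity of the standard item
equivalent_qty = sum(qty * cf for _, qty, _, cf in installed)
earned_via_standard = equivalent_qty * STANDARD_UNIT_RATE

# The two earned-hour figures agree regardless of the mix of items installed.
print(round(equivalent_qty, 1), round(earned_direct, 2), round(earned_via_standard, 2))
```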

Defining Baseline

To define the baseline, the nominal hours per week were used. For example, if the crew worked 37.5 h during the week, that was considered a 40-h week. One of the difficulties in examining overtime data is that, in practice, it can be difficult to identify a period of time where a straight-time schedule was used followed by an overtime schedule. Work schedules are affected by weather, and managers strive to ensure that workers have ample time away from the job. It is infrequent that one would see an extended overtime schedule lasting 10-12 weeks as presented in the BRT study (1980). Variations in work schedule make it difficult to define a baseline. Since there were no data for work weeks of five 8-h days, a schedule of four 10-h days was used as the baseline. In determining the weeks to use, consideration was given to consistency of work hours, crew size, and number of days worked per week. For the baseline weeks the work hours and quantities were determined. The baseline values were then calculated for each crew using the following equation:
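The printed equation is not legible here; given that productivity is defined as a unit rate, the baseline value presumably takes the form

\[
\text{baseline productivity} = \frac{\sum \text{work hours expended during the baseline weeks}}{\sum \text{equivalent quantities installed during the baseline weeks}}
\]

expressed in work hours per unit of the standard item.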

The calculated baseline values are summarized in Table 4.

Final Data Screening

When examining the weekly productivity values, one must be cognizant of outliers. However, simply removing extreme data points would be improper, since they are, to some extent, the focus of this study. Some initial difficulties with data collection were noted for projects 9,181, 9,183, and 9,185. Accordingly, the first week of data for each of these three projects has been discarded. This leaves a total of 148 weeks of data. All subsequent analyses have been performed on this reduced data set.

Calculate Performance Factors

For each data set, performance factors were calculated using (4)
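The printed form of (4) is not legible; because a performance factor greater than unity denotes better-than-baseline performance while productivity is a unit rate (work hours per quantity), (4) presumably takes the form

\[
PF = \frac{\text{baseline productivity}}{\text{actual productivity for the period}}
\]

computed from daily, weekly, or cumulative values as appropriate.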

A performance factor value greater than unity means that performance that week was better than the performance during the baseline period. The use of performance factors allows data sets from various sources to be combined. In this instance the 11 sets were combined, and all analyses were done on the performance factors.
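A minimal sketch of how performance factors in this presumed form work in practice is shown below: dividing each crew's baseline unit rate by its weekly unit rate yields a dimensionless value, so crews with very different baselines can be pooled into one database. The crews and numbers are hypothetical.

```python
# Hypothetical sketch of the presumed form of (4).
crews = {
    # crew id: (baseline unit rate in wh/m, {week number: weekly unit rate in wh/m})
    "electrical_crew_A": (0.55, {1: 0.56, 2: 0.61, 3: 0.70}),
    "piping_crew_B": (1.20, {1: 1.18, 2: 1.35, 3: 1.42}),
}

performance_factors = {
    crew: {week: round(baseline / weekly, 2) for week, weekly in weeks.items()}
    for crew, (baseline, weeks) in crews.items()
}
# PF > 1: the crew beat its own baseline; PF < 1: a loss of efficiency.
print(performance_factors)
```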

DATA ANALYSES: HOW MUCH

This section explains the results of the data analyses. It relies on daily, weekly, and cumulative performance factors. The approach used is to examine the influence of hours per day and then to perform other analyses to determine whether they support or contradict the initial investigation. There was insufficient dispersion in the data to investigate the influence of hours per day.

DAYS PER WEEK

The initial analysis was to determine the influence of days per week on labor performance using weekly performance factors. The analysis was done on work weeks of two, three, four, five, and six days. Work weeks shorter than four days usually were shortened because of bad weather. There was one seven-day work week, and it was discarded. The weekly performance factor values were analyzed to determine if there were changes in the performance factor that were correlated to the number of days worked per work week. The results of this analysis are listed in Table 5. The efficiency is calculated by dividing the average weekly performance factor by the average weekly performance factor for a 40-h (four-day) work week, or 0.98. The statistical significance of the results was evaluated using an analysis of variance (ANOVA) test. The level of significance, which ranges between 0.000 and 1.000, was calculated to be 0.046. If it is hypothesized that an independent variable produces statistically significant differences in a dependent variable, then the level of significance is the calculated α-value at which the null hypothesis Ho, that there is no difference, would be rejected (Devore 1991). In simpler terms, the level of significance is the maximum probability that chance or randomness produced the observed results when, in fact, the null hypothesis is true. The level of significance is also called the p-value; a value near 0.000 means a highly significant relationship. The use of the level of significance highlights the difference between the approaches of theoretical (or classical) and applied statistics. A brief discussion is provided in Appendix I.
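A compact sketch of this type of analysis follows: weekly performance factors grouped by days worked per week, efficiencies computed against the four-day average, and a one-way ANOVA level of significance. The values are invented and SciPy is assumed to be available; the sketch illustrates the method, not the study's numbers.

```python
# Sketch of the days-per-week analysis: efficiency relative to the four-day
# schedule and a one-way ANOVA level of significance (p-value).
from statistics import mean
from scipy import stats  # SciPy assumed available

pf_by_days = {
    4: [1.05, 0.95, 0.98, 1.00, 0.92],
    5: [0.90, 0.82, 0.95, 0.85, 0.88],
    6: [0.84, 0.93, 0.80, 0.88, 0.86],
}

four_day_avg = mean(pf_by_days[4])
for days, pfs in sorted(pf_by_days.items()):
    # Efficiency = average weekly PF for this schedule / average PF for the 4-day week
    print(f"{days} days/week: efficiency = {mean(pfs) / four_day_avg:.2f}")

f_ratio, p_value = stats.f_oneway(*pf_by_days.values())
print(f"F = {f_ratio:.2f}, level of significance (p-value) = {p_value:.3f}")
```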

The efficiencies for two-, three-, four-, five-, and six-day work weeks are shown in Fig. 2. The reduced efficiency for the two- and three-day work weeks was caused by bad weather. The five- and six-day work weeks are of particular interest. These schedules showed greater variability in performance factor values than the other schedules.

Synopsis of Initial Investigation

The initial investigation, based on 120 weeks of work, showed that there was, on average, about a 10-15% loss of productivity when working longer than a normal 40-h (four-day) work week. The loss of efficiency for five- and six-day work weeks (50- and 60-h work weeks) was about the same. The remaining analyses are an effort to support the initial determination that there are productivity losses when working an overtime schedule.

Overtime Duration

In examining performance as a function of the duration of the overtime schedule, cumulative performance factors were calculated and comparisons were made against the curves from the BRT study (1980). The comparisons are limited for two reasons. First, most crews worked an overtime schedule for three weeks or less, compared to the BRT curves, which extend for 12 weeks. Second, there was some inconsistency in the overtime schedule. For example, a crew may work five days one week, six days the next, and then return to a five-day work week. In examining the efficiency trends for 50- and 60-h weeks, it was evident that most crews follow the general downward trend established in the BRT study; however, not all crews follow this trend (BRT 1980). It may be possible that overtime schedules lasting three to four weeks or less can be used with minimal loss of efficiency; however, no other data from this study could be identified to support this conclusion. For longer overtime schedules, fatigue probably increases. Fig. 3 shows the average of all crews working a 50-h week, the BRT curve (1980), and the results from several references reporting overtime efficiency as a function of time (Adrian 1988; Haneiko and Henry 1991; Overtime 1989). From this analysis one concludes that the BRT curve is probably a good representation of the industry-average overtime efficiency, although individual work may vary.

Variations Caused by Schedule Changes

If overtime causes negative impacts, one would expect that when going from a straight-time schedule to an overtime schedule, most of the time there would be a decline in performance. That is, the performance factor would decrease. Likewise, when coming off of an overtime schedule, one might expect an increase in performance. This aspect was investigated by calculating the change in performance when there was a schedule change. This analysis showed considerable variability. The frequent changing of the schedule to and from overtime may be more detrimental than intuition may suggest. Subsequent research suggests that the change in schedule is more likely to be caused by variations in the workload (Thomas et al. 1995). Thus, frequent accelerations and decelerations are detrimental to efficiency.

DATA ANALYSES: WHY

The previous analyses investigated the effects of an overtime schedule on labor efficiency, i.e., how much is the impact. Negative effects were shown to have occurred. The analyses that follow investigate the question of why negative impacts occur. Understanding the "why" question is necessary for one to manage an overtime schedule.

Disruptive Events

Disruptions are defined as the occurrence of events that are known or have been reported in the literature to adversely affect labor productivity. In this analysis only the four-, five-, and six-day work weeks were evaluated, thus negating most of the weather disruptions that affected the results in Fig. 2. The rationale for ignoring weather disruptions is that they are unrelated to overtime schedules. The disruption types were organized into three categories as follows:

1. Resources: material availability, tool availability, equipment availability, and information availability
2. Rework: changes and rework
3. Management: congestion, out-of-sequence work, supervisory, and miscellaneous

If the factor model (see Fig. 1) is a valid representation of labor productivity, then one would expect to see more frequent occurrences of disruptions and a simultaneous worsening of productivity. Conversely, if there is no worsening of productivity, there should be little change in the frequency of occurrence of disruptions.

Relationship of Performance to Disruptions

To test the previous hypothesis, a statistical analysis was performed to assess the influence of disruptions on performance. The daily performance factor values for days with and without disruptions were compared using an analysis of variance test. It was found that the efficiency on days when disruptions occurred was reduced to an average of 73% of what it would have been if there had been no disruption. The level of significance was calculated as 0.098. From this analysis, the likelihood of randomly observing the differences in performance factor values for the subsets with and without disruptions is less than 10%, which may lead one to conclude that there is a causal relationship between lower performance and the presence of disruptions.

Relationship of Performance and Disruptions to Weekly Schedule

Weekly disruption frequencies were calculated using the following equation:
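The printed equation is not legible here; from the description that follows, the disruption frequency presumably takes the form

\[
DF = \frac{\text{number of disruptions during the week}}{\text{work hours expended during the week}} \times 100
\]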

The disruption frequency represents the number of weekly disruptions based on a 10-person crew working a 10-h day, or every 100 work hours. Thus, by using the disruption frequency, a shortened work week can be compared to a longer work week. The weekly disruption frequencies were averaged according to the number of days worked per week. The results are summarized in Table 6 and are shown in Fig. 4. As can be seen, as the work week lengthens, the disruption frequencies increase. The six-day work week involves weekend work, and the nature of the work being performed may explain the reduction in disruption frequency for the longer work week.
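The sketch below illustrates the calculation in the presumed form above and the averaging by days worked per week; the disruption counts and work hours are invented.

```python
# Hypothetical sketch of the presumed form of (5): disruptions per 100 work hours,
# averaged by the number of days worked per week.
from collections import defaultdict
from statistics import mean

weeks = [  # (days worked, disruptions recorded, crew work hours that week)
    (4, 3, 360), (4, 2, 400), (5, 6, 480), (5, 5, 500), (6, 8, 590), (6, 7, 610),
]

freq_by_days = defaultdict(list)
for days, disruptions, work_hours in weeks:
    freq_by_days[days].append(100.0 * disruptions / work_hours)  # per 100 work hours

for days in sorted(freq_by_days):
    print(f"{days} days/week: average disruption frequency = {mean(freq_by_days[days]):.2f}")
```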

Disruption by Type

The type of disruption was also analyzed. The data are summarized in Table 7 and are shown graphically in Fig. 5. The research showed that the number of disruptions caused by changes and rework varied with the days worked per week, with no consistent pattern. Management-related disruptions (congestion, out-of-sequence work, supervision, and miscellaneous) were more numerous for the five-day per week schedule than for the other schedules. Disruptions caused by lack of resources (materials, equipment, tools, and information) increased consistently with the number of days worked per week.

Impact of Disruptions

Disruption impacts were also assessed by calculating a disruption index, which is the ratio of the average performance factor on days when a specific type of disruption occurred to the average performance factor on days when no disruption occurred. These average daily values are summarized in Table 8. Only the most significant disruptions are shown. As can be seen, rework has the greatest impact on performance.
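Written out, the disruption index described above presumably takes the form

\[
DI_d = \frac{\overline{PF}_{\text{days with disruption } d}}{\overline{PF}_{\text{days with no disruptions}}}
\]

so that values well below unity indicate the disruption types with the largest impact.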

Comparison to Previous Studies

Most of the previous studies show single efficiency values for a particular work schedule (Thomas 1992). In many cases, these values show greater losses of efficiency than the values found in this study. From reading the reports and articles, one learns little or nothing about the origin of the data, and except for one or two studies, no differentiation is made between short- and long-term effects. Some data were known to be from projects that were involved in contract disputes. This study examined mainly short-term overtime effects, e.g., three to four weeks and less. On average, productivity losses of about 10-15% were observed. The trends are consistent with the curves published by the BRT ("Scheduled" 1980). The research also suggested that it may be possible to work overtime for three to four weeks without losses of productivity, although the likelihood is small. This observation is somewhat consistent with an earlier study published by CII (1988). Based on the analysis of disruptions, there are increasing difficulties in providing resources as the overtime schedule becomes more intensive. It is theorized that on projects where there are few resource problems when working straight time, the loss of efficiency can range from 0 to 15%. The overtime losses can exceed 15% if the project is already behind schedule because of other problems, such as incomplete design, numerous changes, work in an operating environment, or labor unrest. Under those circumstances, the values shown elsewhere in the literature may be more realistic.

CONCLUSIONS

As a result of this study, several important conclusions can be formulated. These are summarized in the following. There is little doubt that scheduled overtime results in a loss of productivity. Since the projects studied did not experience labor problems, material shortages, or other major disruptive events, it is concluded that the BRT curve is a reasonable estimate of the minimum loss of productivity. For projects experiencing worsening degrees of distress and disruption, the loss of productivity will probably be greater. While it is possible to perform some limited scope of work for a few weeks with no loss of productivity, the likelihood of doing so is small. Consecutive overtime schedules lasting longer than three to four weeks will lead to productivity losses from fatigue. This study has shown scheduled overtime to be a resource problem. A causal relationship between disruptions and losses of efficiency was shown. It was also shown that as more days per week are worked, there are increasing difficulties in providing resources, i.e., materials, equipment, tools, and information. Therefore, it is concluded that the major reason for losses of productivity during a period of scheduled overtime is the inability to provide resources at an accelerated rate. The factor model was shown to be a valid representation of overtime productivity.

ACKNOWLEDGMENTS

This work was sponsored by the Construction Industry Institute (CII) under the guidance of the Scheduled Overtime Task Force. Their support and assistance in this research are gratefully acknowledged and appreciated.

APPENDIX I. THEORETICAL VERSUS APPLIED STATISTICS

The classical or theoretical approach to hypothesis testing is to define an acceptable level of significance, or α-value, a priori; use the data set to compute an F-ratio; and reject the null hypothesis Ho if the computed test statistic exceeds the critical value. This approach may be inadequate because it says nothing about whether the computed value of the test statistic just barely fell into the rejection region or exceeded the critical value by a large amount. The applied statistician approaches the hypothesis-testing problem in a slightly different way. No pass/fail α-value is selected in advance. Instead, the data are analyzed, and the smallest α-value at which the null hypothesis Ho would be rejected is computed. This statistic is called the p-value or level of significance. The p-value conveys much about the strength of evidence against Ho and allows an individual decision-maker to draw a conclusion without imposing a particular α on others, who might wish to draw their own conclusions. The level of significance has other practical implications as well. The level of significance is the maximum probability that chance or randomness produced the observed differences when, in fact, the null hypothesis Ho is true. If the level of significance is near zero, then it is more probable that the observed differences were truly the result of the influence of the independent variable being considered.

APPENDIX II. REFERENCES

1. Adrian, J. J. (1988). Construction claims, a quantitative approach. Prentice-Hall, Inc., Englewood Cliffs, N.J.
2. Business Roundtable (BRT). (1980). "Scheduled overtime effect on construction projects." Rep. C-2, New York, N.Y., 12-13.

3. Construction Industry Institute (CII). (1988). "The effects of scheduled overtime and shift schedule on construction craft productivity." Rep. of the Productivity Measurements Task Force, Source Document 43, Austin, Tex.
4. Construction Industry Institute (CII). (1994). "Effects of scheduled overtime on labor productivity: a quantitative analysis." Rep. of the Overtime Task Force, Austin, Tex.
5. Devore, J. L. (1991). Probability and statistics for engineering and the sciences. Brooks/Cole Publishing Co., Pacific Grove, Calif.
6. Haneiko, J. B., and Henry, W. C. (1991). "Impacts to construction productivity." Proc., Am. Power Conf., Illinois Inst. of Technol., Chicago, Ill., Vol. 53-II, 897-900.
7. Overtime and productivity in electrical construction. (1989). National Electrical Contractors Association, Bethesda, Md.
8. Sanders, S. R., and Thomas, H. R. (1990). "Masonry conversion factors." Masonry Soc. J., 9(1), 95-104.
9. Thomas, H. R. (1992). "Effects of scheduled overtime on labor productivity." J. Constr. Engrg. and Mgmt., ASCE, 118(1), 60-76.
10. Thomas, H. R., Arnold, T. M., and Oloufa, A. A. (1995). "Quantification of labor inefficiencies resulting from schedule compression and acceleration." Final Rep. to the Electrical Contracting Found., Inc., Pennsylvania Transp. Inst., Pennsylvania State Univ., University Park, Pa.
11. Thomas, H. R., and Kramer, D. F. (1987). The manual of construction productivity measurement and performance evaluation. Construction Industry Institute, Austin, Tex.
12. Thomas, H. R., and Napolitan, C. L. (1995). "Quantitative effects of construction changes on labor productivity." J. Constr. Engrg. and Mgmt., ASCE, 121(3), 290-296.
13. Thomas, H. R., and Rounds, J. (1991). Procedures manual for collecting productivity and related data on overtime activities on industrial construction projects: electrical and piping. Construction Industry Institute, Austin, Tex.
14. Thomas, H. R., Smith, G. R., Sanders, S. R., and Mannering, F. L. (1989). "An exploratory study of productivity forecasting using the factor model for masonry." Rep. to the Nat. Sci. Found., Grant No. MSM 861160, Pennsylvania Transp. Inst., Pennsylvania State Univ., University Park, Pa.
