
Forecasting can be broadly defined as a method or technique for estimating many future aspects of a business or other operation.

There are numerous techniques that can be used to accomplish the goal of forecasting. For example, a retailing firm that has been in business for 25 years can forecast its volume of sales in the coming year based on its experience over the 25-year period; such a forecasting technique bases the future forecast on past data. While the term "forecasting" may appear rather technical, planning for the future is a critical aspect of managing any organization, whether business, nonprofit, or other. In fact, the long-term success of any organization is closely tied to how well its management is able to foresee the future and to develop appropriate strategies to deal with likely future scenarios. Intuition, good judgment, and an awareness of how well the economy is doing may give the manager of a business firm a rough idea (or "feeling") of what is likely to happen in the future. Nevertheless, it is not easy to convert a feeling about the future into a precise and useful number, such as next year's sales volume or the raw material cost per unit of output. Forecasting methods can help estimate many such future aspects of a business operation.

Suppose that a forecast expert has been asked to provide estimates of the sales volume for a particular product for the next four quarters. One can easily see that a number of other decisions will be affected by the forecasts or estimates of sales volumes provided by the forecaster. Clearly, production schedules, raw material purchasing plans, policies regarding inventories, and sales quotas will be affected by such forecasts. As a result, poor forecasts or estimates may lead to poor planning and thus result in increased costs to the business.

How should one go about preparing the quarterly sales volume forecasts? One will certainly want to review the actual sales data for the product in question for past periods. Suppose that the forecaster has access to actual sales data for each quarter over the 25-year period the firm has been in business. Using these historical data, the forecaster can identify the general level of sales. He or she can also determine whether there is a pattern or trend, such as an increase or decrease in sales volume over time. A further review of the data may reveal some type of seasonal pattern, such as peak sales occurring before a holiday. Thus, by reviewing historical data over time, the forecaster can often develop a good understanding of the previous pattern of sales. Understanding such a pattern can often lead to better forecasts of future sales of the product. In addition, if the forecaster is able to identify the factors that influence sales, historical data on these factors (or variables) can also be used to generate forecasts of future sales volumes.

FORECASTING METHODS

All forecasting methods can be divided into two broad categories: qualitative and quantitative. Many forecasting techniques use past or historical data in the form of time series. A time series is simply a set of observations measured at successive points in time or over successive periods of time. Forecasts essentially provide future values of the time series on a specific variable such as sales volume. The division of forecasting methods into qualitative and quantitative categories is based on the availability of historical time series data.

QUALITATIVE FORECASTING METHODS

Qualitative forecasting techniques generally employ the judgment of experts in the appropriate field to generate forecasts.
A key advantage of these procedures is that they can be applied in situations where historical data are simply not available. Moreover, even when historical data are available, significant changes in environmental conditions affecting the relevant time series may make the use of past data irrelevant and questionable in forecasting future values of the time series. Consider, for example, that historical data on gasoline sales are available. If the government then implemented a gasoline rationing program, changing the way gasoline is sold, one would have to question the validity of a gasoline sales forecast based on the past data.

Qualitative forecasting methods offer a way to generate forecasts in such cases. Three important qualitative forecasting methods are the Delphi technique, scenario writing, and the subjective approach.

DELPHI TECHNIQUE. In the Delphi technique, an attempt is made to develop forecasts through "group consensus." Usually, a panel of experts is asked to respond to a series of questionnaires. The experts, physically separated from and unknown to each other, are asked to respond to an initial questionnaire (a set of questions). Then, a second questionnaire is prepared incorporating information and opinions of the whole group. Each expert is asked to reconsider and to revise his or her initial response to the questions. This process is continued until some degree of consensus among experts is reached. It should be noted that the objective of the Delphi technique is not to produce a single answer at the end. Instead, it attempts to produce a relatively narrow spread of opinions: the range in which the opinions of the majority of experts lie.

SCENARIO WRITING. Under this approach, the forecaster starts with different sets of assumptions. For each set of assumptions, a likely scenario of the business outcome is charted out. Thus, the forecaster is able to generate many different future scenarios (corresponding to the different sets of assumptions). The decision maker or businessperson is presented with the different scenarios and has to decide which scenario is most likely to prevail.

SUBJECTIVE APPROACH. The subjective approach allows individuals participating in the forecasting decision to arrive at a forecast based on their subjective feelings and ideas. It rests on the premise that a human mind can arrive at a decision based on factors that are often very difficult to quantify. "Brainstorming sessions" are frequently used as a way to develop new ideas or to solve complex problems. In loosely organized sessions, participants feel free from peer pressure and, more importantly, can express their views and ideas without fear of criticism. Many corporations in the United States have increasingly adopted the subjective approach.

QUANTITATIVE FORECASTING METHODS

Quantitative forecasting methods are used when historical data on the variables of interest are available; these methods are based on an analysis of historical data concerning the time series of the specific variable of interest, and possibly other related time series. There are two major categories of quantitative forecasting methods. The first type bases the future forecast of a particular variable on its past trend. As this category of forecasting methods simply uses time series on past data of the variable being forecasted, these techniques are called time series methods. The second category of quantitative forecasting techniques also uses historical data. But in forecasting future values of a variable, the forecaster examines the cause-and-effect relationships of the variable with other relevant variables, such as the level of consumer confidence, changes in consumers' disposable incomes, the interest rate at which consumers can finance their spending through borrowing, and the state of the economy represented by such variables as the unemployment rate. Thus, this category of forecasting techniques uses past time series on many relevant variables to produce the forecast for the variable of interest. Forecasting techniques falling under this category are called causal methods, as the basis of such forecasting is the cause-and-effect relationship between the variable being forecasted and the other time series selected to help in generating the forecasts.

TIME SERIES METHODS OF FORECASTING.

Before discussing time series methods, it is helpful to understand the behavior of time series in general terms. A time series is composed of four separate components: the trend component, the cyclical component, the seasonal component, and the irregular component. These four components, when combined, are viewed as providing the specific values of the time series. In a time series, measurements are taken at successive points or over successive periods. The measurements may be taken every hour, day, week, month, or year, or at any other regular (or irregular) interval. While most time series data display some random fluctuations, a time series may still show gradual shifts to relatively higher or lower values over an extended period. This gradual shifting of the time series is often referred to by professional forecasters as the trend in the time series. A trend emerges due to one or more long-term factors, such as changes in population size, changes in the demographic characteristics of the population, and changes in the tastes and preferences of consumers. For example, manufacturers of automobiles in the United States may see substantial variations in automobile sales from one month to the next. But in reviewing auto sales over the past 15 to 20 years, the automobile manufacturers may discover a gradual increase in annual sales volume; in this case, the trend for auto sales is increasing over time. In another example, the trend may be decreasing over time. Professional forecasters often describe an increasing trend by an upward-sloping straight line and a decreasing trend by a downward-sloping straight line. Using a straight line to represent a trend, however, is a mere simplification; in many situations, nonlinear trends may more accurately represent the true trend in the time series.

Although a time series may often exhibit a trend over a long period, it may also display alternating sequences of points that lie above and below the trend line. Any recurring sequence of points above and below the trend line that lasts more than a year is considered to constitute the cyclical component of the time series; that is, these observations in the time series deviate from the trend due to cyclical fluctuations (fluctuations that repeat at intervals of more than one year). The time series of the aggregate output in the economy (called the real gross domestic product) provides a good example of a time series that displays cyclical behavior. While the trend line for gross domestic product (GDP) is upward sloping, output growth displays a cyclical behavior around the trend line. This cyclical behavior of GDP has been dubbed "business cycles" by economists.

The seasonal component is similar to the cyclical component in that they both refer to some regular fluctuations in a time series. There is one key difference, however. While cyclical components of a time series are identified by analyzing multiyear movements in historical data, seasonal components capture the regular pattern of variability in the time series within one-year periods. Many economic variables display seasonal patterns. For example, manufacturers of swimming pools experience low sales in the fall and winter months, but they witness peak sales of swimming pools during the spring and summer months.
Manufacturers of snow removal equipment, on the other hand, experience exactly the opposite yearly sales pattern. The component of the time series that captures the variability in the data due to seasonal fluctuations is called the seasonal component.

The irregular component of the time series represents the residual left in an observation of the time series once the effects due to the trend, cyclical, and seasonal components are extracted. The trend, cyclical, and seasonal components are considered to account for the systematic variations in the time series. The irregular component thus accounts for the random variability in the time series. The random variations in the time series are, in turn, caused by short-term, unanticipated, and nonrecurring factors that affect the time series. The irregular component of the time series, by its nature, cannot be predicted in advance.
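This decomposition can be made concrete with a short example. The sketch below uses the seasonal_decompose routine from the statsmodels library on an invented monthly series; the data, names, and the choice of an additive model are illustrative assumptions rather than part of the discussion above (the cyclical component is left folded into the trend here).

```python
# A minimal sketch of classical decomposition, assuming monthly data in a
# pandas Series with a DatetimeIndex. seasonal_decompose splits the series
# into trend, seasonal, and residual (irregular) parts.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly sales data (for example, swimming pool shipments).
sales = pd.Series(
    [110, 120, 150, 180, 210, 240, 230, 200, 160, 130, 115, 105] * 3,
    index=pd.date_range("2020-01-01", periods=36, freq="MS"),
)

result = seasonal_decompose(sales, model="additive", period=12)
print(result.trend.dropna().head())   # gradual long-term movement
print(result.seasonal.head(12))       # repeating within-year pattern
print(result.resid.dropna().head())   # irregular (random) component
```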

TIME SERIES FORECASTING USING SMOOTHING METHODS.

Smoothing methods are appropriate when a time series displays no significant effects of trend, cyclical, or seasonal components (often called a stable time series). In such a case, the goal is to smooth out the irregular component of the time series by using an averaging process. Once the time series is smoothed, it is used to generate forecasts.

The moving averages method is probably the most widely used smoothing technique. In order to smooth the time series, this method uses the average of a number of adjoining data points or periods. This averaging process uses overlapping observations to generate averages. Suppose a forecaster wants to generate three-period moving averages. The forecaster would take the first three observations of the time series and calculate the average. Then, the forecaster would drop the first observation and calculate the average of the next three observations. This process would continue until three-period averages have been calculated based on the data available from the entire time series. The term "moving" refers to the way averages are calculated: the forecaster moves up or down the time series to pick observations to calculate an average of a fixed number of observations. In the three-period example, the moving averages method would use the average of the most recent three observations of data in the time series as the forecast for the next period. This forecasted value for the next period, in conjunction with the last two observations of the historical time series, would yield an average that can be used as the forecast for the second period in the future.

The calculation of a three-period moving average can be illustrated as follows. Suppose a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). The three-period moving average in this case is 1.1 million cars (that is, (1.3 + 0.90 + 1.1)/3 = 1.1). Based on the three-period moving average, the forecast may predict that 1.1 million American-made cars are most likely to be sold in the United States the next year.

In calculating moving averages to generate forecasts, the forecaster may experiment with moving averages of different lengths, choosing the length that yields the highest accuracy for the forecasts generated. It is important that the forecasts generated not be too far from the actual future outcomes. In order to examine the accuracy of the forecasts generated, forecasters generally devise a measure of the forecasting error (that is, the difference between the forecasted value for a period and the associated actual value of the variable of interest). Suppose the retail sales volume for American-made automobiles in the United States is forecast to be 1.1 million cars for a given year, but only 1 million cars are actually sold that year. The forecast error in this case is equal to 100,000 cars. In other words, the forecaster overestimated the sales volume for the year by 100,000. Of course, forecast errors will sometimes be positive and at other times be negative. Thus, taking a simple average of forecast errors over time will not capture the true magnitude of forecast errors; large positive errors may simply cancel out large negative errors, giving a misleading impression about the accuracy of the forecasts generated.
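The worked example above translates directly into code. The following is a minimal sketch, assuming the sales history is held in a plain Python list; the function name is illustrative, and the figures are the ones from the text:

```python
# Three-period moving average forecast, as described above.
def moving_average_forecast(series, length):
    """Forecast the next period as the average of the last `length` values."""
    return sum(series[-length:]) / length

# Oldest first: 1.1 million, 0.9 million, then 1.3 million most recently.
sales = [1.1, 0.9, 1.3]
forecast = moving_average_forecast(sales, length=3)
print(f"Next-year forecast: {forecast:.1f} million cars")  # 1.1
```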
Because positive and negative errors can cancel out in this way, forecasters commonly use the mean squared error to measure the forecast error. The mean squared error, or MSE, is the average of the squared forecasting errors. By taking the squares of the forecasting errors, this measure eliminates the chance of negative and positive errors canceling out. In selecting the length of the moving averages, a forecaster can employ the MSE measure to determine the number of values to be included in calculating the moving averages. The forecaster experiments with different lengths to generate moving averages and then calculates forecast errors (and the associated mean squared errors) for each length used. The forecaster can then pick the length that minimizes the mean squared error of the forecasts generated.
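A minimal sketch of this selection procedure, assuming a hypothetical history and one-step-ahead forecasts (all names and figures here are invented for illustration):

```python
# Choose the moving-average length by mean squared error (MSE): each
# candidate length generates one-step-ahead forecasts over the history,
# and the length with the smallest MSE is selected.
def mse_for_length(series, length):
    errors = []
    for t in range(length, len(series)):
        forecast = sum(series[t - length:t]) / length
        errors.append((series[t] - forecast) ** 2)
    return sum(errors) / len(errors)

history = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.2, 1.4, 1.1, 1.3]
best = min(range(2, 6), key=lambda n: mse_for_length(history, n))
print("Best moving-average length:", best)
```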

Weighted moving averages are a variant of moving averages. In the moving averages method, each observation of data receives the same weight. In the weighted moving averages method, different weights are assigned to the observations used in calculating the moving averages. Suppose, once again, that a forecaster wants to generate three-period moving averages. Under the weighted moving averages method, the three data points would receive different weights before the average is calculated. Generally, the most recent observation receives the maximum weight, with the weight assigned decreasing for older data values.

The calculation of a three-period weighted moving average can be illustrated as follows. Suppose, once again, that a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). One estimate of the weighted three-period moving average in this example is 1.133 million cars (that is, (3/6)(1.3) + (2/6)(0.90) + (1/6)(1.1) = 1.133). Based on the three-period weighted moving average, the forecast may predict that 1.133 million American-made cars are most likely to be sold in the United States in the next year. The accuracy of weighted moving average forecasts is determined in a manner similar to that for simple moving averages.

Exponential smoothing is somewhat more difficult mathematically. In essence, however, exponential smoothing also uses the weighted average concept, in the form of the weighted average of all past observations contained in the relevant time series, to generate forecasts for the next period. The term "exponential smoothing" comes from the fact that this method employs a weighting scheme for the historical values of data that is exponential in nature. In ordinary terms, an exponential weighting scheme assigns the maximum weight to the most recent observation, and the weights decline in a systematic manner as older and older observations are included. The accuracy of forecasts using exponential smoothing is determined in a manner similar to that for the moving averages method.
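Both ideas can be sketched in a few lines. The weights 3/6, 2/6, and 1/6 below come from the text's example; the smoothing constant alpha is an illustrative assumption:

```python
# Weighted moving average with weights given most-recent first.
def weighted_moving_average(series, weights):
    """`series` is oldest first; `weights` are most-recent first, summing to 1."""
    recent = series[-len(weights):][::-1]  # reorder to most recent first
    return sum(w * x for w, x in zip(weights, recent))

sales = [1.1, 0.9, 1.3]  # oldest first, as in the text's example
print(weighted_moving_average(sales, [3/6, 2/6, 1/6]))  # 1.133...

# Simple exponential smoothing: each new forecast is a blend of the
# latest actual value and the previous forecast.
def exponential_smoothing(series, alpha):
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

print(round(exponential_smoothing(sales, alpha=0.5), 3))
```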
TIME SERIES FORECASTING USING TREND PROJECTION.

This method uses the underlying long-term trend of a time series of data to forecast its future values. Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The time series data on U.S. auto sales can be plotted and examined visually. Most likely, the auto sales time series would display a gradual growth in the sales volume, despite the "up" and "down" movements from year to year. The trend may be linear (approximated by a straight line) or nonlinear (approximated by a curve or a nonlinear line). Most often, forecasters assume a linear trend; of course, if a linear trend is assumed when, in fact, a nonlinear trend is present, this misrepresentation can lead to grossly inaccurate forecasts. Assume that the time series on American-made auto sales is actually linear and thus can be represented by a straight line. Mathematical techniques are used to find the straight line that most accurately represents the time series on auto sales. This line relates sales to different points over time. If we further assume that the past trend will continue in the future, future values of the time series (forecasts) can be inferred from the straight line based on the past data. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors; one can continue to assume that the forecaster uses the mean squared error discussed earlier.
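A minimal sketch of trend projection, assuming a hypothetical 25-year history and a least-squares straight-line fit (the data are simulated for illustration):

```python
# Fit a straight line to annual sales by least squares and extrapolate.
import numpy as np

years = np.arange(1, 26)  # 25 years of history
sales = 0.8 + 0.02 * years + np.random.default_rng(0).normal(0, 0.05, 25)

slope, intercept = np.polyfit(years, sales, deg=1)
next_year = 26
forecast = intercept + slope * next_year
print(f"Trend forecast for year {next_year}: {forecast:.2f} million cars")
```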

TIME SERIES FORECASTING USING TREND AND SEASONAL COMPONENTS.

This method is a variant of the trend projection method, making use of the seasonal component of a time series in addition to the trend component. The method first removes the seasonal effect, or seasonal component, from the time series; this step is often referred to as de-seasonalizing the time series. Once a time series has been de-seasonalized, it will have only a trend component. The trend projection method can then be employed to identify a straight-line trend that represents the time series data well. Then, using this trend line, forecasts for future periods are generated. The final step under this method is to reincorporate the seasonal component of the time series (using what is known as the seasonal index) to adjust the forecasts based on trend alone. In this manner, the forecasts generated are composed of both the trend and seasonal components. One would normally expect these forecasts to be more accurate than those based purely on the trend projection.
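One simple way to carry out these steps is sketched below, assuming quarterly data and a ratio-to-mean seasonal index; the data and the particular form of the index are illustrative assumptions, not prescribed by the text:

```python
# Trend plus seasonal index: compute an average index per quarter,
# de-seasonalize, fit the trend, then re-apply the index to the forecast.
import numpy as np

quarterly_sales = np.array([20, 30, 42, 26, 22, 33, 46, 29, 25, 36, 50, 31])
quarters = np.arange(len(quarterly_sales))

# Seasonal index: average of each quarter relative to the overall mean.
index = np.array(
    [quarterly_sales[q::4].mean() for q in range(4)]
) / quarterly_sales.mean()

deseasonalized = quarterly_sales / index[quarters % 4]
slope, intercept = np.polyfit(quarters, deseasonalized, 1)

t_next = len(quarterly_sales)  # first future quarter
forecast = (intercept + slope * t_next) * index[t_next % 4]
print(f"Forecast for next quarter: {forecast:.1f}")
```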
CAUSAL METHOD OF FORECASTING.

As mentioned earlier, causal methods use the cause-and-effect relationship between the variable whose future values are being forecasted and other related variables or factors. The most widely known causal method is regression analysis, a statistical technique used to develop a mathematical model showing how a set of variables are related. This mathematical relationship can be used to generate forecasts. In the terminology used in regression analysis contexts, the variable being forecasted is called the dependent or response variable. The variable or variables that help in forecasting the values of the dependent variable are called the independent or predictor variables. Regression analysis that employs one dependent variable and one independent variable, and approximates the relationship between these two variables by a straight line, is called simple linear regression. Regression analysis that uses two or more independent variables to forecast values of the dependent variable is called multiple regression analysis. Below, the forecasting technique using regression analysis for the simple linear regression case is briefly introduced.

Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The forecaster has also identified that the sale of automobiles is related to individuals' real disposable income (roughly speaking, income after income taxes are paid, adjusted for the inflation rate). The forecaster also has available the time series (for the last 25 years) on real disposable income. The time series data on U.S. auto sales can be plotted against the time series data on real disposable income, so the relationship can be examined visually. Most likely, the auto sales time series would display a gradual growth in sales volume as real disposable income increases, despite the occasional lack of consistency; that is, at times, auto sales may fall even when real disposable income rises. The relationship between the two variables (auto sales as the dependent variable and real disposable income as the independent variable) may be linear (approximated by a straight line) or nonlinear (approximated by a curve or a nonlinear line). Assume that the relationship between the time series on sales of American-made automobiles and real disposable income of consumers is actually linear and can thus be represented by a straight line.

A fairly rigorous mathematical technique is used to find the straight line that most accurately represents the relationship between the time series on auto sales and disposable income. The intuition behind the mathematical technique employed in arriving at the appropriate straight line is as follows. Imagine that the relationship between the two time series has been plotted on paper. The plot will consist of a scatter (or cloud) of points. Each point in the plot represents a pair of observations on auto sales and disposable income (that is, auto sales corresponding to the given level of real disposable income in a year). The scatter of points (similar to the time series method discussed above) may have an upward or a downward drift. That is, the relationship between auto sales and real disposable income may be approximated by an upward- or downward-sloping straight line. In all likelihood, the regression analysis in the present example will yield an upward-sloping straight line: as disposable income increases, so does the volume of automobile sales.

Arriving at the most accurate straight line is the key. Presumably, one can draw many straight lines through the scatter of points in the plot. Not all of them, however, will represent the relationship equally well; some will be closer to most points, and others will be far off from most points in the scatter. Regression analysis then employs a mathematical technique. Different straight lines are drawn through the data. Deviations of the actual values of the data points in the plot from the corresponding values indicated by the straight line chosen in any instance are examined. The sum of the squares of these deviations captures the essence of how close a straight line is to the data points. The line with the minimum sum of squared deviations (called the "least squares" regression line) is considered the line of best fit. Having identified the regression line, and assuming that the relationship based on the past data will continue, future values of the dependent variable (forecasts) can be inferred from the straight line based on the past data. If the forecaster has an idea of what real disposable income may be in the coming year, a forecast for future auto sales can be generated. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors; one can continue to assume that the forecaster uses the mean squared error discussed earlier. In addition to forecast errors, regression analysis provides additional ways of analyzing the effectiveness of the estimated regression line in forecasting.
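The least-squares line just described has a closed form, sketched below with invented income and sales figures (all data here are hypothetical):

```python
# Simple linear regression by least squares: auto sales (dependent)
# against real disposable income (independent). The closed-form slope
# and intercept minimize the sum of squared deviations.
import numpy as np

income = np.array([3.1, 3.3, 3.4, 3.6, 3.9, 4.1, 4.4, 4.6])  # trillions
sales = np.array([0.9, 1.0, 1.0, 1.1, 1.2, 1.2, 1.3, 1.4])   # millions

slope = ((income - income.mean()) * (sales - sales.mean())).sum() \
        / ((income - income.mean()) ** 2).sum()
intercept = sales.mean() - slope * income.mean()

# Forecast auto sales if next year's real disposable income is 4.8 trillion.
print(f"Forecast: {intercept + slope * 4.8:.2f} million cars")
```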
Box-Jenkins Forecasting Method: The univariate version of this methodology is a self-projecting time series forecasting method. The underlying goal is to find an appropriate formula so that the residuals are as small as possible and exhibit no pattern. The model-building process involves four steps, repeated as necessary, to end up with a specific formula that replicates the patterns in the series as closely as possible and also produces accurate forecasts.

Box-Jenkins Methodology

Box-Jenkins forecasting models are based on statistical concepts and principles and are able to model a wide spectrum of time series behavior. The methodology offers a large class of models to choose from and a systematic approach for identifying the correct model form. There are both statistical tests for verifying model validity and statistical measures of forecast uncertainty. In contrast, traditional forecasting models offer a limited number of models relative to the complex behavior of many time series, with little in the way of guidelines and statistical tests for verifying the validity of the selected model.

Data: The misuse, misunderstanding, and inaccuracy of forecasts are often the result of not appreciating the nature of the data in hand. The consistency of the data must be ensured, and it must be clear what the data represent and how they were gathered or calculated. As a rule of thumb, Box-Jenkins requires at least 40 or 50 equally spaced periods of data. The data must also be edited to deal with extreme or missing values or other distortions, through the use of functions such as log or inverse to achieve stabilization.

Preliminary Model Identification Procedure: A preliminary Box-Jenkins analysis with a plot of the initial data should be run as the starting point in determining an appropriate model. The input data must be adjusted to form a stationary series, one whose values vary more or less uniformly about a fixed level over time. Apparent trends can be adjusted by applying a technique of "regular differencing," a process of computing the difference between every two successive values, which produces a differenced series with the overall trend behavior removed. If a single differencing does not achieve stationarity, it may be repeated, although rarely, if ever, are more than two regular differencings required. Where irregularities in the differenced series continue to be displayed, log or inverse functions can be specified to stabilize the series, such that the remaining residual plot displays values approaching zero and without any pattern. This is the error term, equivalent to pure white noise.

Pure Random Series: On the other hand, if the initial data series displays neither trend nor seasonality, the residual plot shows essentially zero values within a 95% confidence level, and these residual values display no pattern, then there is no real-world statistical problem to solve, and we go on to other things.
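Regular differencing is a one-line operation; a minimal sketch follows, using a simulated drifting series and the augmented Dickey-Fuller test from statsmodels as one common (assumed, not text-prescribed) check of stationarity:

```python
# First regular difference of a trending series, with an ADF test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
trending = np.cumsum(rng.normal(0.5, 1.0, 100))  # upward-drifting series

differenced = np.diff(trending)  # difference of every two successive values
p_value = adfuller(differenced)[1]
print(f"ADF p-value after one differencing: {p_value:.3f}")
# A small p-value is evidence the differenced series is stationary;
# if not, a second differencing can be tried (rarely more than two).
```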

Model Identification Background

Basic Model: With a stationary series in place, a basic model can now be identified. Three basic models exist: AR (autoregressive), MA (moving average), and a combined ARMA; together with the previously specified RD (regular differencing), these provide the available tools. When regular differencing is applied together with AR and MA terms, the models are referred to as ARIMA, with the I indicating "integrated" and referencing the differencing procedure.

Seasonality: In addition to trend, which has now been provided for, stationary series quite commonly display seasonal behavior, where a certain basic pattern tends to be repeated at regular seasonal intervals. The seasonal pattern may also display constant change over time. Just as regular differencing was applied to the overall trending series, seasonal differencing (SD) is applied to seasonal nonstationarity. And just as autoregressive and moving average tools are available for the overall series, so too are they available for seasonal phenomena, using seasonal autoregressive parameters (SAR) and seasonal moving average parameters (SMA).

Establishing Seasonality: The need for seasonal autoregression (SAR) and seasonal moving average (SMA) parameters is established by examining the autocorrelation and partial autocorrelation patterns of a stationary series at lags that are multiples of the number of periods per season. These parameters are required if the values at lags s, 2s, etc. are nonzero and display patterns associated with the theoretical patterns for such models. Seasonal differencing is indicated if the autocorrelations at the seasonal lags do not decrease rapidly.

Note also that the variance of the errors of the underlying model must be invariant (i.e., constant). This means that the variance for each subgroup of data is the same and does not depend on the level or the point in time. If this assumption is violated, it can be remedied by stabilizing the variance. Make sure that there are no deterministic patterns in the data, no pulses or one-time unusual values, no level or step shifts, and no seasonal pulses. The reason for all of this is that, if such features exist, the sample autocorrelation and partial autocorrelation will seem to imply an ARIMA structure. The presence of these kinds of model components can also obfuscate or hide structure; for example, a single outlier or pulse can create an effect where the structure is masked by the outlier.

Improved Quantitative Identification Method

Relieved Analysis Requirements: A substantially improved procedure is now available for conducting Box-Jenkins ARIMA analysis, one that relieves the requirement for a seasoned perspective in evaluating the sometimes ambiguous autocorrelation and partial autocorrelation residual patterns to determine an appropriate Box-Jenkins model for use in developing a forecast model.

ARMA (1, 0): The first model to be tested on the stationary series consists solely of an autoregressive term with lag 1. The autocorrelation and partial autocorrelation patterns are examined for significant autocorrelation in the early terms and to see whether the residual coefficients are uncorrelated, that is, whether the coefficient values are zero within 95% confidence limits and without apparent pattern.
When fitted values as close as possible to the original series values are obtained, the sum of the squared residuals will be minimized, a technique called least squares estimation. The residual mean and the mean percent error should not be significantly nonzero. Alternative models are examined by comparing the progress of these factors, favoring models that use as few parameters as possible. Correlation between parameters should not be significantly large, and confidence limits should not bracket zero. When a satisfactory model has been established, a forecast procedure is applied.

ARMA (2, 1): Absent a satisfactory ARMA (1, 0) condition with residual coefficients approximating zero, the improved model identification procedure proceeds to examine the residual pattern when autoregressive terms of order 1 and 2 are applied together with a moving average term of order 1.
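The candidate-model comparison just described can be sketched with the statsmodels ARIMA class. This is a minimal illustration, not the procedure itself: the series is simulated, and AIC is used as a convenient single-number summary where the text's residual diagnostics would also apply.

```python
# Fit candidate ARMA models (with one regular differencing) on a series
# and compare them, favoring fewer parameters.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
series = rng.normal(0, 1, 120).cumsum()  # hypothetical trending data

for p, _, q in [(1, 0, 0), (2, 0, 1)]:   # ARMA(1,0) and ARMA(2,1)
    result = ARIMA(series, order=(p, 1, q)).fit()  # d=1: one differencing
    print((p, q), "AIC:", round(result.aic, 1))

best = ARIMA(series, order=(1, 1, 0)).fit()
print(best.forecast(steps=4))            # four-period forecast
```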

Subsequent Procedure: To the extent that the residual conditions described above remain unsatisfied, the Box-Jenkins analysis is continued with ARMA (n, n-1) until a satisfactory model is arrived at. In the course of this iteration, when an autoregressive coefficient (phi) approaches zero, the model is reexamined with parameters ARMA (n-1, n-1). In like manner, whenever a moving average coefficient (theta) approaches zero, the model is reduced to ARMA (n, n-2). At some point, either the autoregressive term or the moving average term may fall away completely, and the examination of the stationary series is continued with only the remaining term until the residual coefficients approach zero within the specified confidence levels.

Morphological Analysis

Morphological analysis is actually a group of methods that share the same structure. This method breaks down a system, product, or process into its essential sub-concepts, each concept representing a dimension in a multi-dimensional matrix. Thus, every product is considered as a bundle of attributes. New ideas are found by searching the matrix for new combinations of attributes that do not yet exist. The method does not provide any specific guidelines for combining the parameters, and it tends to provide a large number of ideas.

Morphological analysis has several advantages over less structured approaches: "It may help us to discover new relationships or configurations, which may not be so evident, or which we might have overlooked by other less structured methods. It encourages the identification and investigation of boundary conditions, i.e. the limits and extremes of different contexts and factors. It also has definite advantages for scientific communication and notably for group work." [source: www.swemorph.com] It allows us to find possible solutions to complex problems characterised by several parameters. Richness of data: it can provide a multitude of combinations and permutations not yet explored. Systematic analysis: this technique allows for a systematic analysis of the future structure of an industry (or system) and identification of key gaps.

How to Use Morphological Analysis

Many problems challenge us with too many possible solutions, as yet uncovered, only some of which may be new and useful. This process "drains the swamp," so to speak, by systematically arranging appropriate and promising aspects of the situation and combining them just as systematically in order to identify new and suitable combinations. The object is to break down the system, product, or process problem at hand into its essential parameters or dimensions and to place them in a multi-dimensional matrix, then to find new ideas by searching the matrix for creative and useful combinations. Some combinations may already exist; others may not be possible or appropriate. The rest may represent prospective new ideas. If you can describe a problem situation in terms of its aspects or dimensions, morphological analysis will uncover original and often innovative solutions.

Morphological Analysis Steps

1. Determine suitable problem characteristics. The individual problem solver or a facilitated group brainstorms to define problem characteristics, also referred to as parameters.

2. Make all the suggestions visible to everyone and group them in various ways until consensus is reached regarding the groupings.

3. Label the groups and reduce them to a manageable number. Rather than reaching for a recommended number, consider the capabilities of the group and the time available. Consider also that there are computer applications and other tools that can assist the process. When working with the tangible aspects of something like a consumer product, for example, the labels gleaned from the groupings might include parameters such as product ingredients, color, textures, temperature, and flavor, as well as package size, shape, function, and graphics. In the case of manufacturing issues, parameters might include material, function, process, construction, maintenance, and the like.

4. Fill a grid or grids with lists of parameters arranged along the axes. Now combinations can be identified within the grid. Depending on the number of items in play, great numbers of combinations may be available.

5. Eliminate those combinations that are impossible or undesirable to execute, put aside those that you do not want to eliminate but do not want to execute, and develop as many of the rest as possible.

Morphological analysis was first applied to the aerospace industry by F. Zwicky, a professor at the California Institute of Technology. Zwicky chose to analyze the structure of jet engine technology. His first task was to define the important parameters of jet engine technology, which include the thrust mechanism, oxidizer, and fuel type. He continued, in turn, to break each of these technologies down into its component parts. Having exhausted the possibilities under each parameter heading, the alternative approaches were assembled in all possible permutations: for example, a ramjet that used atmospheric oxygen and a solid fuel. For some permutations, a jet engine system already existed; for others, no systems or products were available. Zwicky viewed the permutations representing "empty cells" as stimuli for creativity, and for each asked, "Why not?" For example, "Why not a nuclear-powered ceramic fan-jet?"

Morphological analysis is a proven ideation method that leads to "organized invention." The technique allows for two key elements: a systematic analysis of the current and future structure of an industry area (or domain), as well as key gaps in that structure; and a strong stimulus for the invention of new alternatives that fill these gaps and meet any imposed requirements. "Essentially, morphological analysis is a method for identifying and investigating the total set of possible relationships contained in any given, multi-dimensional problem complex that can be parameterized." [source: www.swemorph.com]

In his main work on the subject, Discovery, Invention, Research through the Morphological Approach (Zwicky, 1966), Zwicky summarises the five (iterative) steps of the process:

First step
The problem to be solved must be very concisely formulated.

Second step
All of the parameters that might be of importance for the solution of the given problem must be localized and analysed.

Third step
The morphological box or multidimensional matrix, which contains all of the potential solutions of the given problem, is constructed.

Fourth step
All solutions contained in the morphological box are closely scrutinized and evaluated with respect to the purposes that are to be achieved.

Fifth step
The optimally suitable solutions are selected and practically applied, provided the necessary means are available. This reduction to practice generally requires a supplemental morphological study.

Steps 2 and 3 form the heart of morphological analysis, since Steps 1, 4, and 5 are often involved in other forms of analysis. Step 2, identification of parameters, involves studying the problem and present solutions to develop a framework. This step is useful for developing a relevance tree to help define a given topic. Once parameters are identified, a morphological box can be constructed that lists parameters along one dimension. The second dimension is determined by the nature of the problem.
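A small morphological box can be enumerated mechanically. The sketch below is illustrative only: the product, its parameters, and the incompatible pair are invented, and the filter is a toy version of the cross-consistency assessment described in the passage that follows.

```python
# Enumerate a morphological box and weed out inconsistent combinations.
from itertools import product

parameters = {
    "package": ["bottle", "can", "pouch"],
    "size": ["single", "family"],
    "flavor": ["plain", "citrus"],
}

# Pairs of values judged mutually inconsistent (cross-consistency input).
inconsistent = {("pouch", "family")}

def consistent(combo):
    return not any((a, b) in inconsistent or (b, a) in inconsistent
                   for i, a in enumerate(combo) for b in combo[i + 1:])

solution_space = [c for c in product(*parameters.values()) if consistent(c)]
print(len(solution_space), "viable configurations")
for combo in solution_space[:3]:
    print(dict(zip(parameters, combo)))
```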

"The approach begins by identifying and defining the parameters (or dimensions) of the problem complex to be investigated, and assigning each parameter a range of relevant values or conditions. A morphological box also fittingly known as a Zwicky box is constructed by setting the parameters against each other in an n-dimensional matrix. Each cell of the ndimensional box contains one particular value or condition from each of the parameters, and thus marks out a particular state or configuration of the problem complex. This is the point: to examine all of the configurations in the field, in order to establish which of them are possible, viable, practical, interesting, etc., and which are not. In doing this, we mark out in the field what might be called a solution space. The solution space of a Zwickian morphological field consists of the subset of configurations, which satisfy some criteria. However, a typical morphological field can contain between 50,000 and 5,000,000 formal configurations, far too many to inspect by hand. Thus, the next step in the analysis-synthesis process is to examine the internal relationships between the field parameters and "reduce" the field by weeding out all mutually contradictory conditions. This is achieved by a process of cross-consistency assessment: all of the parameter values in the morphological field are compared with one another, pair-wise, in the manner of a crossimpact matrix. As each pair of conditions is examined, a judgment is made as to whether or to what extent the pair can coexist, i.e. represent a consistent relationship. Note that there is no reference here to causality, but only to internal consistency." Short: Auto correlation The autocorrelation ( Box and Jenkins, 1976) function can be used for the following two purposes: 1. To detect non-randomness in data. 2. To identify an appropriate time series model if the data are not random.

Randomness is one of the key assumptions in determining whether a univariate statistical process is in control. If the assumptions of constant location and scale, randomness, and fixed distribution are reasonable, then the univariate process can be modeled as Y_i = A_0 + E_i, where A_0 is a constant and E_i is an error term. If the randomness assumption is not valid, then a different model needs to be used. This will typically be either a time series model or a non-linear model (with time as the independent variable).

PACF

In time series analysis, the partial autocorrelation function (PACF) plays an important role in data analyses aimed at identifying the extent of the lag in an autoregressive model. The use of this function was introduced as part of the Box-Jenkins approach to time series modelling, whereby plotting the partial autocorrelation function one can determine the appropriate lag p in an AR(p) model or in an extended ARIMA(p,d,q) model.

In general, a partial correlation is a conditional correlation: the correlation between two variables under the assumption that we know and take into account the values of some other set of variables. For instance, consider a regression context in which y is the response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2. In regression, this partial correlation could be found by correlating the residuals from two different regressions: (1) a regression in which we predict y from x1 and x2, and (2) a regression in which we predict x3 from x1 and x2. Basically, we correlate the parts of y and x3 that are not predicted by x1 and x2.
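A minimal sketch of inspecting the ACF and PACF with statsmodels; the data are simulated (an AR(1) process, so the PACF should cut off after lag 1), and all names are illustrative:

```python
# Nonzero early ACF/PACF terms suggest an ARIMA structure; values near
# zero suggest randomness.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(3)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.7 * x[t - 1] + rng.normal()  # AR(1) with coefficient 0.7

print("ACF :", np.round(acf(x, nlags=5), 2))
print("PACF:", np.round(pacf(x, nlags=5), 2))
```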

Relevance Trees

Most major technological development projects are complex. Their fulfillment is likely to depend on the accomplishment of substantial improvements to existing technologies. These advances are not usually coordinated, and many products result from technological changes that were not originally intended to support them. The planner must be able to distinguish a large number of potentially supporting technologies and to forecast their futures. Relevance trees, a slight variant of the network analysis discussed earlier, are of great aid in such work. Relevance trees can be used to study a goal or objective, as in morphological analysis, or to select a specific research project from a more general set of goals, as in network analysis. The methodology of relevance trees requires that the planner determine the most appropriate path of the tree by arranging, in hierarchical order, the objectives, subobjectives, and tasks, in order to ensure that all possible ways of achieving the objectives have been found. The relevance of individual tasks and subobjectives to the overall objective is then evaluated. An example of a relevance tree is shown in Figure 14. The objective is to develop a means of air pollution control. The subobjectives "Develop Petroleum . . ." and "Develop Alternatives . . ." further define the main objective. Tasks and subtasks are then defined. Once all the "good" alternative ways of achieving the subobjectives have been found, the relevance of individual solutions to the main objective can be evaluated.

Linear Trend

A first step in analyzing a time series is to determine whether a linear relationship provides a good approximation to the long-term movement of the series; such a trend is computed by the method of semi-averages or by the method of least squares.

Multiple regression

WHAT IS MULTIPLE REGRESSION?

Multiple regression is a statistical technique that allows us to predict someone's score on one variable on the basis of their scores on several other variables. An example might help. Suppose we were interested in predicting how much an individual enjoys their job. Variables such as salary, extent of academic qualifications, age, sex, number of years in full-time employment, and socioeconomic status might all contribute towards job satisfaction. If we collected data on all of these variables, perhaps by surveying a few hundred members of the public, we would be able to see how many and which of these variables gave rise to the most accurate prediction of job satisfaction. We might find that job satisfaction is most accurately predicted by type of occupation, salary, and years in full-time employment, with the other variables not helping us to predict job satisfaction.

When using multiple regression in psychology, many researchers use the term independent variables to identify those variables that they think will influence some other dependent variable. We prefer to use the term predictor variables for those variables that may be useful in predicting the scores on another variable that we call the criterion variable. Thus, in our example above, type of occupation, salary, and years in full-time employment would emerge as significant predictor variables, which allow us to estimate the criterion variable: how satisfied someone is likely to be with their job. As we have pointed out before, human behaviour is inherently noisy, and therefore it is not possible to produce totally accurate predictions; but multiple regression allows us to identify a set of predictor variables which together provide a useful estimate of a participant's likely score on a criterion variable.
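A minimal sketch of fitting such a model by least squares, with invented predictor and criterion data (the variables and figures are illustrative assumptions):

```python
# Multiple regression: predict a job-satisfaction score (criterion)
# from salary and years in full-time employment (predictors).
import numpy as np

salary = np.array([30, 45, 50, 60, 75, 90], dtype=float)  # thousands
years = np.array([1, 3, 4, 8, 10, 15], dtype=float)
satisfaction = np.array([4.0, 5.5, 5.8, 6.5, 7.2, 8.1])

X = np.column_stack([np.ones_like(salary), salary, years])  # add intercept
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)

# Predicted satisfaction for someone earning 55k with 5 years employed.
print(float(coef @ [1.0, 55.0, 5.0]))
```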

Experience Curve:

Normative Methods of Forecasting: Normative forecasting is at the opposite extreme of the sophistication scale, making full use of Bayesian statistics, linear and dynamic programming, and other operations research tools. Here, despite the uniqueness, uncertainty, and lack of uniformity of research and development activities, each of the designers of normative techniques has proposed a single-format, wholly quantitative method for resource allocation. Along the dimensions of unjustified standardization and needless complexity, for example, the proposed R&D allocation methods far exceed the general cost-effectiveness approach used by the Department of Defense in its program and system reviews. For both exploratory and normative purposes, dynamic models of broad technological areas seem worthy of further pursuit. In attempting to develop "pure predictions," the explicit recognition of causal mechanisms offered by this modeling approach seems highly desirable. This feature also has normative utility, provided that the dynamic models are limited in their application to the level of aggregate technological resource allocation and are not carried down to the level of detailed R&D project funding.

Moving averages:

In statistics, a moving average, also called a rolling average, rolling mean, or running average, is a type of finite impulse response filter used to analyze a set of data points by creating a series of averages of different subsets of the full data set. Given a series of numbers and a fixed subset size, the first element of the moving average is obtained by taking the average of the initial fixed subset of the number series. Then the subset is modified by "shifting forward": excluding the first number of the series and including the next number following the original subset. This creates a new subset of numbers, which is averaged. The process is repeated over the entire data series. The plot line connecting all the (fixed) averages is the moving average. A moving average is thus a set of numbers, each of which is the average of the corresponding subset of a larger set of data points. A moving average may also use unequal weights for each data value in the subset, to emphasize particular values.
