A Methodology for Research Project Selection
Adrien Presley
Division of Business and Accountancy, Truman State University
100 East Normal, Kirksville, MO 63501

Donald Liles
Automation & Robotics Research Institute, The University of Texas at Arlington
7300 Jack Newell Boulevard South, Fort Worth, TX 76118
This paper presents a methodology for the identification of benefits for technology research and development projects. It specifically addresses projects involving new process technologies. It focuses on identifying and presenting both financial and strategic benefits of a project in an integrated manner. It uses a proven multi-attribute justification tool and frames it within a larger structured methodology for designing and planning the benefits measurement process through experiments and pilot implementations. The methodology supports the comparison of dissimilar projects having different strategic benefits. Finally, it acts as a tool to aid the technologist in identifying and presenting the benefits of his or her technology.

Keywords: Project Selection, Multi-Attribute Decision Making, Management of Technology
1. Introduction

The selection of research and development projects to be funded from a portfolio of projects is a common problem in many organizations. Internal and external funding agencies require information about the linkages of the projects to the financial and strategic goals of the agencies and the firm. A major difficulty in the R&D project selection problem is that there are often multiple objectives that must be considered. These objectives include financial measures such as Net Present Value and Internal Rate of Return as well as qualitative measures. When several candidate projects are examined, they will often impact different measures, and these measures must somehow be combined to provide a consistent means of comparing projects. Several other factors complicate the problem. R&D projects are often initiated and championed in a bottom-up manner, where engineers or scientists develop projects and bring them forward for consideration. While these projects may have great technical merit, the technologists often have difficulty identifying and articulating the financial and strategic benefits of the technology. The data to support the benefits of the technology are often gathered from small experiments or pilot implementations. In some cases, only certain components of the technology are implemented, sometimes only those subsystems of the larger technology that have been developed to date. Extrapolating the benefits of a full implementation from these pilot results is difficult, as is effectively designing the experiments and validations themselves. The primary goal of the experiments is often to act as a "proof of concept" of the new technology, and extracting benefit data from such implementations can also prove difficult. There may be overlaps, synergies, and other interactions among the projects under study that must be considered. There is a substantial literature on quantitative and qualitative methods for selecting projects; reviews of these techniques can be found in, for example, [1-3]. However, few methodologies exist for designing a set of experiments to validate the perceived benefits of new technologies. This paper describes such a methodology.
2. Project Selection Methodology
This section describes a methodology for research project selection, which we will refer to as the ICSM (Improvement Concept Selection Methodology), or "the methodology," in the remainder of this document. The terms "improvement concept" and "project" will be used to refer to the new technologies being considered for funding. The steps of the ICSM are discussed along with the project selection issues addressed by each step. The environment for which the ICSM was specifically developed is one common to many R&D organizations. Several improvement concepts were being considered for funding. Most of the concepts were in an early stage of development, some only in conceptual design. There was a limited amount of resources available to fully develop the concepts. A set of validation experiments or pilot implementations to prove out the concepts was being conducted. The validations were required for several reasons. Some of the technologies under consideration were not fully developed technically, requiring that technical feasibility be established. In other cases the organizational feasibility of the concepts was still in question. In all of these cases, the selection of projects had to be based on the expected benefits from full implementation, something very difficult to do at an early stage of development. When comparing different concepts, estimates had to be made on a consistent basis. An additional level of difficulty was introduced in that much of the data for this analysis would result from the experiments and pilots. Therefore, the validations looked at two things: 1) was the project technologically and organizationally sound, and 2) could the benefits be "proven." The ICSM adapts and uses an existing technology justification methodology, the Enterprise Performance Management Methodology (EPMM). EPMM integrates activity-based management with the use of a comprehensive set of performance measures.
EPMM predicts and measures impact at the enterprise level. The EPMM is not the focus of this paper, as it is described in detail in other references (see for example [4, 5]); a brief overview is provided here to facilitate the discussion of its use in the ICSM. EPMM has capability in two related areas: 1) the justification of new or proposed systems, and 2) the monitoring and management of ongoing enterprise performance. In both of these activities, many of the same concepts and measures are utilized. "System" can refer to any practice, process, or technology; in the context of this paper, the system is the R&D concept under consideration for funding. EPMM is realized as a set of forms and matrices that leads the user through the methodology. Matrices similar to QFD matrices are created to link the system or process under consideration to the enterprise, both in terms of activities affected and strategic areas impacted. EPMM allows for
the integration of strategic metrics with traditional cost metrics in arriving at a final "score" for an alternative. The methodology encourages and supports a group-based approach to analysis. The ICSM assumes that the projects have been identified and technically specified. Most likely, some level of benefits will have been identified at this point; the methodology assists in formalizing these benefits and identifying additional ones. The methodology focuses on the research and development of new process technologies to be implemented in the enterprise. In this context, technology refers not just to new physical pieces of machinery or software being developed but also to changes in the way processes are conducted. Therefore, a process technology being evaluated can run the gamut from changing supply chain governance practices to implementing a new system on the shop floor. In all cases, however, the concepts under consideration are those that, if fully developed, would have strategic and pervasive impacts on the enterprise. At the highest level, there are four phases to the ICSM: Define Enterprise, Define Concepts, Plan and Conduct Validation Experiments, and Analyze Results. The following sections define the steps and tasks conducted in each of these phases.

2.1 Define Enterprise

New process technologies, especially those which are innovative enough to warrant an extensive R&D effort, will impact the enterprise in multiple ways. These impacts must be measured in both financial and strategic terms [1, 2, 6]. This phase identifies the enterprise into which the new technology is to be implemented. As such, the strategies and objectives of the enterprise must be defined and linked. Based on these strategies and objectives, a set of strategic metrics is identified. The need for considering both quantitative and qualitative (often strategic) benefits of R&D projects is well documented [1-3].
Quantitative benefits include traditional financial considerations (such as NPV, IRR, and payback) as well as non-financial benefits such as reduced cycle time. Examples of qualitative benefits include things like improved market share and flexibility that are not easily quantified. An activity-based approach is used to assist in defining cost savings. An activity model of the enterprise is created which defines all relevant activities that might be impacted by the concepts. When looking at benefits, cost savings outside the area directly impacted by a new system are often overlooked. For example, in looking at the savings from a new shop floor process technology, the savings in the manufacturing processes will be identified, but savings in other activities such as procurement, planning, and even marketing might be ignored. Development of a comprehensive enterprise model thus ensures a more comprehensive and accurate estimation of benefits. One set of strategic metrics and a single activity model are created for the evaluation of all concepts under consideration; this allows the concepts to be compared on a consistent basis. This step should be done independently of the concepts under consideration for funding. If a portfolio of concepts is to be evaluated, it is also advisable to have the definition of the enterprise and metrics be developed primarily by decision-makers and management personnel with an understanding of the enterprise's strategic objectives. Technologists often do not have access to this kind of understanding and may be biased by the technologies they are championing. This
does not, however, mean that the technologist should be excluded from the process. One of the advantages of this methodology is its functioning as a learning mechanism. Having technologists involved will help them to articulate and, in many cases, discover the benefits of the technology they are pushing. Involvement will also assist in selecting and defining future projects for consideration. Finally, involvement will help them understand why a particular project was or was not selected for funding.

2.2 Define Concepts

The improvement concepts are an input into the methodology. In this phase each concept is further defined in terms of the enterprise. This phase consists of three steps: Define Concept Vision, Define Risks and Mitigation Plan, and Link Concept to Enterprise. The Define Concept Vision step outlines the strategic vision for the concept. It details how the enterprise and its processes will be transformed once the concept has been fully developed and implemented. The technologist is forced to articulate not just how the new technology will work but how it will be used in the enterprise and the effects that it will have on the enterprise. For many R&D concepts, the concept vision may be a long-term vision in that technology and organizational factors will influence the rate of infusion of the new technology. It is our experience that a time-phased approach to defining the impact is preferred. The concept under consideration is broken down into "components" to facilitate the identification of operational linkages to activities. Components are logical or physical breakdowns of the concept. Using a hypothetical example of a new shop floor control system, its components might include the scheduling module, the data collection system, and the information system infrastructure.
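As a minimal sketch of this component breakdown and its operational linkages to enterprise activities, the following uses the hypothetical shop floor control example above; the activity names and the specific component-to-activity links shown are illustrative assumptions, not prescriptions of the methodology.

```python
# Sketch: a concept's component breakdown and its operational linkages to
# enterprise activities, held as a simple boolean linkage matrix.
# Components follow the hypothetical shop floor control example in the
# text; the activities and the links themselves are assumed for illustration.

components = ["Scheduling module", "Data collection system", "IS infrastructure"]
activities = ["Plan and Control Production", "Produce Parts", "Manage Materials"]

# Which enterprise activities each component impacts (assumed links).
links = {
    "Scheduling module": {"Plan and Control Production", "Manage Materials"},
    "Data collection system": {"Plan and Control Production", "Produce Parts"},
    "IS infrastructure": {"Plan and Control Production", "Produce Parts",
                          "Manage Materials"},
}

# Build the linkage matrix: rows are components, columns are activities.
matrix = [[activity in links[comp] for activity in activities]
          for comp in components]

for comp, row in zip(components, matrix):
    marks = ["x" if hit else "." for hit in row]
    print(f"{comp:24s} {' '.join(marks)}")
```

Such a matrix is the simplest form of the QFD-like linkage matrices discussed in the Link Concept to Enterprise step; in practice each "x" carries supporting detail rather than a bare flag.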
To link the new concept to the strategies of the enterprise, it is broken down into "attributes," which are defined as either strategic attributes of the system itself or strategic advantages gained by the company through the acquisition of the system. During the Link Concept to Enterprise step, the ways in which the system components and system strategic attributes interact with the enterprise are identified: which system components impact which activities, and which strategic system attributes impact which strategies. The components and attributes are identified through the understanding of the concept gained and documented in the previous task. Strategies and enterprise activities, documented earlier, are analyzed and organized for analysis. Linkages are drawn using a series of matrices. One of the main problems inherent in the R&D selection process is the risk of a project [3, 7]. While risk can be considered as one of the strategic metrics chosen for selection, this methodology recommends that the risks of the project under development be explicitly stated and a plan to mitigate the risks be developed. This is accomplished during the Define Risks and Risk Mitigation Plan step. Three types of risks are acknowledged: technical, cultural, and business/legal. A plan detailing how these risks will be mitigated, along with the critical assumptions made in its development, is outlined.

2.3 Plan and Conduct Validation Experiments

For each concept a set of experiments is defined. As previously discussed, these experiments will support the feasibility of the concept under study and will lead to the development of data about its benefits. This phase consists of five steps, described in the following sections: Define Hypothesis,
Define As-Is and To-Be Processes, Develop Test Scenario, Define Estimates and Data Collection Plan, and Conduct Experiment. In the Define Hypothesis step, the hypothesis for each experiment is defined. This hypothesis states what the experiment is trying to prove. The hypothesis can consist of both technical and business aspects. As an example, consider a project looking at the implementation of a supply chain intranet system. The technical hypothesis may be that the implementation of an intranet is feasible given existing firewalls and other technical difficulties. The business hypothesis might be that the supply chain partners on this intranet could conduct business and that realizable savings are present. The latter would be stated in terms such as "the implementation of the intranet will yield savings in transaction costs averaging 20% per transaction." The Define As-Is and To-Be Processes step forces the technologist to define exactly how the new technology will impact the processes being performed by the enterprise. The definition of the As-Is and To-Be processes will greatly facilitate the calculation of benefits. These processes are those directly affected by the concept and will typically be at a lower level of detail than the activities identified for the benefits measurement activity model. The plan to conduct the actual experiment is developed in the Develop Test Scenario step. The experimental scenario should link directly to the experiment's hypothesis, the benefits being claimed, and the As-Is and To-Be processes identified. This test scenario defines how the technical, cultural, and business/legal feasibility of the concept will be "proven." Related to this plan is the Define Estimates and Data Collection Plan step, which identifies the estimates to be made and how the data to support these estimates will be collected. Estimates relate to several factors, including investment costs, incremental operating costs, benefits (As-Is "minus"
To-Be), and scaling factors. The data collection plan defines the actual data points to be collected. Benefits estimates will most often be in two areas: cost savings and span time savings. Valid ways to collect or estimate the data include actual collection of experimental data, simulations, or paper studies. Based on the test scenario, the actual experiment is conducted in the Conduct Experiment step. The data outlined in the previous step are also collected for use in the next phase.

2.4 Analyze Results

In this phase the data and estimates from the previous phase are analyzed and savings estimates are developed. Both individual concepts and the entire portfolio of concepts (if appropriate) are analyzed. This phase consists of three steps: Determine Benefits from Individual Experiments, Determine Overlaps and Synergies, and Allocate to Enterprise. Analysis is conducted first for the individual experiments in the Determine Benefits from Individual Experiments step. The estimation of the benefits for the processes directly attributable to the experiments is usually fairly straightforward, although the extrapolation of benefits from the experiments may require estimation experts. Cost savings are allocated to the activities of the activity model. For example, the sales order process above may involve tasks related to several different activities, such as Manage Financial Assets, Manage Materials, and Plan and Control Production. As another example, a shop floor control system will include savings in activities such as Plan and Control Production, Produce Parts, Manage Materials, and others. For this activity, the Activity Analysis Matrix (shown in Figure 1) of the EPMM methodology is used to derive the activity-based costs and benefits of the new technology. It brings together many of the
elements developed earlier in the methodology. Along the vertical axis are listed the activities that the analysis team has determined are affected by the concept under consideration. The components of the concept, cost drivers, and cost categories (different components making up the total cost of performing an activity) identified earlier are listed along the horizontal axis. The use of cost drivers, a concept from activity based costing, assists in defining the actual savings. The values in the cost category/activity intersections represent the savings expected from implementing the concept. An activity analysis matrix is created for each year of analysis for each concept and a total calculated for each year.
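The tabulation performed in the Activity Analysis Matrix can be sketched as follows. The activity names and savings values are taken from the example matrix of Figure 1; the two cost-category labels are assumed for illustration, since the category headings do not survive in the figure as reproduced here.

```python
# Sketch of the Activity Analysis Matrix tabulation for one analysis year.
# Activities and savings values follow the example in Figure 1; the two
# cost-category labels are assumptions.

activities = [
    "Schedule Shop Floor",
    "Supervise Shop Floor",
    "Perform Manufacturing Operations",
    "Collect Data/Monitor Progress",
    "Perform Control Actions",
]

# savings[category][i] = expected savings for activities[i], in $K per year
savings = {
    "Direct cost savings":  [3.5, 2.0, 28.5, 2.7, 2.6],  # label assumed
    "Support cost savings": [0.1, 0.0,  9.0, 0.0, 0.7],  # label assumed
}

# Total savings per activity (the matrix's totals row) and the grand total
# for this concept-year; one such matrix is built per concept per year.
per_activity = [round(sum(cat[i] for cat in savings.values()), 1)
                for i in range(len(activities))]
grand_total = round(sum(per_activity), 1)

for name, value in zip(activities, per_activity):
    print(f"{name:34s} ${value}")
print(f"{'Total':34s} ${grand_total}")
```

The per-activity totals ($3.6, $2.0, $37.5, $2.7, $3.3) and the $49.1 grand total match the example matrix; the yearly grand totals then feed the cash flow matrix described below.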
[Figure 1 here shows an example Activity Analysis Matrix for a physical robot system. Columns list the affected enterprise activities (Schedule Shop Floor, Supervise Shop Floor, Perform Manufacturing Operations, Collect Data/Monitor Progress, Perform Control Actions) crossed against the concept's components, cost drivers, and cost categories; the totals row gives expected savings per activity of $3.6, $2.0, $37.5, $2.7, and $3.3, summing to $49.1.]
Figure 1. Activity Analysis Matrix

The savings values are then transferred to a cash flow matrix for use in calculating standard financial criteria such as Net Present Value or Internal Rate of Return. Included in this cash flow analysis would be estimates of investments, incremental operating costs, and other relevant factors. This is a fairly well understood process and will not be discussed in detail here. While the calculation of financial criteria is common, it should be pointed out again that the use of activity-based costing techniques imposes a level of detail and rigor not often applied. The values for the financial criteria are then transferred into the financial metrics portion of the next matrix, the Strategic Analysis Matrix, as seen in Figure 2. The methodology allows for the use of metrics in two other categories in addition to the financial metrics. Quantitative metrics are those which can be expressed in numerical terms, although not necessarily in dollar terms; it is in this category that span time savings would be placed. Calculation of span time savings often comes directly out of the validation experiment. Qualitative metrics are used for those benefits which are not easily quantified. The methodology uses utility functions to convert metrics in different units of measure to a consistent 0 to 5 scale for comparison and tabulation. Additionally, as seen in the figure, weights are used to represent the relative importance of the various metrics. The calculation of the weights is accomplished in an earlier matrix not shown in this paper. Two factors are considered in assigning weights: the relative importance of each strategy to the overall objectives of the enterprise and the relative ability of each metric to measure the realization of each strategy. The importance of each metric is indicated by a numerical weight between 0 and 1. The sum of the strategic weights and the sum of the metric weights for a strategy are each normalized to 1.
In this activity, group decision making techniques such as Analytic Hierarchy Process (AHP)
can be used, further promoting understanding and consensus among a disparate group of individuals and organizations. Multiplying the weight for each metric by its normalized score results in a score for that metric. The scores for all metrics are then added to arrive at an overall score for the concept.
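The scoring arithmetic can be sketched as follows, using the normalized scores and weights from the example Strategic Analysis Matrix of Figure 2. The pairing of metric names to columns follows their apparent order in that figure, and the $0-7.5M utility range for NPV is read from the figure's utility function row.

```python
# Sketch of the Strategic Analysis Matrix scoring step. Normalized scores
# (0-5 utility scale) and weights are taken from the example in Figure 2;
# the metric-to-column pairing follows their apparent order in the figure.

def linear_utility(value, lo, hi):
    """Map a raw metric value onto the 0-5 scale with a linear utility
    function (increasing form); a decreasing metric would swap lo and hi."""
    return max(0.0, min(5.0, 5.0 * (value - lo) / (hi - lo)))

# Per Figure 2, a raw NPV of $4.8M over a $0-7.5M range normalizes to 3.2.
npv_utility = round(linear_utility(4.8, 0.0, 7.5), 1)

metrics = [
    # (name, normalized score, weight)
    ("NPV",                       3.2, 0.40),
    ("Payback period",            2.6, 0.10),
    ("% cycle time reduction",    4.7, 0.08),
    ("% quality improvement",     4.8, 0.06),
    ("WIP",                       3.9, 0.06),
    ("Improved information flow", 5.0, 0.05),
    ("Respond to market demands", 4.0, 0.15),
    ("Safety",                    4.0, 0.10),
]

# Each metric contributes its weight times its normalized score; the sum
# of the contributions is the overall score for the concept.
contributions = {name: round(s * w, 2) for name, s, w in metrics}
overall = round(sum(s * w for _, s, w in metrics), 2)

for name, contrib in contributions.items():
    print(f"{name:26s} {contrib:.2f}")
print(f"{'Overall concept score':26s} {overall:.2f}")
```

The weighted contributions (1.28 for NPV, 0.26 for payback, and so on) reproduce the bottom row of the example matrix, and the overall score of 3.69 matches the figure's total.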
[Figure 2 here shows an example Strategic Analysis Matrix. Columns list the strategic metrics (NPV, payback period, % cycle time reduction, % quality improvement, WIP, improved information flow, respond to market demands, safety) grouped under attributes such as Support JIT & Cellular; rows give each metric's utility function (increasing or decreasing, with its range), raw value, normalized 0-5 score, and weight. The weighted scores (e.g., 0.40 × 3.2 = 1.28 for NPV) sum to an overall concept score of 3.69.]
Figure 2. Strategic Analysis Matrix

Having determined the benefits of each individual concept, the entire portfolio of concepts under consideration must now be considered in the Determine Overlaps and Synergies step. This is a difficult step in that expert judgement must be used: if two or more concepts are being considered together, their interactions must be accounted for. First is the consideration of not "double-counting" benefits. The methodology provides assistance in this activity through the use of the activity model and the strategy model. By examining the exact activities and strategies affected and the benefits being claimed for them, many of these overlaps can be identified. Second, synergies must be considered. This is especially important when infrastructure or enabling technologies are among the portfolio. These technologies will often have little impact by themselves but make other technologies and concepts possible. An example of this is the installation of a network infrastructure. Depending on the enterprise and concepts under consideration, the Allocate to Enterprise step may or may not be required. The purpose of this step is to take the benefits that might originally have been calculated for a particular product line or company segment and allocate them to the entire business. The use of the activity and strategic analysis matrices greatly facilitates this step.
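A first-pass overlap check of the kind described above can be sketched as follows; the concept names and claimed savings are hypothetical, and in practice the flagged activities are candidates for expert review rather than automatic adjustment.

```python
# Sketch of a double-counting check for the Determine Overlaps and
# Synergies step: activities against which more than one concept claims
# savings are flagged before portfolio totals are summed. The concept
# names and savings figures below are hypothetical.

claimed = {
    "Shop floor control system": {
        "Plan and Control Production": 12.0,  # $K/year, assumed
        "Produce Parts": 37.5,
        "Manage Materials": 5.0,
    },
    "Supplier intranet": {
        "Manage Materials": 8.0,
        "Manage Financial Assets": 4.0,
    },
}

# Activities claimed by both concepts are candidates for double-counting.
a, b = claimed["Shop floor control system"], claimed["Supplier intranet"]
overlap = sorted(set(a) & set(b))
for activity in overlap:
    print(f"Overlap on '{activity}': "
          f"{a[activity]} + {b[activity]} $K/year claimed separately")
```

The same set-intersection idea applies to strategies in the strategy model; synergies, by contrast, require judgement and cannot be detected mechanically.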
3. Application of the Methodology

This selection methodology was developed to support a multi-million dollar program involving the selection for funding from among several potential projects. This program was proprietary to the industry participants and cannot be discussed in detail in this paper. Generally, however, the program had several teams of contractors and subcontractors working together in competition with other
teams for a limited pool of funding. The authors worked with one of these teams in developing its benefits measurement process. The benefits measurement process had to provide data to both the external funding agency and each participating company, as significant cost share was involved. Each team put forth several possible concepts for consideration. Each concept had different companies involved, with a different company leading many of the concept teams. The projects under consideration involved diverse technologies, spanning business practices to design and manufacturing technologies. In most of the concepts, the projects involved a change in the hard or soft technologies being utilized as well as a change in the business processes in which the technology was to be used. The response to the methodology was generally positive. The funding agency and the management of the companies were pleased with the rigor and organization of the process and its results. It is believed that the results had added credibility because of the rigor of the process. For the technologist, the methodology helped to define the concept and its benefits. In several cases, the methodology acted as a facilitator in defining the experiments. On occasion, it led the technologists to design the experiments differently than they might have originally. The methodology was not without problems. Primarily, the rigor that made the results credible was sometimes a burden. In several cases, it was difficult to identify all of the elements required by the methodology. This was often the case when the concepts were in very early stages of development. Not knowing exactly what the technology would look like when mature made the identification of impacts extremely difficult. Additionally, some technologists resisted the work required to complete the methodology. Overall, however, the application was viewed as a positive factor in the team's winning funding from the government funding agency.
4. Conclusions

This paper has described a methodology for project selection in a research and development environment. Several things make the methodology innovative. First is its rigor, which helps to ensure that all factors and benefits of a research project are considered. It also assists in comparing disparate technologies on a consistent and auditable basis. The process, since it is documented and defined, is repeatable across different projects and different environments. One factor that gives it this rigor is the use of the activity model. The explicit use of the activity model ties the methodology to activity-based management practices, something that is not often seen in R&D environments. The methodology also serves as a facilitator to the development of the technology, as it leads the technologist through the process of defining the concept and the validations to prove it out. The methodology has been used successfully in one difficult, multi-enterprise environment and should prove useful in most R&D selection situations, especially where rigor and depth of analysis are required.
References

1. Baker, N. and J. Freeland, "Recent Advances in R&D Benefit Measurement and Project Selection Methods". Management Science, 21(10): p. 1164-1175, 1975.
2. Danila, N., "Strategic Evaluation and Selection of R&D Projects". R&D Management, 19(1): p. 47-62, 1989.
3. Fahrni, P. and M. Spatig, "An Application-Oriented Guide to R&D Selection and Evaluation Methods". R&D Management, 20(2): p. 155-171, 1990.
4. Presley, A., J. Sarkis, and D. Liles, "A Multi-Criteria Justification Methodology". Journal of Engineering Valuation and Cost Analysis, 1(2): p. 111-123, 1997.
5. Presley, A.R., L.E. Whitman, and D.H. Liles, "A Methodology for Enterprise Performance Management". In 1997 Decision Sciences Institute Proceedings, San Diego, CA, 1997.
6. Das, R.K., et al., "Selection and Evaluation of Research and Development Projects". Journal of Scientific & Industrial Research, 56: p. 202-206, 1997.
7. Coffin, M.A. and B.W. Taylor, "Multiple Criteria R&D Project Selection and Scheduling Using Fuzzy Logic". Computers and Operations Research, 23(3): p. 207-220, 1996.