
Common problems with project planning

or how to define and estimate tasks for a software development project


Peter Foldes
Master of Software Engineering Graduate Candidate, Institute for Software Research, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Email: foldes@cmu.edu

Abstract: After deciding on the requirements for a software development project, whether in the form of user stories or as a detailed requirements specification, the requirements are usually refined further into tasks. These tasks are then used to create a more precise plan, as the tasks help with estimations, identifying risks, and understanding and managing the project. Numerous questions arise during the creation of tasks, which this paper covers, namely: 1) What level of abstraction is necessary to understand a task? 2) Should tasks unrelated to the requirements (for example training) be included in the planning? 3) When can a task be considered finished? 4) How can tasks be estimated? 5) How can a plan be created for a development team using defined and estimated tasks? These problems arise no matter what kind of software development life cycle a team chooses to follow, whether it is an agile process or a heavier process. The paper addresses the above questions based on a project done at Carnegie Mellon University under the Master of Software Engineering program. It details some of the considerations to keep in mind, and provides some guidelines on how to deal with the most important aspects.

I. INTRODUCTION

It is common for any software development project that a given set of requirements, in whatever format, has to be translated into human-manageable tasks. These tasks are then used to refine estimations, to plan out the time schedule and the resources allocated to each task, and to monitor and manage the progress of the project. The problem is that sometimes this translation is ambiguous, misunderstood by the people involved, or simply does not produce estimations good enough to be used for project planning and tracking. This paper addresses the most common problems with turning requirements into a plan, mostly focusing on how to create and estimate tasks so that they both reflect the requirements and add enough granularity to be used for development and planning.

Section II introduces the environment the problem is being evaluated in, including a description of the project used and the university setting. Section III describes how two very different software development life cycles turn requirements into an actual plan, while Section IV goes into depth on the most common problems found in creating tasks, trying to answer the following questions: 1) What level of abstraction is necessary to understand a task? 2) When can a task be considered finished? 3) How can tasks be estimated? 4) Should tasks unrelated to the requirements (for example training) be included in the planning? 5) How can a plan be created for a development team using defined and estimated tasks?

II. THE STUDIED PROJECT

The project this paper is based on was done as part of Carnegie Mellon University's Master of Software Engineering (MSE) program over the course of 16 months. A team of 4 people worked with the Bosch Research and Technology Center (RTC) under the supervision of two mentors from the program to navigate a project from defining a statement of work to maintaining the created software and gathering lessons learned.

A. About the MSE program

The Master of Software Engineering degree program is designed for early- to mid-career software professionals eager to increase the breadth and depth of their knowledge of the discipline. It focuses on innovative theories taught in a combination of core and elective courses and their practical application in a mentored Studio environment. Graduates of the MSE program not only understand how to apply the best of current practice, but also act as Agents of Change to improve that practice as the field evolves. [13]

One of the MSE program's focuses is the 16-month Studio Project. The Studio Project is unique to Carnegie Mellon's Master of Software Engineering program. An application-based project, its purpose is to develop extremely high quality software in a mentored environment. Students work in teams to analyze a significant and practical problem, and to plan and implement a realistic solution for a real external client. [14]

B. About the project

Bosch Research and Technology Center (RTC) has tasked the team with building a solution that would allow models developed in Ptolemy, an open-source modeling and simulation tool, to be run in real time, manipulated, and operated using a user-configured layout on Android-powered devices.

Robert Bosch GmbH is a worldwide corporation that designs and develops embedded systems for automobiles. Because of the inherent complexity of modern engine systems and the precision necessary to ensure safe operation, Bosch uses model-driven development. These systems are currently modeled by the various business units using a tool called ASCET that is developed by a Bosch subsidiary, ETAS. Though ASCET has sufficient capabilities for their current operations, Bosch RTC has been researching additional capabilities that could be incorporated into the ASCET toolkit and provide benefit. Ptolemy, developed by an open-source community primarily at the University of California at Berkeley, provides users with the ability to simulate real-world activity and perform advanced analysis on models. However, Ptolemy needs some additional enhancements to fully support the goals of Bosch RTC. Unfortunately, the current desktop solution of Ptolemy does not lend itself to more portable applications where the user is on the move, changing model inputs/outputs and parameters, observing actual engine conditions, and providing immediate feedback. There are also limitations to handheld devices, particularly tablets and phones, namely screen real estate, processing power, battery life, and available memory, when compared to their desktop counterparts. That being the case, the simulation must provide only the displays appropriate to the end user and, because the needs can vary greatly between engineers, must be laid out in a highly configurable manner.

An effort to support this functionality has been undertaken in the past, but it is not fully functional in the Ptolemy tool. With these extensions in place, Bosch RTC will be able to demonstrate Ptolemy running on an Android device with a user-specific layout.

To summarize, the ETAS commercial tool ASCET is used to facilitate model-driven software development of embedded systems. Bosch RTC, in an effort to explore other potential uses and extension points, has been experimenting with an open-source concurrency modeling and simulation tool called Ptolemy. This MSE project, which builds on the functionality of Ptolemy and extends its capabilities to another, more versatile platform, will act as a proof of concept of functionality that may, if proven to be both useful and feasible, be incorporated into Bosch's commercial tool at some later point. [12]

C. Processes used

During the development of the project, numerous processes, methods, and techniques were used. Most importantly, the team started with the Scrum [2] agile software development life cycle process and later transitioned to the Team Software Process (TSP) [3], [4] for the rest of the project. Both of these processes were tailored over time to better suit the team and the project.

Scrum is defined as "[a] framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value" [2]. Two important parts of the Scrum process for this paper are that the requirements are handled in a prioritized list called the product backlog, and that the team is responsible for taking the most important items from the product backlog and refining them for each iteration, called a sprint. Every sprint starts with a kickoff meeting where the product backlog is updated, the most important items are chosen, and those items are refined into smaller tasks. The refined tasks, called the sprint backlog, are used as the basis for implementing the requirements. Tasks are estimated in a group setting, and the amount of work taken on is based on the group's velocity, which is the average amount of work they managed to do per sprint over the project so far.

TSP, on the other hand, is more rigorous. The TSP provides a disciplined context for engineering work. The principal motivator for the development of the TSP was the conviction that engineering teams can do extraordinary work, but only if they are properly formed, suitably trained, staffed with skilled members, and effectively led.

The objective of the TSP is to build and guide such teams. [4] TSP is also an iterative process, where each iteration starts with a launch meeting in which goals are established, team roles are defined, and risks are assessed. Based on these, the effort is estimated and tasks are allocated to the team members. For this paper, the most important aspect is the well-defined process that governs the launch, which includes task creation and estimation as a group.

III. FROM REQUIREMENTS TO PLAN

Moving from requirements to the point where part of the project is planned out means that measurable actions are defined from an abstract description of what the software system has to do. The point of these tasks is that resources can be assigned to them and they can be tracked over time. This is an important and sometimes highly overlooked part of the software development process. Without a clearly defined plan, even if it is just for the next few days or weeks of software development, it is hard to distribute work between team members, as it might not be clear where one person's responsibility starts and another's ends.

A. Defining requirements

Requirements gathering can happen multiple times during a project, depending on the environment and the nature of the project. When the underlying business context requires flexibility and adherence to market changes, frequent iterations of requirements gathering and software development can ensure that the software produces immediate value. On the other hand, if the environment does not require flexibility, less frequent iterations are needed, since the requirements do not change that much and the project can be planned out more precisely ahead of time. When defining requirements, there are some problems that need to be addressed. Whether using user stories or a bulleted list, requirements should be written in a way that all stakeholders can understand; they should be unambiguous, consistent, and not redundant.

In the case of the MSE project, due to its research and proof-of-concept nature, shorter iterations were used to identify and mitigate technological risks faster and to change the direction of the project as necessary. Requirements were gathered using Contextual Design [7], use cases [8], and other design principles [9]. This paper will not go into the details of these techniques, but those and continuous stakeholder interactions provided a rigorous way of gathering requirements.

While agile processes generally use an abstract concept to describe a requirement, for example by defining user stories [1], [10], the team used use cases to define requirements, and then further refined these into a simple spreadsheet. The spreadsheet contained an id, a short one-sentence description of the requirement, a categorization, and an importance to our stakeholders (must have, good to have, and nice if we have time).
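To make the structure concrete, the sketch below shows what such a backlog spreadsheet could look like if it were kept as structured data. The field names and the example entries are invented for illustration; they are not taken from the actual project backlog.

# Illustrative rows of a requirements spreadsheet with an id, a one-sentence
# description, a category, and a stakeholder priority. Hypothetical data.
requirements = [
    {
        "id": "REQ-012",
        "description": "Run a Ptolemy model in real time on an Android tablet",
        "category": "Simulation",
        "priority": "must have",   # must have / good to have / nice if we have time
    },
    {
        "id": "REQ-027",
        "description": "Let the user rearrange the configured display layout",
        "category": "User interface",
        "priority": "good to have",
    },
]

# A simple use of the structure: list the items the stakeholders consider essential.
must_haves = [r["id"] for r in requirements if r["priority"] == "must have"]
print(must_haves)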

Fig. 1: High level use cases of the system.

These requirements led to the creation of a notional architecture that described the initial version of our solution, which was refined with each iteration, especially at the beginning when we experimented with the different unknowns in the project.

B. Refining requirements into tasks

The refined requirement list and the architecture helped us create a work breakdown structure (WBS) [6]. This gave a high-level overview of what needed to be done to reach our defined goals for the project. It included not only implementation items, but also items for training, documentation, or required presentations about the project. These were quite high level at the beginning and became more detailed with each iteration, but the structure stayed the same. We could also estimate the items defined, which gave us a ballpark figure for the system with our current knowledge. For estimating the WBS we used the Wideband Delphi method [5].

Fig. 2: The work breakdown structure of the team with best case and worst case estimations.

The WBS, the requirements list, and the enumeration of some goals can help us choose some items from the backlog and refine them into manageable tasks. This is generally true for both Scrum and TSP, with the most important difference being the formality of these steps. TSP expects goals and tasks to be explicitly measurable, and requires the corresponding metrics to be collected, while most agile processes only require a definition of done [11] for the tasks. It tells when a task is actually done, namely when the activity "[adds] verifiable/demonstrable value to the product" [11].

Fig. 3: Snapshot of the product backlog.

After the tasks are defined, different methods can be used to estimate them. In the case of Scrum, we used the previous estimation of the backlog item, the planning poker method [10], and a correction factor based on our previous estimations. In case we were consistently under- or overestimating, we could factor that in and generate more precise estimates for repeatable tasks. In the case of TSP, TSP defines a method called the Probe estimation method that can be used. It takes the previous estimations and the actual data, calculates the correlation between them, and, if the correlation is significant enough (> 75%), gives an estimated offset and multiplier to be used for estimations. This is very useful for planning, as management overhead and under- or overestimations can be calculated this way.

C. From tasks to plan

After the goals are set and the tasks are defined and estimated, the tasks can be allocated to resources (in the case of TSP) or a point person can be assigned (Scrum) to ensure that the tasks will get done. With that, the plan is ready to be used and tracked, depending on the process, the team, and the project. The plan can be refined and improved with each iteration, but showing these techniques is outside the scope of this paper.

IV. PROBLEMS WITH TASK CREATION

There are numerous problems that arise during the activities described above, but the most important one that we perceived was defining the tasks. It was hard to tell what task represents the problem well enough while being actionable by one person, what tasks should be included besides the ones coming from the requirements, or how to know when a task is done. It is also hard to tell how to estimate research and other unknown tasks, or what should be included in the plan and what should just be counted as overhead. The sections below each describe a problem, show how it was handled during the project, and give a list of guidelines that, at least in similar projects, might help avoid some pitfalls and mistakes.

A. Level of abstraction

1) Problem: Choosing the level of abstraction is problematic, because a task must be defined well enough that all parties involved understand what the task is about and what is getting done during its execution. This also includes stating the assumptions involved in creating the task and the risks associated with it. If the task is too abstract, it is going to be hard to estimate and for someone to solve. If it is too refined, the task becomes a mechanical problem that just needs to be translated into the right format, but it takes too much time to define.

2) In the project: Continuous improvement within the team and increasing familiarity with the project helped us identify this problem. The more tasks we estimated, the better we could see whether a task was too high level to estimate or so low level that it could be done in an hour at most. We tried to make sure tasks had no or only a few assumptions and could be done in around 2 hours. While this helped us understand tasks better, not fixing the abstraction level to a number of hours might have helped us define these tasks more precisely, as our estimations were on average 30% off, especially at the beginning.
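As an aside, an average deviation figure like the 30% mentioned above is straightforward to derive once estimated and actual hours are logged per task. The sketch below is a minimal illustration with invented numbers, not the team's actual data, assuming each tracked task records an estimate and an actual effort in hours.

# Mean relative estimation error over a set of tracked tasks (hypothetical data).
tasks = [
    {"name": "Parse model description", "estimated_h": 2.0, "actual_h": 3.0},
    {"name": "Build layout config screen", "estimated_h": 4.0, "actual_h": 4.5},
    {"name": "Fix rendering defect", "estimated_h": 1.5, "actual_h": 2.5},
]

errors = [abs(t["actual_h"] - t["estimated_h"]) / t["estimated_h"] for t in tasks]
print(f"mean relative error: {sum(errors) / len(errors):.0%}")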

3) Guidelines: The critical point is finding a middle ground. A task should be easy to estimate and understand, but should still require most of the creative thinking and action from the doer's side.

B. Including other tasks

1) Problem: A question arises as to which tasks to include when planning. In our project the resources were fixed ahead of time, for example 48 hours per person per week for 4 people, but should we also include recurring role responsibilities? What about training and bug fixing tasks? Some tasks are recurring tasks that take the same amount of time in each iteration. For example, should we include the 4 hours of the TSP launch meeting when it is done every single time? We can count exactly how much time we will spend on it beforehand, and including it might create confusion with tracking and managing. What about a buffer for unplanned tasks or context switching? Deciding what to include is a balancing act between making sure tasks are being completed and adding uncertainty and overhead to the planning and management processes.

2) In the project: We decided to include all tasks we were planning to get done. This included management and role responsibilities, training, bug fixing tasks, code reviews, and sometimes research tasks on a specific subject. This way we could plan for all available resources, excluding a certain amount we estimated as natural time lost, either due to context switching or time management overhead. This amount we could estimate from our previously spent times, either by calculating our velocity or by using the Probe method.
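To illustrate the kind of calculation involved, the following is a minimal sketch of a Probe-style adjustment as described in Section III: it fits an offset and a multiplier to past estimated-versus-actual hours and only uses them if the correlation is strong enough. The numbers and the 0.75 threshold follow the description in this paper; the function itself is a simplified stand-in for illustration, not the official TSP tooling.

from statistics import mean

def probe_style_parameters(estimated, actual, min_correlation=0.75):
    # Fit actual ~= beta0 + beta1 * estimated from historical task data.
    mx, my = mean(estimated), mean(actual)
    sxx = sum((x - mx) ** 2 for x in estimated)
    syy = sum((y - my) ** 2 for y in actual)
    sxy = sum((x - mx) * (y - my) for x, y in zip(estimated, actual))
    r = sxy / (sxx * syy) ** 0.5              # Pearson correlation
    if r < min_correlation:
        return None                           # not enough signal to trust the fit
    beta1 = sxy / sxx                         # multiplier
    beta0 = my - beta1 * mx                   # offset
    return beta0, beta1, r

# Hypothetical history of estimated vs. actual hours for similar tasks.
est = [2.0, 3.0, 2.5, 4.0, 5.0, 3.5]
act = [2.5, 3.5, 3.5, 5.0, 6.5, 4.0]
fit = probe_style_parameters(est, act)
if fit is not None:
    beta0, beta1, r = fit
    # Adjust a new 3-hour estimate using the learned offset and multiplier.
    print(f"r = {r:.2f}, adjusted estimate: {beta0 + beta1 * 3.0:.1f} hours")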

Some of these tasks were very inconvenient to include. For example, how would you define a bug fixing task well? Is it for one specific bug, or for a set of them? As seen in Fig. 4, some of these created very distant outliers in our estimations, and therefore in our plans, leaving either unfinished tasks that were not very well defined, or forcing us to spend too much time working on them. Another problem was tracking the project. Since we had a post-mortem session after each iteration to have some time to look at metrics, record lessons learned, and tailor our process, we allocated some time for these. These meetings were usually not yet marked as done in the tracking by the time we looked at the data and had started a new iteration. This meant either the task consistently not being marked as finished, slightly endangering the integrity of our data, or getting marked but not really included until consolidated with the other data.

3) Guidelines: It is important to include tasks that need to be tracked through an iteration or the project as a whole. If a recurring task is getting done anyway, it can be included in the consolidated data without introducing planning and tracking overhead for it. Buffer time might or might not be added to the plan; generally it highly depends on the team whether they can utilize the buffer time, or whether they would rather have a few lower priority tasks added that can be pushed to the next iteration if necessary.

C. Definition of done and exit criteria

1) Problem: It must be known when a task is done. Without that information, collaboration within a team can only rely on continuous communication to know who is doing what, which is hard to maintain in a distributed or newly forged team. Members might also fall into pitfalls such as marking a task finished before covering all the cases, or spending too much time on unnecessary polish. Knowing when a task is done correlates closely with the previously mentioned definition of done (DoD). While the person is doing the task, is something still missing that needs to be done? Is the person still within the scope of the task? TSP recommends using a measurable metric that can be held against the task or a goal. While in our case this would have been too confining, careful attention should be given to defining an exit criterion that is easily evaluated.

2) In the project: Since we did not wish to add extra confusion to our tasks and planning, we struggled to find the tasks where we needed to define the exit criteria

Fig. 4: Correlation and Probe C estimation.

and to phrase them in a way everyone understands easily. In some cases we learned the importance of defining integration points in our architecture so that tasks could be measured against them. In other cases we made sure people worked more closely together when tasks were in any way dependent on each other. Still, in some cases we spent a lot more time than estimated, because we did not know a specific technology, did not create the right level of abstraction, and had trouble defining when the task was done.

3) Guidelines: For tasks that everyone understands and that are at the right level of granularity, exit criteria are not too hard to define or communicate between the involved parties. But what about our bug fixing sessions? What about the exit criteria for training or research tasks? In these cases a measurable approach can really help. For example, after a number of hours of training, the team members took a small quiz to measure their understanding of the material. The quiz was graded, which provided a measurement: a predefined score was either achieved, and therefore the task was done, or not. In some cases introducing a hard time limit can help in making sure not too much time is spent on an unknown or risky task.

D. Estimating a task

1) Problem: When the exit criterion is known and the task is understood, there is still the question of how to estimate the task. When understanding the task, did all the team members assume the same conditions? In a development task, is design, integration, or testing included? These problems might completely sidetrack an estimation, as tasks are understood slightly differently.

2) In the project: In the MSE program setting we had a specific amount of time we were supposed to work each week. Estimating tasks so that they are doable in half a day of work seemed to fit us well. Anything longer and the communication towards the other members decreased significantly, and anything shorter was hard to keep track of. This provided another way of making sure the granularity of a task was right. Estimation becomes a problem when a task is hard to estimate. For example, what if we should fix a set of bugs before moving on to the next set of features, but estimating each bug individually is inefficient? What if we want to experiment with whether a technology risk identified in the software architecture is going to be feasible, but since it is an unknown, it is hard to give a good estimation? In these cases, the team decided to timebox some of the tasks. Let's say there is a ballpark estimation of an experiment taking 5 hours. In that case, an exit criterion can be defined as either mitigating the risk or the 5 hours

being spent, and if some new information or problem comes to the team's attention, the plan must be reevaluated: the architecture refined, the backlog items re-prioritized, and the tasks redefined. If it is not critical, then it is easy to just move on and get back to the question later. For the assumptions inherently included in some of the tasks, we found that the Wideband Delphi and Planning Poker methods both include a portion where assumptions are made explicit to the whole team, so that everyone can estimate on the same grounds.

3) Guidelines: Both timeboxing and making sure assumptions are explicit worked really well for us. Using the Probe C method of TSP, we were able to keep estimated and actual values fairly close, although we still had some outliers due to the above-mentioned problems, like the wrong level of abstraction or a missing DoD.

E. Creating a plan and tracking

Fig. 5: Product burndown chart for the first quarter of the project.

1) Problem: After deciding what tasks to include and what resources to count on, creating a plan is mostly a question of creative balancing between deadlines, available resources, and tasks that are dependent on one another. Different processes usually have different philosophies in mind. Scrum and agile processes prefer that people be proactive and choose their tasks dynamically, while some others prefer to allocate all resources to tasks beforehand, creating Gantt charts and tracking based on them. Problems can arise if a lot of tasks are dependent on one another, or if the team is having trouble working together, but these problems are outside the scope of this paper.

2) In the project: In the MSE project this was a question of member discipline in communicating task progress in detail, recording the measured metrics, and recording the decisions made while defining the tasks. We had a planning manager role, as defined in TSP, who created most of the plan based on the Scrum kickoff or TSP launch meetings. We also decided to allocate tasks even when using Scrum, since we had a newly formed team and wanted a slightly more manageable situation. Creating a plan was some overhead for the planning manager, but since the team came up with the tasks and estimations together, plan creation was generally only a matter of recording the information in a tool and making sure everyone had some kind of access to it.

3) Guidelines: Generally, when creating a plan it should be optimized for easily reading out who needs to do what, for metrics collection, and for tracking of progress. The plan should be easily accessible to everyone it involves, and should be clear on the goals and directions for the team.

V. CONCLUSION

As requirements are turned into a plan, some considerations about the tasks to be executed are necessary. This might include clear definitions of tasks, eliciting assumptions, or just being careful about which tasks are tracked and which ones are only accounted for. The problems detailed in this paper can easily undermine team collaboration and project monitoring, sidetracking projects step by step, iteration by iteration.

While this paper focuses on a research project and similar greenfield projects, the solutions to these problems are also very project and team dependent. The tailoring of the Scrum or TSP processes that worked for us might not work on other projects or with other teams, depending on the nature of the project and the dynamics of the team. Still, the same problems are very likely to arise. To figure out the best process for a given situation, continuous improvement seemed to work well for us. Having data to consider, whether objective or subjective, helped us identify the more problematic points in our process and helped improve it. Post-mortem sessions and help from outside sources facilitated these improvements.

ACKNOWLEDGMENT

The author would like to thank his project team for the great experience and work they have provided. The project provided many learning opportunities and unique situations that led to a deeper understanding of some of the software engineering fundamentals.

The author would further like to thank his mentors on the project for their continuous insights, and the clients for their support and understanding of the work involved in the project.

REFERENCES
[1] Mike Cohn, User Stories Applied: For Agile Software Development. Addison-Wesley Professional, 1st edition, March 2004.
[2] Ken Schwaber and Jeff Sutherland, The Scrum Guide. http://www.scrum.org/scrumguides/, October 2011.
[3] Watts Humphrey, Introduction to the Team Software Process. Addison-Wesley, 1999.
[4] Watts Humphrey, The Team Software Process. Software Engineering Institute, November 2000. http://www.sei.cmu.edu/reports/00tr023.pdf
[5] Andrew Stellman and Jennifer Greene, Applied Software Project Management. O'Reilly Media, 2005. ISBN 0-596-00948-8.
[6] DOD and NASA Guide, PERT/COST System Design. June 1962.
[7] Hugh Beyer and Karen Holtzblatt, Contextual Design: Defining Customer-Centered Systems (Interactive Technologies). Morgan Kaufmann, 1st edition, September 1997.
[8] Frank Armour and Granville Miller, Advanced Use Case Modeling: Software Systems. Addison-Wesley Professional, 1st edition, January 2001.
[9] Donald A. Norman, The Design of Everyday Things. Basic Books, September 2002.
[10] Mike Cohn, Agile Estimating and Planning. Prentice Hall, 1st edition, November 2005.
[11] Dhaval Panchal, What is Definition of Done (DoD)? ScrumAlliance, September 2008. http://www.scrumalliance.org/articles/105-what-is-definition-of-done-dod
[12] Team HandSimDroid (Peter Foldes, Anar Huseynov, Justin Killian, Ishwinder Singh), Master Design Plan. 2011. http://msesrv4avm.mse.cs.cmu.edu/wiki/images/2/25/MasterDesignPlan.pdf
[13] Carnegie Mellon University, Master of Software Engineering Program. http://mse.isri.cmu.edu/software-engineering/web1Programs/MSE/index.html, 2011.
[14] Carnegie Mellon University, Master of Software Engineering Program FAQ. http://mse.isri.cmu.edu/software-engineering/web1Programs/MSE/FAQ.html, 2011.
