An example of how this works

As an example of how this works, we're going to walk through the process that we followed recently. This involves two small phases of development on one project. On this project, the developers, project managers, users, technology, and application all remained constant. We also followed a relatively formal process, beginning with a requirements phase, which led into development, testing, and final delivery of a portion of an application.

A first measurement

In our case we were able to start with a project that we were already developing. The project itself was small, but it was a distributed Java Swing application with an embedded database that runs on Windows, Mac, and Linux platforms, so even though the application was small, its degree of complexity was very high.

The first thing we did was to finish the requirements documentation and initial prototypes for our application. Once we had this information, which included a simple, logical view of the database requirements, we were able to count the function points for this small project. We created what we thought was a fairly detailed requirements document, and the count at this time was 400 FPs.

Skipping a few details here, let's just say that the next thing we did was to develop this project. When we called the development "complete", we counted the number of FPs that were actually delivered to the users. This was 440 FPs, or a growth from the requirements stage of 11%.

At this point we also had development time information. Two developers worked on this project for a total of 540 man-hours, which comes out to 0.815 FPs/hour (440 FPs divided by 540 man-hours). Had our customer kept a good record of the time users spent testing the application, they could also have determined a "number of testing hours per FP" metric, but they did not. IMHO this would benefit them in the future, but in our role application testing is not our responsibility, so we did not pursue it.

Although we spent 540 hours on this project, the real "calendar time" for delivery of the application was 10 weeks. This was because of several periods of down time during the development process. Therefore this project was delivered at the rate of 44 FPs per calendar week.
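The productivity figures above are just a few lines of arithmetic. Here is a sketch using the numbers from this project (440 delivered FPs, 540 man-hours, 10 calendar weeks); the variable names are our own.

```python
# Figures from this project: 440 delivered FPs, 540 man-hours of
# development, 10 calendar weeks from start to delivery.
delivered_fps = 440
man_hours = 540
calendar_weeks = 10

fps_per_hour = delivered_fps / man_hours       # ~0.815 FPs per man-hour
hours_per_fp = man_hours / delivered_fps       # ~1.23 man-hours per FP
fps_per_week = delivered_fps / calendar_weeks  # 44 FPs per calendar week

print(f"{fps_per_hour:.3f} FPs/hour, {fps_per_week:.0f} FPs/calendar week")
```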

Depending on how you track cost information, you can also determine "Cost per FP". As we stated earlier, as an independent software development firm, we now develop complex applications like this for about $250/FP.
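Cost per FP then turns the same count into a budget figure. A minimal sketch using the $250/FP rate mentioned above and the 440 FPs delivered on this project:

```python
cost_per_fp = 250    # dollars per FP, our historical rate for this kind of work
delivered_fps = 440  # FPs delivered on this project

total_cost = cost_per_fp * delivered_fps
print(f"${total_cost:,}")  # $110,000
```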

A second measurement

Because this is an ongoing project, we basically repeated the same steps on the next phase of our project. For summary purposes, here are the steps we followed:
  • Develop the requirements, including an understanding of the necessary data stores and screens to be developed.
  • Count the FPs.
  • Supply an estimate of the project cost, assuming another 11% gain in functionality (scope creep) during development.
  • Develop the code.
  • Track the amount of time people spend on the project during development and testing.
  • Count the FPs again.
  • Deliver useful project metrics, including:
    • Number of developer hours.
    • Number of testing hours.
    • Average number of hours per FP.
    • Elapsed calendar time, which yields something like "Number of calendar days per FP" or the converse of "Number of FPs per calendar day". This occurs when there is down time in a project, or when your development resources are not fully dedicated to the project at hand.
    • Development cost per FP.
    • Testing cost per FP.
    • Overall cost per FP (including other time for management, documentation, etc.).
    • The ratio of Requirements time to Development time, and other similar ratios.
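The metric bundle in the list above can be sketched as one small function. The field names and the sample inputs below are our own invention, not figures from the project; only the formulas follow the list.

```python
def project_metrics(fps, dev_hours, test_hours, other_hours,
                    calendar_days, hourly_rate):
    """Derive the per-FP metrics listed above from raw project numbers."""
    total_hours = dev_hours + test_hours + other_hours
    return {
        "hours_per_fp": total_hours / fps,
        "fps_per_calendar_day": fps / calendar_days,
        "dev_cost_per_fp": dev_hours * hourly_rate / fps,
        "test_cost_per_fp": test_hours * hourly_rate / fps,
        "overall_cost_per_fp": total_hours * hourly_rate / fps,
    }

# Hypothetical inputs: 440 FPs and 540 development hours, plus made-up
# testing/other hours, 70 calendar days, and a $100/hour rate.
m = project_metrics(fps=440, dev_hours=540, test_hours=100,
                    other_hours=60, calendar_days=70, hourly_rate=100)
```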

Note that the third step in this example is to supply an estimate of the project cost. Because we have the same developers, users, and managers working on a different part of the same project, isn't it reasonable to assume that the project velocity of earlier phases will hold for this new phase? For us, this is the heart of estimating new development work with FPs. Given this scenario of having the same developers, users, and managers, and working with the same technology on the same application, we're glad to take our chances estimating with FPs.
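Under that scenario, the estimate itself is simple arithmetic: scale the counted FPs by the growth observed last time, then divide by the measured velocity and multiply by the historical cost rate. The 300-FP count for the new phase is a made-up example; the 11%, 440/540, and $250 figures come from the first measurement.

```python
counted_fps = 300     # FPs counted from the new requirements (hypothetical)
growth = 0.11         # scope creep observed in the first phase
velocity = 440 / 540  # FPs per man-hour from the first phase
cost_per_fp = 250     # dollars per FP, our historical rate

expected_fps = counted_fps * (1 + growth)    # ~333 FPs after scope creep
estimated_hours = expected_fps / velocity    # ~409 man-hours
estimated_cost = expected_fps * cost_per_fp  # ~$83,250
```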

Now, if you suddenly change any of these factors, you can still use this same information for your estimates, but your estimate will probably vary somewhat. For instance, with our company, we've found that we can develop web applications much, much faster than we can develop Swing applications. Of course this is an over-simplification, but in general a simple web application conforming to the HTML 3.2 specification is much easier for us to develop, so our cost estimate and delivery times scale down for that environment.

Another factor you'll be able to measure is the impact of new technology and tools on your development costs. As we mentioned, we deliver Web applications much faster than Swing applications, so a 500 FP Web application will be developed faster than a 500 FP Swing application. Although the size (amount) of functionality we're delivering to the customer is equivalent, the technology that we're using to deliver the applications is different, and for us, web applications are much less expensive.

That being said, we've found that other factors, including project managers and customers, can also be a major influence on overall development time and cost. For instance, when it comes to customers, it's much easier to work with a small team of customers who agree on what they want an application to deliver than with a large committee with differing opinions. The large committee is going to take more time during the requirements process, and IMHO is going to be subject to larger scope creep during development.

In summary, given a little bit of time and statistics, your organization can rapidly begin applying FPs and collecting these same metrics. Over time, your cost and time estimates will get much more accurate. And, as you bring new technologies into your portfolio, you'll be able to look at these metrics and see the positive (or negative) impact of new technology.