Software Sizing During Requirements Analysis

INTRODUCTION

Software production has become one of the key activities of the industrialized world. Software applications are now the driving force of business, government operations, military equipment, and most of the services that we take for granted: electric power, water supplies, telephones, and transportation.

Most major companies and government agencies build or commission new software applications every year. But software development and software contracts have been very troublesome. Cost and schedule overruns are common, and litigation for software problems is a frequent outcome. Successful development of large software projects is so difficult that a significant percentage of large systems greater than 10,000 function points are canceled and never completed.

One of the major challenges of software cost and schedule estimation is “sizing” or predicting the amount of source code and other deliverables that must be built to satisfy the requirements of a software application. Sizing is a critical precursor to software cost estimating whether estimation is done manually or by means of a commercial software cost estimating tool.

For software applications that are similar to existing applications, size can be derived by analogy to the existing packages. When the application is a new kind of application, however, sizing by analogy is not feasible.

For much of the history of the software industry, sizing was considered a very difficult and intractable problem. Sizing is still difficult, but over the past 30 years an interesting new methodology for dealing with size prediction has been developed based on the use of the function point metric. This new methodology has the advantage that it can not only predict the volume of source code, but also the volumes of planning documents, specifications, user manuals, test cases, and even the probable number of errors or bugs that might be encountered.

Deriving Function Points From Software Requirements

The function point metric is a synthetic metric derived from five external aspects of a software application. Function points were developed by A.J. Albrecht of IBM, and placed in the public domain by IBM in 1979. Since the original publication, usage of function point metrics has spread throughout the world. The non-profit International Function Point Users Group (IFPUG) soon became the largest software metric association in the world, with more than 1000 members and affiliates in 24 countries. The non-profit International Software Benchmark Standards Group (ISBSG) has become the largest source of benchmark data, with more than 5000 projects available. New benchmarks are being added at a rate of perhaps 500 projects per year. All of the ISBSG data is based on function point metrics.

Although the actual rules for counting function points are complex, the underlying principles are straightforward. Counting function points for an application being sized and estimated consists of enumerating five major components:

The inputs to the application
The outputs that leave the application
The logical files maintained by the application
The kinds of inquiries that the application supports
The interfaces between the application and other applications

The details of how function points are actually counted are beyond the scope of this article. For a full discussion, refer to the books cited in the references and reading list at the end of the article.

What makes function point metrics suitable as the basis of sizing is the fact that they can be enumerated directly from a statement of the requirements of an application. Once the function point total of an application is calculated, sizing of most deliverable items becomes comparatively straightforward.
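
As a simple illustration (not a substitute for a formal count), the sketch below computes an unadjusted function point total from hypothetical component counts, using the standard IFPUG average complexity weights; a real count would classify each input, output, file, inquiry, and interface as low, average, or high complexity and then apply a value adjustment factor.

# A minimal sketch of an unadjusted function point calculation, assuming
# the standard IFPUG "average" complexity weights. The counts below are
# hypothetical and would normally be enumerated from the requirements.

AVERAGE_WEIGHTS = {
    "inputs": 4,          # external inputs
    "outputs": 5,         # external outputs
    "inquiries": 4,       # external inquiries
    "logical_files": 10,  # internal logical files
    "interfaces": 7,      # external interface files
}

def unadjusted_function_points(counts):
    """Sum each component count multiplied by its average weight."""
    return sum(counts[name] * weight for name, weight in AVERAGE_WEIGHTS.items())

# Hypothetical counts enumerated from a requirements document
counts = {"inputs": 30, "outputs": 25, "inquiries": 20, "logical_files": 10, "interfaces": 5}
print(unadjusted_function_points(counts))  # 460 unadjusted function points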

Function Points and Software Requirements

By fortunate coincidence, the structure of function point metrics is a good match to the fundamental issues that should be included in software requirements. In chronological order, seven fundamental topics should be explored as part of the requirements gathering process:

  1. The outputs that should be produced by the application.
  2. The inputs that will enter the software application.
  3. The logical files that must be maintained by the application.
  4. The entities and relationships that will be in the logical files of the application.
  5. The inquiry types that can be made to the application.
  6. The interfaces between the application and other systems.
  7. Key algorithms that must be present in the application.

The similarity between the topics that need to be examined when gathering requirements and those used by the functional metrics makes the derivation of function point totals during requirements a fairly straightforward task.

There is such a strong synergy between requirements and function point analysis that it would be possible to construct a combined requirements analysis tool with full function point sizing support as a natural adjunct, although the current generation of automated requirements tools is not quite at that point.

If full automation of both requirements and function points is to be possible, the requirements themselves must be fairly well structured and complete. Toward that end, in addition to the seven fundamental requirement topics, there are thirteen ancillary topics that should be resolved during the requirements gathering phase:

  1. The size of the application in function points and source code.
  2. The schedule of the application from requirements to delivery.
  3. The cost of the application by activity and also in terms of cost per function point.
  4. The quality levels in terms of defects, reliability, and ease of use criteria.
  5. The hardware platform(s) on which the application will operate.
  6. The software platform(s) such as operating systems and data bases.
  7. The security criteria for the application and its companion data bases.
  8. The performance criteria, if any, for the application.
  9. The training requirements or form of tutorial materials that may be needed.
  10. The installation requirements for putting the application onto the host platforms.
  11. The reuse criteria for the application in terms of both reused materials going into the application and also whether features of the application may be aimed at subsequent reuse by downstream applications.
  12. The use cases or major tasks users are expected to be able to perform via the application.
  13. The control flow or sequence of information moving through the application.

These 13 supplemental topics are not the only items that can be included in requirements, but none of them should be omitted by accident, since they can all have a significant effect on software projects.

The synergy between function points and software requirements is good enough so that it is now technically possible to merge the requirements gathering process and the development of function point metrics, and improve both tasks simultaneously.

In the future, automated tools that support both requirements gathering and function point analysis could add rigor and improve the speed of both activities. Since requirements gathering has been notoriously difficult and error-prone, this synergy could benefit the entire software engineering domain.

Once function point totals have been developed, the information can be used to size any and all software deliverables. Following are several examples.

Sizing Paper Deliverables With Function Points

Software development is a very paper-intensive occupation. As many as 50 different kinds of planning documents, specifications, control documents, and user manuals have been observed on large software systems.

It is interesting that the volumes of paper deliverables vary significantly from industry to industry. Military software projects produce far and away the largest volume of paperwork of any known kind of software. Indeed, the cost of producing paper documents for military software absorbs more than 50% of the total software cost.

Here are selected size examples, drawn from systems, MIS, military, and commercial software domains. In this context, "systems" software is that which controls physical devices such as computers or telecommunication systems. MIS software stands for "management information systems" and refers to the normal business software used by companies for internal operations. Military software constitutes all projects which are constrained to follow various military standards. Commercial software refers to ordinary packaged software such as word processors, spreadsheets, and the like.

Table 1: Number of Pages Created Per Function Point for Software Projects

                                Systems     MIS         Military    Commercial
                                Software    Software    Software    Software

User requirements               0.45        0.50        0.85        0.30
Functional specifications       0.80        0.55        1.75        0.60
Logic specifications            0.85        0.50        1.65        0.55
Test plans                      0.25        0.10        0.55        0.25
User tutorial documents         0.30        0.15        0.50        0.85
User reference documents        0.45        0.20        0.85        0.90

Total document set              3.10        2.00        6.15        3.45
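
To illustrate how Table 1 can be applied, the sketch below multiplies a hypothetical 1,000 function point MIS application by the pages-per-function-point ratios in the MIS column; the ratios come straight from the table, while the application size is invented for the example.

# A sketch of document sizing using the MIS software column of Table 1.
# The 1,000 function point application size is a hypothetical example.

PAGES_PER_FP = {
    "User requirements": 0.50,
    "Functional specifications": 0.55,
    "Logic specifications": 0.50,
    "Test plans": 0.10,
    "User tutorial documents": 0.15,
    "User reference documents": 0.20,
}

function_points = 1000  # hypothetical application size

for document, ratio in PAGES_PER_FP.items():
    print(f"{document}: about {function_points * ratio:.0f} pages")
print(f"Total document set: about {function_points * sum(PAGES_PER_FP.values()):.0f} pages")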

This kind of sizing for software documentation is now a standard feature of several commercial software cost estimating tools. At least one commercial software estimating tool can even predict the number of English words in the document set and the number of diagrams likely to be present, and can adjust the page-count estimates based on type size.

Since software is now a global commodity, some software cost estimating tools can also predict document volumes and translation costs for converting screens and text into various national languages such as Japanese, French, and German.

Sizing Source Code With Function Points

Another major sizing capability associated with function point metrics is the ability to predict source code size for any programming language, or even for applications that use two or more languages at the same time such as Java and HTML, or COBOL and SQL for older applications.

As of 2008 there are more than 700 programming languages and dialects in existence. There are far too many languages to do more than illustrate the concept, but source code sizing consists of predicting the number of logical statements that will be needed to encode one function point.

Table 2: Ratios of Logical Source Code Statements to Function Points

                                    Source Statements per Function Point
Language            Nominal Level       Low        Mean        High

Basic assembly           1.00           200         320         450
Macro assembly           1.50           130         213         300
C                        2.50            60         128         170
FORTRAN                  3.00            75         107         160
COBOL                    3.00            65         107         150
PASCAL                   3.50            50          91         125
ADA 83                   4.50            60          71          80
C++                      6.00            30          53         125
JAVA                     6.00            35          55         115
Ada 9X                   6.50            28          49         110
SMALLTALK               15.00            15          21          40
SQL                     27.00             7          12          15

Once the function point total of an application is calculated, it is possible to predict the approximate volume of source code in new applications. The normal method is to use the central or "mean" column in a table of ratios between source code statements and function points. For greater precision, adjustments can be made for complexity and for the programming styles of specific teams.
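
As a simple illustration of the table look-up approach, the following sketch sizes the code for a hypothetical application built in more than one language, using the mean values from Table 2; the application size and language mix are invented for the example.

# A sketch of source code sizing using the "Mean" column of Table 2.
# The application size and language mix are hypothetical; note that the
# mean column is roughly 320 divided by the nominal language level.

STATEMENTS_PER_FP = {"COBOL": 107, "SQL": 12, "JAVA": 55, "C++": 53}

def estimate_source_statements(function_points, language_mix):
    # language_mix maps each language to the fraction of the application
    # implemented in it; the fractions should sum to 1.0
    return sum(function_points * share * STATEMENTS_PER_FP[lang]
               for lang, share in language_mix.items())

# Hypothetical 1,000 function point application: 70% COBOL, 30% SQL
print(estimate_source_statements(1000, {"COBOL": 0.7, "SQL": 0.3}))
# roughly 74,900 + 3,600 = 78,500 logical statements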

Most commercial software cost estimating tools include source code sizing from function point metrics. Some cost estimating tools also offer more accurate size predictions than a simple table look-up based on average values. For greater accuracy, it is necessary to include adjustments for both complexity and reusable code, and to deal with applications that include multiple programming languages. All of these capabilities are now available in commercial software estimating tools.

Sizing Defects or Bugs With Function Points

The most time-consuming and expensive activities for software projects are those concerned with finding and removing bugs or defects. Thus, accurate cost estimation is built on the ability to predict both the number of potential defects that might occur and the costs and schedules of various defect removal operations.

One of the reasons why function point metrics are more useful than “lines of code” metrics for quality estimation is that function points can be used to predict bugs or defects in requirements, design, and user documentation as well as coding defects. Function points can also be used to predict “bad fixes” or secondary defects accidentally injected in defect repairs themselves.

Table 3: Average Defects per Function Point by Origin

Defect Origins              Defects per Function Point

Requirements                          1.00
Design                                1.25
Coding                                1.75
Document                              0.60
Bad Fixes                             0.40

Total                                 5.00

These numbers represent the total numbers of defects that are found and measured from early software requirements throughout the remainder of the life cycle of the software. The defects are discovered via requirement reviews, design reviews, code inspections, all forms of testing, and user-reported problem reports.
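
The averages in Table 3 can be applied in the same mechanical way. The sketch below estimates the defect potential by origin for a hypothetical 1,000 function point application; real estimates would also adjust for team experience, methods, and complexity.

# A sketch of defect potential sizing from the averages in Table 3.
# The 1,000 function point application size is a hypothetical example.

DEFECTS_PER_FP = {
    "Requirements": 1.00,
    "Design": 1.25,
    "Coding": 1.75,
    "Document": 0.60,
    "Bad fixes": 0.40,
}

function_points = 1000
for origin, ratio in DEFECTS_PER_FP.items():
    print(f"{origin}: about {function_points * ratio:.0f} defects")
print(f"Total defect potential: about {function_points * sum(DEFECTS_PER_FP.values()):.0f}")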

Sizing Test Case Volumes With Function Points

Another useful sizing capability associated with function point metrics is the ability to predict the number of test cases that are likely to be created for the application. Here too, there are major differences in test case numbers by industry, with military software and systems software producing much larger totals of test cases than information systems. There is now at least preliminary data available for all standard kinds of testing. Following are some representative examples:

Table 4: Number of Test Cases Created per Function Point

                                Systems     MIS         Military    Commercial
                                Software    Software    Software    Software

Unit test                       0.30        0.20        0.50        0.30
New function test               0.35        0.25        0.35        0.25
Regression test                 0.30        0.10        0.30        0.35
Integration test                0.45        0.25        0.75        0.35
System test                     0.40        0.20        0.55        0.40

Total test cases                1.80        1.00        2.45        1.65

Here too, test case sizing logic is now available in commercial software estimating tools. The total number of kinds of testing covered is now up to about a dozen discrete forms. The sizing logic for test cases in the best commercial software estimating tools also includes adjustments for enhancements rather than for new projects, for complexity factors, and for the impact of ancillary tools such as test case generators and test coverage analysis tools.
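
Here is a similar sketch that applies the systems software column of Table 4 to a hypothetical 1,000 function point application:

# A sketch of test case sizing using the systems software column of Table 4.
# The application size is hypothetical.

TEST_CASES_PER_FP = {
    "Unit test": 0.30,
    "New function test": 0.35,
    "Regression test": 0.30,
    "Integration test": 0.45,
    "System test": 0.40,
}

function_points = 1000
total = sum(round(function_points * ratio) for ratio in TEST_CASES_PER_FP.values())
print(f"Approximately {total} test cases across {len(TEST_CASES_PER_FP)} test stages")  # 1800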

Estimating Rules of Thumb Using Function Point Metrics

While accurate software cost estimating is a complex task involving scores of variable factors, a number of useful rules of thumb based on function point metrics are now available for quick estimates or “sanity checks” to be sure that software projects are within the boundaries of reasonable probability. Some of these useful rules of thumb include:

Requirements creep: Once the application’s requirements have been defined and agreed to, expect them to grow at an average rate of 2% for every calendar month that passes. Thus for a project with a two-year development schedule, in which roughly a year may elapse between the end of the requirements phase and delivery, about 24% of the delivered features will have been added after the requirements phase. (This rule was discovered by enumerating function points at the end of requirements and again at delivery of the software.)

Software schedules: Raise the function point total of the application to the 0.4 power. This rule yields the approximate number of calendar months needed to develop the application from the start of requirements until the first delivery to clients. Although this rule approximates U.S. averages, some methods such as Agile development call for lower exponents, such as 0.36 instead of 0.4. Military applications, on the other hand, may need an exponent of 0.45 due to the extensive oversight requirements that slow down military projects.

Development software staffing: Divide the function point total of the application by 150. This rule yields the approximate number of development personnel needed to build a software application.

Maintenance software staffing: Divide the function point total of the application by 1500. This rule yields the approximate number of maintenance personnel necessary to keep a software application up to date and repair minor defects once it has been deployed and is being used.

Software defects or bugs: Raise the function point total of the application to the 1.25 power. This rule will predict the approximate number of bugs or defects that will be found in all major deliverables: requirements errors, design errors, coding errors, documentation errors, and “bad fixes” or errors accidentally injected while repairing previous problems.

These rules of thumb have a significant margin of error, but they do provide a useful first-order approximation of some key software cost factors. For greater precision, some of the commercial software estimating tools can handle adjustments for experience, tools, formal methods, and other topics such as the maturity level of the organization based on the Software Engineering Institute’s Capability Maturity Model.
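
For readers who want to experiment with these approximations, the sketch below applies the rules of thumb above to a hypothetical 1,000 function point application; the exponents and divisors are taken directly from the rules as stated, and the results are first-order approximations only.

# A sketch of the rules of thumb above, applied to a hypothetical
# 1,000 function point application. These are sanity-check approximations,
# not a substitute for a full estimate.

function_points = 1000

schedule_months = function_points ** 0.4    # start of requirements to first delivery
dev_staff = function_points / 150           # approximate development personnel
maintenance_staff = function_points / 1500  # approximate maintenance personnel
defect_potential = function_points ** 1.25  # defects across all major deliverables
creep = 0.02 * function_points              # function points added per calendar month

print(f"Schedule: about {schedule_months:.0f} calendar months")            # ~16 months
print(f"Development staff: about {dev_staff:.1f} people")                  # ~6.7 people
print(f"Maintenance staff: about {maintenance_staff:.1f} people")          # ~0.7 people
print(f"Defect potential: about {defect_potential:.0f} defects")           # ~5,623 defects
print(f"Requirements creep: about {creep:.0f} function points per month")  # ~20 per month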

Summary and Conclusions

Sizing has been a challenge to software project managers since the software industry began. Sizing based on function point metrics is not perfect, but it is a useful method and better than older approaches such as guessing at lines of code. Indeed, function point sizing used with sophisticated estimating tools is accurate enough to serve as the basis for contracts and outsourcing agreements.

Suggested Readings on Function Points and Software Cost Estimating

Dreger, Brian; Function Point Analysis; Prentice Hall, Englewood Cliffs, NJ; 1989; ISBN 0-13-332321-8; 185 pages.

Galorath, Daniel D. & Evans, Michael W.; Software Sizing, Estimation, and Risk Management: When Performance is Measured Performance Improves; Auerbach, Philadelphia, PA; ISBN-10: 0849335930; 2006; 576 pages.

Garmus, David & Herron, David; Function Point Analysis; Addison Wesley, Boston, MA; ISBN 0-201-69944-3; 363 pages; 2001.

Garmus, David & Herron, David; Measuring the Software Process: A Practical Guide to Functional Measurement; Prentice Hall, Englewood Cliffs, NJ; 1995.

Harris, Michael D., Herron, David, and Iwanicki, Stasia; The Business Value of IT: Managing Risks, Optimizing Performance, and Measuring Results; CRC Press, Boca Raton, FL; ISBN 978-14200-6474-2; 2008; 266 pages.

IFPUG Counting Practices Manual, Release 4, International Function Point Users Group, Westerville, OH; April 1995; 83 pages.

Jones, Capers; Applied Software Measurement, 3rd edition; McGraw Hill, New York; ISBN 978-0-07-150244-3; March 2008; 575 pages.

Jones, Capers; Software Assessments, Benchmarks, and Best Practices; Addison Wesley Longman, Boston, MA, 2000; 659 pages.

Jones, Capers; Conflict and Litigation Between Software Clients and Developers; Version 6; Software Productivity Research, Burlington, MA; June 2006; 54 pages.

Jones, Capers; Estimating Software Costs; McGraw Hill, New York; 2nd edition, 2007; 644 pages; ISBN-13: 978-0-07-148300-1.

Jones, Capers; “The Economics of Object-Oriented Software;” American Programmer Magazine, October 1994; pages 29-35.

Jones, Capers; Software Quality – Analysis and Guidelines for Success; International Thomson Computer Press, Boston, MA; ISBN 1-85032-876-6; 1997; 492 pages.

Jones, Capers: “Sizing Up Software;” Scientific American Magazine, Volume 279, No. 6, December 1998; pages 104-111.

Kan, Stephen H.; Metrics and Models in Software Quality Engineering, 2nd edition; Addison Wesley Longman, Boston, MA; ISBN 0-201-72915-6; 2003; 528 pages.

Kemerer, C.F.; “Reliability of Function Point Measurement - A Field Experiment”; Communications of the ACM; Vol. 36; pp 85-97; 1993.

McConnell, Steve; Software Estimation – Demystifying the Black Art; Microsoft Press, Redmond, WA; ISBN-10: 0-7356-0535-1; 2006.

Parthasarathy, M.A.; Practical Software Estimation – Function Point Methods for Insourced and Outsourced Projects; Addison Wesley, Boston, MA; ISBN 0-321-43910-4; 2007; 388 pages.

Roetzheim, William H. and Beasley, Reyna A.; Best Practices in Software Cost and Schedule Estimation; Prentice Hall PTR, Saddle River, NJ; 1998.

Stutzke, Richard D.; Estimating Software-Intensive Systems – Projects, Products, and Processes; Addison Wesley, Boston, MA; ISBN 0-201-70312-2; 2005; 917 pages.

Author: Capers Jones is the President of Capers Jones & Associates LLC. He is also the founder and former chairman of Software Productivity Research, LLC (SPR), where he holds the title of Chief Scientist Emeritus. He is a well-known author and international public speaker, and has authored the books “Patterns of Software Systems Failure and Success,” “Applied Software Measurement,” “Software Quality: Analysis and Guidelines for Success,” “Software Cost Estimation,” and “Software Assessments, Benchmarks, and Best Practices.” Jones and his colleagues from SPR have collected historical data from more than 600 corporations and more than 30 government organizations. This historical data is a key resource for judging the effectiveness of software process improvement methods. The total volume of projects studied now exceeds 12,000. 

Copyright © 2008 by Capers Jones & Associates LLC. All Rights Reserved.

 


