Foreword
The standard economic definition for productivity for more than 200 years has been “goods or services produced per unit of labor or expense.” For software it was long difficult to perform valid economic studies because the “lines of code” metric is not a suitable economic unit for the “goods or services” that software provides.
As of 2010, there are more than 2,500 programming languages in existence. Many of these languages have no standard rules for even counting lines of code. The software literature that does attempt to count lines of code is divided between counts of physical lines and counts of logical statements. There can be as much as a 500% difference in apparent size between physical and logical code counts for many common programming languages. Some applications have as many as 15 different programming languages in use at the same time, and a majority of software applications have at least two programming languages in use at the same time. There are no standard rules for counting applications that use multiple programming languages concurrently.
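As a rough illustration of why physical and logical counts diverge, consider the minimal sketch below. The heuristics it uses (non-blank, non-comment lines for physical size; semicolons for logical statements) and the sample fragment are simplifying assumptions of the illustration, not any standard's counting rules.

```python
# Rough illustration only: naive heuristics, not any standard's counting rules.
SAMPLE = """\
/* sum the first n integers */
int sum(int n)
{
    int total = 0;
    for (int i = 1;
         i <= n;
         i++)
    {
        total += i;
    }
    return total;
}
"""

def physical_lines(source: str) -> int:
    """Count non-blank lines, skipping lines that are only comments (naive)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith(("/*", "//", "*"))
    )

def logical_statements(source: str) -> int:
    """Approximate logical statements by counting semicolons (naive)."""
    return source.count(";")

print("physical lines:    ", physical_lines(SAMPLE))
print("logical statements:", logical_statements(SAMPLE))
```

With these particular heuristics the same fragment yields roughly twice as many physical lines as logical statements, and equally plausible alternative heuristics would yield different ratios, which is exactly the ambiguity described above.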
More than half of the total effort devoted to software development is not concerned with coding itself, but rather with gathering requirements, with architecture, with design, with creation of user documents, with testing, with management, and with scores of other noncoding activities. None of these can be measured using “lines of code” metrics, but all can be measured using function point metrics.
Even worse, the “lines of code” metric tends to penalize modern high-level programming languages such as Java, C++, Ruby, and the like. This is because noncode development activities are a higher percentage of total effort with modern languages than they are with low-level languages such as Assembler or C. The bottom line is that “lines of code” metrics are essentially worthless for software economic studies. In fact, “lines of code” metrics are actually harmful because they violate the assumptions of standard economics.
In the 1970s, A.J. Albrecht and his colleagues at IBM developed the function point metric. This metric has since been used successfully for economic analysis of individual projects and for large-scale economic studies of industry segments and even national software studies. The function point metric has become the de facto standard for software economic analysis, for benchmarks of software productivity and quality, for studies of software portfolios, and for all serious measurement purposes.
In 1978, IBM put the function point metric into the public domain. A nonprofit organization of function point users was quickly created to share data and maintain the rules for counting function point metrics. This organization is the International Function Point Users Group, commonly identified as IFPUG. As of 2010, IFPUG has grown to become the largest software metric association in the world, with thousands of individual members and hundreds of corporations as members. There are IFPUG affiliates in about 25 countries, and the number is increasing each year.
To ensure accuracy and consistency in counting function points, the IFPUG organization has created formal counting rules. In addition, IFPUG has created a certification examination that is administered several times a year. Successful completion of the IFPUG certification examination has long been a criterion for consultants who count function point metrics.
Over time, as new kinds of software emerged, it has been necessary to update the function point counting rules to ensure that the rules encompass all known forms of software. The current IFPUG counting rules as of 2010 are version 4.3. The function point counting rules have also grown in size and sophistication. In 1978, the IBM counting rules consisted of about 15 pages of general guidance. Today, the IFPUG counting rules top 100 pages and include a number of detailed counting practices that need to be understood for accurate counts.
David Garmus, Janet Russac, and Royce Edwards have long been involved with function point analysis and with establishing the IFPUG counting rules. All have been members of IFPUG for more than 20 years and have served as officers and committee members. Their new book, Certified Function Point Specialist Examination Guide, is intended as a study guide for function point specialists who plan to take the IFPUG certification examination. Although a number of solid books on counting function points are available, this new book fills a gap in the function point literature by providing useful information on the specifics of becoming a certified function point counter. The authors are all qualified for the work at hand and indeed have contributed to the function point counting examinations.
It is interesting that after more than 60 years of usage, there has never been any kind of certification or examination for counting lines of code. Not only does the “lines of code” metric have serious economic flaws, but it also remains one of the most ambiguous metrics ever utilized by any engineering field. Code counting variations can cause apparent size differences of more than 10 to 1, which is an astonishing range of uncertainty.
By contrast, results of studies involving certified function point counters using standard test cases usually come within about 5% of achieving identical counts. This precision is about as high as any form of analysis based on learned skills. In fact, the accuracy of function point counting is higher than the accuracy levels noted for other analytical tasks such as preparing income taxes or preparing financial reports.
Function point metrics have become the de facto standard for software economic studies, in part because function points are valid for economic analysis and in part because function point metrics are based on formal counting rules and are supported by formal examinations and certification procedures. This new book by David Garmus, Janet Russac, and Royce Edwards fills an important niche in the function point literature.
Capers Jones
President, Capers Jones & Associates LLC