Case study – Software maintainability assessment methods

The following is a case study of a software maintainability assessment method. The organization using this method, Footech, and the Foo software are fictional. It was written as an exercise from the book “Managing the software enterprise” by Pat Hall and Juan Fernandez-Ramil.

1.  Introduction

In August 2009, management of Footech Ltd decided to conduct a maintainability assessment of its Java-based legacy software, Foo. This report was written in response to the request from the CEO of Footech, Mr. Bob Brown, to analyze the findings of this assessment and investigate the effectiveness of the current maintainability assessment method.

The report first outlines the maintainability assessment method and includes a maintainability index (MI) for Foo. It then, based on the results of the MI, discusses the maintenance requirements for each deficient aspect of the system and introduces commonly used maintenance tools. Third, the current maintainability assessment method is analyzed to determine its limitations and identify improvements. Finally, a commonly used MI model, the Coleman-Oman regression model, is examined as a consideration for future MI development. Conclusions are then drawn from these findings.


2.  Discussion

2.1  Maintainability assessment method 

The software maintainability assessment method currently used by Footech is based on a table of factors as shown below.

a.         Each factor is awarded a score between 0 and 10 by an engineer who knows the system, indicating how difficult the system is to maintain with respect to that factor (the higher the score, the poorer the maintainability). For example, a relatively old system may be awarded a score of 8 out of 10 to indicate that, due to its age, the system will be relatively difficult to maintain.

b.         Each factor will have been assigned a weighting between 0 and 10 by a group of experienced software engineers to indicate its importance to the overall maintainability of the system (the higher the weighting, the more important the factor).

c.         The scores for each of the factors assessed are then multiplied by the appropriate weighting and the resultant products are then summed to give an overall score which forms the maintainability measure (MM) of the system (the lower the score, the better the maintainability of the software system).

d.         If the overall score exceeds 300, maintenance action on the system is required.
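The four steps above amount to a simple weighted sum, sketched below. The class and method names, and the sample scores and weights in the usage note, are invented for illustration; only the 0 to 10 ranges and the 300 threshold come from the method itself.

```java
public class MaintainabilityMeasure {

    // Steps (a)-(c): multiply each engineer score by its expert weighting
    // and sum the products to obtain the maintainability measure (MM).
    static int mm(int[] scores, int[] weights) {
        int total = 0;
        for (int i = 0; i < scores.length; i++) {
            total += scores[i] * weights[i];
        }
        return total;
    }

    // Step (d): maintenance action is required when the MM exceeds 300.
    static boolean actionRequired(int mm) {
        return mm > 300;
    }
}
```

With three hypothetical factors scored {8, 6, 3} and weighted {9, 7, 4}, mm returns 8×9 + 6×7 + 3×4 = 126, below the 300 threshold.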

2.2 Maintainability assessment index

The following table presents the maintainability of the Foo software system according to the aforementioned maintainability assessment method.

| Factor | Weight | Actual Score | Weighted Score |
| Business Requirement Complexity |  |  |  |
| Application Complexity |  |  |  |
| Data Structures Complexity |  |  |  |
| Code Complexity |  |  |  |
| Change History Documentation |  |  |  |
| Business Documentation |  |  |  |
| Architectural Documentation |  |  |  |
| Code Annotation |  |  |  |
| Code Size |  |  |  |
| Release Frequency |  |  |  |
| Overall total MM |  |  |  |


2.3 Maintenance requirements

Footech’s maintainability assessment method calls for maintenance work to be carried out if the MM exceeds 300, so with ten factors examined, each factor should contribute no more than 30 on average. Six of the ten factors scored higher than 30, with the highest, application complexity, scoring more than double the acceptable level. These factors are individually examined below with a view to how the maintainability of each can be improved. Examples of tools that can assist maintainability are also given.

2.3.1 Application complexity

An excessive level of application complexity indicates that the Foo software architecture may be inappropriate for intended changes and that the system needs to be re-engineered to some extent. In order to reduce the complexity of an application, the software first needs to be understood. Without sufficient documentation, as in this case, reverse engineering is necessary to recover higher-level descriptions of the system for re-documentation. Reverse engineering tools include call graphs, which assist understanding of software processes by mapping the relationships between system subroutines, and execution tracers, which track execution paths through the software. Profiling tools, such as JProfiler, are also useful for calculating and visually indicating which parts of the program need to be optimized.

Main screen of JProfiler


After sufficient reverse engineering, the system can be forward engineered to produce an improved, restructured version of the same program with decoupled, cohesive modules. Complex modules are prone to error, require many tests, and are harder to understand and modify. Therefore, when forward engineering, the application design should divide modules so that each is of equal or near-equal complexity, with no module being overly complex. To achieve this, a measure of complexity must be established and design options evaluated against it; the option that yields near-equal measures for each module can then be selected. A suitable measure is cyclomatic complexity, a software metric developed by Thomas McCabe that counts the number of linearly independent paths through a program’s source code. Enerjy, a free plug-in for Eclipse, measures complexity via the system’s cyclomatic complexity number.

Enerjy Memory Profiler used within the Eclipse IDE


There are some basic change processes that must be followed while forward engineering. These include configuration management, to ensure compatible software versions, and release planning, which prioritises changes to the system. Regression testing is a critically important process in software evolution to ensure previously implemented functionality still works after bug-fixes. 
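McCabe’s measure, discussed above, equals one plus the number of decision points in a routine. The sketch below approximates it by counting decision keywords and operators in a source string; this keyword-counting shortcut is a deliberate simplification of what a real tool such as Enerjy does by parsing the syntax tree.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CyclomaticSketch {

    // Decision points that each add one linearly independent path.
    private static final Pattern DECISIONS =
            Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    // Approximate cyclomatic complexity: 1 + number of decision points.
    static int complexity(String source) {
        Matcher m = DECISIONS.matcher(source);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return 1 + count;
    }
}
```

A straight-line statement scores 1; a body containing an `if` with a `&&` condition and a `for` loop scores 4, suggesting it is a candidate for splitting.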

2.3.2 Code annotation

To make programming tasks simpler, source code annotation provides a way of adding metadata to code that is available to the program at runtime. In Java EE 5, it enables code reduction by injecting dependencies, resources, services, and life-cycle notifications into the application.

In addition, code annotation makes it possible for a Footech team member to view the complete history of the current code lines in one view, with details including the developer who wrote each line, the date and time, and a link to other files checked in at the same time. This makes it valuable for team development and for fixing bugs in legacy code such as Foo, as there is an easy means of collaborating with the writer of the code and a link to possibly related files. Eclipse has strong support for annotation, and an added benefit of using annotations in Eclipse is that the Eclipse Annotation Processing Tool can be used to generate files and compile new Java classes based on annotations found in the source code.
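The runtime-metadata idea above can be sketched in plain Java SE. The `ChangeInfo` annotation and its fields are invented for this example (Java EE 5 supplies its own annotations for injection); the point is that RUNTIME retention makes the metadata readable through reflection while the program runs.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationDemo {

    // Hypothetical annotation recording change-history metadata directly
    // on the code it describes.
    @Retention(RetentionPolicy.RUNTIME)
    @interface ChangeInfo {
        String author();
        String date();
    }

    @ChangeInfo(author = "jsmith", date = "2009-08-12")
    static void legacyRoutine() {
    }

    // Look up the recorded metadata for a named method via reflection.
    static String historyOf(String methodName) {
        try {
            Method m = AnnotationDemo.class.getDeclaredMethod(methodName);
            ChangeInfo info = m.getAnnotation(ChangeInfo.class);
            return info.author() + " on " + info.date();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Calling `historyOf("legacyRoutine")` yields "jsmith on 2009-08-12", the kind of per-line authorship detail described above.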

2.3.3 Change history documentation

Under Footech’s QA program, in order to produce high-quality software, programmers must follow a strict set of coding standards. One reason maintaining change history documentation is important is to ensure that these quality assurance practices are followed properly. Therefore, Foo’s source code should be managed so that each modification performed is traceable. Improved tracking of detailed history per module allows management to better identify risks, resolve issues, and improve project planning.

The Perforce source control management system provides access to versioned files by treating each change made to code as a submission, where a change-list is filled out with a change description. Each item in a change-list is associated with a module of the project to allow traceability and enable monitoring of coding practices. Repository code is easily accessed via quick links from its associated “story”, or module, and all changes can be monitored from a dashboard. Additionally, changes can be visualized through dynamically generated, customizable reports. All this enforces accountability for Footech engineers, fostering quality software development procedures.

Perforce Source Control Management change submission

2.3.4 Data structures complexity

The choice between an efficient and an inefficient algorithm can make the difference between a practical and an impractical solution to a problem, so it is important that the resources used during execution of the Foo program can be measured. Complexity theory provides a means of measuring the resources needed for a computation to solve a given problem. According to this theory, the complexity of a data structure relates directly to how much time and space (computer memory) an algorithm using it requires, that is, how efficient it is. Time complexity is the number of steps involved in solving a problem, and space complexity is the number of elementary objects that a program needs to store during its execution. Big-theta notation is commonly used to express time and space complexity: all constant factors are removed from a function so that the running time can be estimated in relation to N as N approaches infinity, allowing users to concentrate on the growth rate of the algorithm. In algorithm analysis, it is common to classify algorithms according to the shapes of their growth curves, normally based on worst-case analysis. Using these classifications to analyze Foo’s data structures can assist with the software’s maintainability.
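The growth rates described above can be made concrete by counting comparisons. In the sketch below (the step counters are illustrative instrumentation, not part of the algorithms), finding the last value in a sorted 16-element array takes 16 linear-search comparisons but only 5 binary-search comparisons: O(n) versus O(log n) worst-case behaviour.

```java
public class ComplexityDemo {

    // Build a sorted array {0, 1, ..., n-1} to search over.
    static int[] range(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        return a;
    }

    // Linear search: O(n) comparisons in the worst case.
    static int linearSteps(int[] sorted, int target) {
        int steps = 0;
        for (int v : sorted) {
            steps++;
            if (v == target) break;
        }
        return steps;
    }

    // Binary search: O(log n) comparisons in the worst case.
    static int binarySteps(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1, steps = 0;
        while (lo <= hi) {
            steps++;
            int mid = (lo + hi) / 2;
            if (sorted[mid] == target) break;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return steps;
    }
}
```

Doubling the array size adds only one step to the binary search but doubles the linear search, which is why the asymptotic class, not the constant factors, dominates for large inputs.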

2.3.5 Architectural documentation

An architectural document lays out the general requirements that would motivate the existence of a routine. This would include Foo’s major software components and their interactions, a description of Foo’s hardware and software platforms, and a justification of how the architecture meets requirements. “A good architecture document is short on details but thick on explanation” (2009, Software documentation).

Common problems involved in creating good architecture documentation are fragmentation of documentation, non-standard modeling conventions, duplication, and inconsistent information. A pragmatic solution would be to use a wiki, such as Confluence, together with a UML tool, such as Sparx Systems Enterprise Architect, to facilitate knowledge management and documentation of the Foo software. Another benefit of using Enterprise Architect is that it also includes version control, supporting change history documentation.

Enterprise Architect UML


2.3.6 Code complexity

Code complexity can be reduced by using object-oriented programming (OOP) techniques such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance, which help build a flexible and scalable application. Footech programmers who are educated in the principles of OOP can identify code deficiencies and implement refactoring when necessary. Because refactoring does not change the behaviour of the system, the process may be seen by some as a drain on resources, but it is in fact conducive to faster programming and essential for building a quality system. Ideally, refactoring should be included in Footech’s normal activities. Extreme programming fosters a refactoring culture and is designed to adapt to changes; key practices include iterative development, self-testing code, and pair programming (with one programmer acting as class writer and the other as class user) to encourage evolutionary design.
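One classic behaviour-preserving refactoring of the kind described above replaces a type-code conditional with polymorphism. The shape classes below are invented for the example; the point is that both versions compute the same results, but the refactored one lets a new case be added without editing a central conditional.

```java
public class RefactorDemo {

    // Before refactoring: behaviour selected by a brittle type code.
    static double areaByTypeCode(int typeCode, double size) {
        if (typeCode == 0) return size * size;   // square
        return Math.PI * size * size;            // circle
    }

    // After refactoring: each subclass owns its own behaviour (polymorphism),
    // so adding a new shape no longer means editing a shared conditional.
    interface Shape {
        double area();
    }

    static class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    static class Circle implements Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }
}
```

Regression tests comparing the old and new paths, as discussed in 2.3.1, are what make such a refactoring safe.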


2.4 Limitations of current assessment method
The current maintainability assessment method requires that a score be given to each factor by “an engineer who knows the system”, but it doesn’t specify how the Footech engineer should gain that knowledge of the system in order to rate it. Without this specification, the engineer might be tempted to base the score on personal experience with the system, introducing the risk of human error. Moreover, Footech engineers are given the difficult task of rating factors, relative to each other, based on their importance to the system’s maintainability. This method is overly subjective, relying solely on opinion. Cognitive factors can influence how people answer questions; for example, later answers can be influenced by earlier ones. Engineers may not make the mental effort required to recall all the relevant information, and worse still, an engineer’s attitude toward a factor may not even exist in a coherent form. In any case, considering the scale of the factors involved, accurate estimation of Foo’s maintainability is beyond the ability of even experienced engineers.

To reduce this uncertainty, a more objective approach would be to analyze Foo’s source code and system metrics to better facilitate understanding of the underlying software. Metrics are useful for estimating effort, both already expended and expected in the future, and almost any metric is more useful than none at all. Using design metrics to investigate software trends in the system provides a better indication of software quality, and the discovery of trend correlations leads to fact-based, informed decisions about preventative maintenance. Conveniently for programmers using a metric-based MI, tools exist to automatically recalculate the maintainability score whenever code changes are made.

A repository storing Foo’s software history should be kept, which can be accessed by a software trend analysis tool such as Solid Trend Analyzer from Solid Source. This provides graphs and diagrams based on metric data, such as the rate at which edited files are added, the proportion of complex files in the system, and the size of frequently changed files. Such information is invaluable for cost reduction, quality improvement, and decision-making support.

Addition rate of edit files


Another useful indicator of maintainability for the Foo application is the Object-oriented Metrics Suite, originally put forward by Chidamber & Kemerer, which consists of six metrics for each class in an application. It utilizes measurement theory to improve object-oriented design and development processes, and provides structural measures as indicators of maintainability. For example, the Coupling Between Objects (CBO) measure is the count of classes to which a class is coupled, with a higher CBO indicating more difficult testing, maintenance, and reuse.
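CBO can be roughly approximated by counting the distinct classes a class references. The sketch below looks only at declared field types, which is a simplification (the full Chidamber and Kemerer metric also counts coupling through method calls), and the sample classes are invented.

```java
import java.lang.reflect.Field;
import java.util.HashSet;
import java.util.Set;

public class CboSketch {

    // Rough CBO approximation: distinct non-primitive, non-JDK types
    // appearing among a class's declared fields.
    static int fieldCoupling(Class<?> c) {
        Set<Class<?>> coupled = new HashSet<>();
        for (Field f : c.getDeclaredFields()) {
            Class<?> t = f.getType();
            if (!t.isPrimitive() && !t.getName().startsWith("java.")) {
                coupled.add(t);
            }
        }
        return coupled.size();
    }

    // Invented sample classes: Car is coupled to Engine and Wheel (CBO 2);
    // the primitive field does not count.
    static class Engine { }
    static class Wheel { }
    static class Car {
        Engine engine;
        Wheel front;
        Wheel rear;
        int doors;
    }
}
```

A class reporting a high count here would be flagged, as the text notes, as harder to test, maintain, and reuse.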

Although the aforementioned methods have been shown to improve the maintainability of a system, and the current maintainability assessment method would benefit from incorporating some of them, there are still unresolved issues with software maintainability, especially those that arise when software components are built with different programming languages and technologies. The current assessment method doesn’t consider this variability of languages and technologies, and could be improved if it were taken into account. Other indicators to consider for maintainability include the mean time taken to fix a defect, the backlog of user requests, and the ratio between initial development and defect-fixing costs.

2.5 Maintainability Index structure

In order to improve Footech’s current maintainability assessment method, it is important to examine ways in which a maintainability index can be structured. The most commonly used model for determining the maintainability index of a software system is the Coleman-Oman regression model, developed in the 1990s at the University of Idaho and shown in the polynomial expression below.

MI = 171 − 5.2 × ln(aveV) − 0.23 × aveV(g’) − 16.2 × ln(aveLOC) + 50 × sin(√(2.4 × perCM))

where:
  • aveV is the average Halstead Volume per module.
  • aveV(g’) is the average extended cyclomatic complexity per module.
  • aveLOC is the average lines of code per module.
  • perCM is the average percent of lines of comment per module.

Source: Liso, A. (August, 2001)

It was subsequently determined that this model was not satisfactory because comment blocks, which typically do not influence the maintainability of an application, were being included as lines of code. A second model was developed, derived from the first, that removes code comments from the equation, as below.

MI = 171 − 5.2 × ln(aveV) − 0.23 × aveV(g’) − 16.2 × ln(aveLOC)
This maintainability index is based on Halstead’s effort metrics, cyclomatic complexity (as described earlier), and lines of code. It attempts to “objectively determine the maintainability of software systems based upon the status of the source code” (Oman et al.) and can be calculated at method, class, package, and system level. Hewlett-Packard validated the index in the field and determined that, on a scale from 1 to 100, modules scoring below 65 are considered difficult to maintain. The index has been successfully tested on large-scale military and industrial systems. It is interesting to note the high correlation between modern-day system metric tools, as examined earlier, and the principles used in the Coleman-Oman regression model.
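Given the four average metrics per module, the index is straightforward to compute. The sketch below assumes the four-metric form of the model as quoted above (171 − 5.2 ln(aveV) − 0.23 aveV(g’) − 16.2 ln(aveLOC) + 50 sin √(2.4 perCM)); the class and method names are invented.

```java
public class MaintainabilityIndex {

    // Four-metric Coleman-Oman maintainability index, including the
    // comment term 50 * sin(sqrt(2.4 * perCM)).
    static double mi(double aveV, double aveVg, double aveLOC, double perCM) {
        return 171
                - 5.2 * Math.log(aveV)
                - 0.23 * aveVg
                - 16.2 * Math.log(aveLOC)
                + 50 * Math.sin(Math.sqrt(2.4 * perCM));
    }

    // Second model: the same polynomial with the comment term removed.
    static double miNoComments(double aveV, double aveVg, double aveLOC) {
        return 171
                - 5.2 * Math.log(aveV)
                - 0.23 * aveVg
                - 16.2 * Math.log(aveLOC);
    }
}
```

Note that larger Halstead volumes, higher cyclomatic complexity, and more lines of code all drive the index down, toward the "difficult to maintain" region below 65.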

The first component of the model, the Halstead metrics, is based primarily on the numbers of operators and operands in a system, indicating how complex the application’s statements are. Measurements include Halstead Length, Vocabulary, Volume, Difficulty, Effort, and Bugs. These measurements provide valuable insight into which areas of the application need to be modified: Halstead Vocabulary, for example, counts the number of distinct operators and operands, while Halstead Difficulty estimates how error-prone the code is from the proportion of distinct operators and the reuse of operands. The measurements can also be combined to characterize various aspects of an application’s complexity. For example, a small Halstead Length (the total count of operators and operands) combined with a high Halstead Volume suggests that individual statements are overly complex.
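The basic Halstead quantities follow directly from the operator and operand counts (n1 and n2 distinct, N1 and N2 total). In practice those counts come from a parser, so the sketch below simply takes them as inputs; the class name is invented.

```java
public class HalsteadSketch {

    // Length N = N1 + N2: total occurrences of operators and operands.
    static double length(int N1, int N2) {
        return N1 + N2;
    }

    // Vocabulary n = n1 + n2: distinct operators plus distinct operands.
    static double vocabulary(int n1, int n2) {
        return n1 + n2;
    }

    // Volume V = N * log2(n): program size in bits of information.
    static double volume(int n1, int n2, int N1, int N2) {
        return length(N1, N2) * (Math.log(vocabulary(n1, n2)) / Math.log(2));
    }

    // Difficulty D = (n1 / 2) * (N2 / n2): proxy for error-proneness.
    static double difficulty(int n1, int n2, int N2) {
        return (n1 / 2.0) * ((double) N2 / n2);
    }

    // Effort E = D * V, the quantity used by the maintainability index.
    static double effort(int n1, int n2, int N1, int N2) {
        return difficulty(n1, n2, N2) * volume(n1, n2, N1, N2);
    }
}
```

For instance, 4 distinct operators and 4 distinct operands occurring 10 and 6 times respectively give a vocabulary of 8, a length of 16, and therefore a volume of 16 × log2(8) = 48.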

Metric measurements become more difficult to calculate where the data is more semantic in nature, such as determining the appropriateness of data structures or the meaningfulness of documentation. Therefore, as with Halstead Effort and Bugs, which cannot be derived from code analysis alone, a certain amount of estimation is required. Pizka et al. propose that one possible solution may be to use a broader “quality model” tree instead of a maintainability index. This would include a technical dimension of “maintainability” as a top-level quality attribute of a system, with more concrete attributes like “analyzability” on lower levels. The values determined by the metrics are then aggregated towards the root of the tree to obtain values for the higher levels.
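The aggregation step described above can be sketched as a small tree in which leaves hold measured metric values and inner attributes derive their value from their children. Averaging is an assumption of this sketch, not something prescribed by the quality-model authors, and the node names are invented.

```java
import java.util.Arrays;
import java.util.List;

public class QualityTree {

    final String name;
    final double leafValue;
    final List<QualityTree> children;

    // A node with no children is a leaf carrying a measured metric value;
    // a node with children is a quality attribute such as "analyzability".
    QualityTree(String name, double leafValue, QualityTree... children) {
        this.name = name;
        this.leafValue = leafValue;
        this.children = Arrays.asList(children);
    }

    // Leaves report their measurement; inner attributes aggregate
    // (here: average) the values of their children toward the root.
    double value() {
        if (children.isEmpty()) return leafValue;
        return children.stream()
                .mapToDouble(QualityTree::value)
                .average()
                .getAsDouble();
    }
}
```

A root "maintainability" node with an "analyzability" subtree (metrics 0.6 and 0.8) and a direct "code complexity" metric of 0.5 would aggregate to (0.7 + 0.5) / 2 = 0.6.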

3.  Conclusions

1.      According to Footech’s maintainability index there are six aspects of the Foo software system that require maintenance. These are: application complexity, code annotation, change history documentation, data structures complexity, architectural documentation, and code complexity. Maintaining each of these factors involves specialized requirements and tools.

2.      Footech’s maintainability assessment method is overly subjective.

3.      The maintainability assessment could be improved by using design metrics to judge the quality of the system design and identify candidates for preventative maintenance.

4.      The Coleman-Oman regression model is a well-tested, commonly used method for determining the maintainability index of a system. It uses Halstead metrics and McCabe’s cyclomatic complexity, along with other factors, to determine an indicator of a systems overall maintainability.

5.      Functionality of many modern-day system metrics analysis tools is based on techniques used in the Coleman-Oman regression model for developing a maintainability index.

Note: Thanks to Bruce Ludgate for pointing out that there are no references. All references are pending until I can find the original essay.

