Book Review: Object-oriented Software Metrics

Working for a client in Berlin, I find that plane time is when I normally catch up on some reading. Services like Read It Later make bookmarking online pages for offline reading a pleasure. On this morning’s trip, I finished reading a book Michael Feathers tweeted about. Titled “Object-Oriented Software Metrics” and published 15 years ago, the book was easiest to find through an online second-hand book store, and I have to say I enjoyed many aspects of it.

I wondered how interesting a metrics book could be, but the author did well to keep this short book punchy and brief. I enjoyed the conversational style of the writing and the pragmatic nature of his recommendations, such as “I put the threshold at zero so that we explicitly consider any violations.” He starts the book by describing metrics and arguing that they should be used for a real purpose, not just randomly collected, a view I’m pleased to say resonates very well with a chapter I’m contributing to a book. It’s obvious he comes from applying metrics with real purpose in the real world, giving examples of where particular metrics might drive design choices or further investigation.

The author divides the metrics into two sections: the first focuses on metrics related to estimating and sizing, or project planning; the second on design metrics related to code. The estimation metrics piqued my interest as a reflection of how estimation methods used to be run, or maybe in some places still are, such as Developer Days per Line of Code, or his suggested alternative, Developer Days per Public Responsibility. That said, I think the second set proved more relevant to me.

The author shares some of his preferred metric thresholds and they, too, resonate strongly with my own views on the size of methods, the number of instance variables in classes, and so on. If anything, his thresholds are more extreme than mine, such as six message sends per method, where my preferred number sits between 5 and 10 depending on the team I’m working with. Part of this, as the author emphasises, is heavily influenced by the programming language of choice.
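The book’s examples are Smalltalk-flavoured, but the idea translates. As a rough illustration (a sketch of my own, not from the book), you could approximate a “message sends per method” check in Python by counting call expressions per function with the standard ast module:

    import ast
    import sys

    THRESHOLD = 6  # the book's suggested ceiling for message sends per method

    class CallCounter(ast.NodeVisitor):
        """Count call expressions per function, a rough proxy for message sends."""

        def __init__(self):
            self.results = []  # (function name, line number, call count)

        def visit_FunctionDef(self, node):
            # ast.walk includes nested functions, so their calls count toward
            # the enclosing function as well; good enough for a rough check.
            calls = sum(isinstance(child, ast.Call) for child in ast.walk(node))
            self.results.append((node.name, node.lineno, calls))

    if __name__ == "__main__":
        counter = CallCounter()
        counter.visit(ast.parse(open(sys.argv[1]).read()))
        for name, lineno, calls in counter.results:
            if calls > THRESHOLD:
                print(f"{sys.argv[1]}:{lineno}: {name} makes {calls} calls (> {THRESHOLD})")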

Few of the metrics discussed were new to me, having made use of tools like Checkstyle and PMD, although he covers several I’ve not really tracked, such as the number of classes thrown away, the number of times a class is reused, and the number of times a class is touched, all of which I’d like to ponder a lot more. One metric I’ve never considered collecting or tracking is the number of problems or errors reported per class/module, though I suspect the overhead of tracking this may outweigh the benefit it brings because it’s much harder to automate.
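For what it’s worth, a “number of times a class is touched” count is fairly cheap to approximate from version control history. A small sketch of my own (assuming a git repository of Java source; renames are not followed, so a moved class starts a fresh count):

    import subprocess
    from collections import Counter

    def touch_counts(repo_dir="."):
        """Count how many commits touched each .java file in a git repository."""
        # --name-only lists the files changed by each commit, one per line;
        # the empty --pretty format suppresses the commit headers themselves.
        log = subprocess.run(
            ["git", "log", "--name-only", "--pretty=format:"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line for line in log.splitlines() if line.endswith(".java"))

    if __name__ == "__main__":
        for path, touches in touch_counts().most_common(10):
            print(f"{touches:5d}  {path}")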

His emphasis on the factors that influence code metrics also got me reflecting on my own experiences, with which it once again strongly resonated. His mention of key classes echoes the domain model described in Eric Evans’ Domain-Driven Design. I would also anecdotally agree that the type of UI heavily influences the number of supporting classes, with technical interfaces (i.e. APIs) requiring fewer classes than rich GUIs. I like many of the distribution graphs he uses and will definitely consider using them in the future.

I’d highly recommend this book if you’ve never really sat down and thought about using code metrics on your projects. It’s got me thinking about a number of interesting side projects around visualisation and further feedback loops on projects.

2 Comments

  1. Greg Wilson

    I enjoyed Lorenz & Kidd when it came out, but then I read:

Khaled El Emam, Saida Benlarbi, Nishith Goel, and Shesh N. Rai: “The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics.” IEEE Transactions on Software Engineering, 27(7), July 2001.

The abstract doesn’t do it justice, so I’ll summarize. Previous work to validate software metrics (cyclomatic complexity, coupling, etc.) has looked at correlations between metric values and things like post-release bug count. The authors repeated those experiments using bivariate analysis so that they could allocate a share of the blame to code size (measured by number of lines) and to the metric in question. It turns out that code size accounted for all of the variation, i.e., the metrics didn’t have any actual predictive power once you normalized for the number of lines of code.

There’s a good summary of current research in Hassan and Herraiz’s chapter in “Making Software” (O’Reilly, 2010, ISBN 9780596808303) that reinforces El Emam et al.’s finding.

  2. Patrick

    Hi Greg,

    Thanks for the detailed summary and recommendation. I’ll definitely have to look it up.
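To make the confounding effect described above concrete, here is a small simulation (a sketch of my own, not taken from the paper): bug counts depend only on lines of code, yet a metric that merely correlates with size looks predictive until size is controlled for.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000

    # Simulated world: bug counts are driven purely by class size (LOC), and
    # the "design metric" merely correlates with size, adding no signal of its own.
    loc = rng.lognormal(mean=4.0, sigma=1.0, size=n)
    metric = 0.05 * loc + rng.normal(0.0, 2.0, size=n)
    bugs = rng.poisson(lam=0.01 * loc)

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    def residuals(y, x):
        # Remove the linear effect of x (size) from y.
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    print("raw corr(metric, bugs):", round(corr(metric, bugs), 2))  # looks predictive
    print("corr controlling for LOC:",
          round(corr(residuals(metric, loc), residuals(bugs, loc)), 2))  # near zero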

1 Pingback

  1. Tweets that mention thekua.com@work » Book Review: Object-oriented Software Metrics -- Topsy.com
