September 10, 2011

Do managers have bigger brains?

Within the last couple of days, I have been intrigued by a news item that said that managers have bigger brains. I then used Google to find the related articles. The purpose of this post is not to analyze whether or not "managing" results in a bigger brain, or even the implications of this discovery. The focus of this post is to analyze how simply using natural language to describe the result of a study can distort what is being communicated, and then to apply this analysis to requirements analysis in software testing.

Managers have bigger brains: Since this study was conducted by University of New South Wales researchers, I searched the UNSW website and found the article here. My comments were:
1. I found the following text at this link "UNSW researchers have, for the first time, identified a clear link between managerial experience throughout a person’s working life and the integrity and larger size of an individual’s hippocampus – the area of the brain responsible for learning and memory – at the age of 80." So, they are not talking about the entire human brain but only one of its parts.
2. The article talks about finding a relationship between the size of the hippocampus and the number of employees managed. It does not state the exact relationship.
3. Per the article, the researchers used MRI on a sample of people aged 75 to 92. Around its middle, the article moves on to the relationship between exercise and learning, and to other topics covered in the symposium where this study was presented.

This news item also appeared on other websites such as MSN India here.
4. I found the text "Staffers agree or not, managers do have bigger brains, says a new study." The prior article made no mention of staff agreeing that their manager has a bigger brain. So, did the research actually take the opinions of the subjects' staff into account?
5. This news item says "Researchers, led by the University of New South Wales, have, in fact, for the first time, identified a clear link...". The previous article just mentions "UNSW researchers". So, were there teams from elsewhere involved in the research?

What can we take away from the analysis?
a. It is possible for people to over-generalize or over-specialize a description. So, we should probe further to find out the caveats and exceptional conditions. For example, a requirement may say "The user account shall be locked after 3 unsuccessful attempts". On probing further, we may find that this is true for all users, except the administrator user. The system may be rendered inoperable if the administrator gets locked out :) (See the sketch after this list.)
b. It is possible for people to just provide references to other information, without naming it explicitly. Instead of relying on our assumptions, we should ask for the reference directly. For example, a requirement may say "Other requirements implemented in the previous release shall continue to be supported". We should ask which release is meant - the last major release, the last release (major or minor), the first release, or some other release? If there is a conflict between the requirements of this release and the "previous release", is it okay for the requirements of this release to take priority?
c. We should question the source of the requirement. Has it come directly from the customer or a systems analyst, or is it just a suggestion from somebody? In other words, is it a genuine requirement? For example, someone may suggest making reports completely configurable. Letting any user come up with any format of a report may work against the client's objectives and lead to confusion, so the suggestion should be declined. Of course, suggestions made by management that are infeasible to implement need to be declined tactfully.
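
To make the caveat in point (a) concrete, here is a minimal sketch in Python. The Account class, the MAX_FAILED_ATTEMPTS constant, and the test values are all my own illustrative assumptions, not part of the original requirement; the point is simply that probing the requirement turns the administrator exception into something directly testable.

```python
# Minimal sketch (hypothetical names) of the lockout caveat from point (a):
# ordinary accounts lock after 3 failed attempts, but the administrator
# account is exempt, so the system cannot be rendered inoperable.

MAX_FAILED_ATTEMPTS = 3  # assumed limit taken from the example requirement

class Account:
    def __init__(self, username, password, is_admin=False):
        self.username = username
        self._password = password
        self.is_admin = is_admin
        self.failed_attempts = 0
        self.locked = False

    def log_in(self, password):
        if self.locked:
            return "account locked"
        if password == self._password:
            self.failed_attempts = 0  # reset the counter on success
            return "success"
        self.failed_attempts += 1
        # The caveat uncovered by probing: the administrator is exempt.
        if not self.is_admin and self.failed_attempts >= MAX_FAILED_ATTEMPTS:
            self.locked = True
        return "wrong password"

# These checks would fail under a literal reading of the requirement
# ("all users are locked after 3 attempts") but pass once the
# administrator exception is made explicit.
admin = Account("admin", "s3cret", is_admin=True)
for _ in range(5):
    admin.log_in("wrong")
assert not admin.locked  # the administrator must never be locked out

user = Account("alice", "pa55word")
for _ in range(3):
    user.log_in("wrong")
assert user.locked  # ordinary users lock after 3 failures
```

Note how the exception becomes an explicit test case: a tester who never asked "does this apply to the administrator too?" would never write the first assertion.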
