Having spent a considerable amount of time as a student at a few academic institutions around the globe, and having read a substantial number of management-related articles, I would like to suggest the following hypotheses about academic life, as a discussion of academia, especially in the areas of management, organizational psychology, and organizational sociology.
Here goes.
H1 : Some research articles contain recurring issues with sample choice, data collection, and/or data coding. Some authors are unaware of these issues, some make them by mistake, and some choose to submit regardless and hope for the best.
H2 : Some researchers choose odd statistical tests to evaluate their hypotheses, or report their test results, and interpret what those results really mean, in strange ways.
H3 : Few researchers really have a full understanding of the output produced by statistical software.
H4 : Few researchers have actually read all the articles they cite in their own.
H5 : Few researchers have read much outside their specific research field (from micro OB to macro OB to strategy, sociology, psychology, HRM, economics, political science, and so on).
H6 : Of those researchers who have, even fewer hold positive views of research done in other fields, or see any relevance in it to their own work.
H7 : Few researchers have really taken the time to read most of the classics in their own field (dating back to the "beginning of time" — in management, since the 20th century, maybe the 30s or 40s, depending on whom you ask).
H8 : Of those who have, very few see actual relevance in the classics to their actual work done today.
H9 : Most researchers who are statistical software experts (be it SPSS, SAS, or any other package) can find results supporting their hypotheses in almost any garbage or random data they collect.
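H9 is easy to illustrate. A minimal sketch (hypothetical numbers, standard library only): if you run enough "studies" on pure noise, the conventional p < .05 threshold guarantees that some of them will look significant by chance alone — and reporting only those is exactly the selective reporting H12 describes.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Simulate 100 "studies": both groups are drawn from the SAME
# distribution, so any |t| > 1.96 (roughly p < .05) is a false positive.
false_positives = 0
for _ in range(100):
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    if abs(t_stat(group_a, group_b)) > 1.96:
        false_positives += 1

# Roughly 5 of 100 null "studies" will clear the bar by chance.
print(false_positives)
```

Run enough tests on enough variables and something will always "work"; the expert part is knowing how to slice the data until it does.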
H10 : Of those who have taken the time to read the articles they cite, some have misunderstood the original intent of the cited article, or have mistakenly or intentionally "massaged" its meaning to fit their hypotheses.
H11 : Few researchers have really read and understood the entire article they approved or rejected as reviewers for a journal submission.
H12 : Most researchers report only selective parts of their results, usually limited to (1) the data most supportive of their hypotheses, or (2) the most interesting results, those most likely to lead to a publication.
H13 : The information included in articles about data collection, the empirical process, and how the analysis was performed is so limited that it is very difficult to follow or visualize.
H14 : Following the previous hypotheses, it is (1) difficult to assess whether an article did a good job, and (2) difficult or impossible to repeat the exact same empirical process.
Disclaimer – There are some really good articles out there. Hedging with the convenient qualifiers "few," "quite," and "some" hopefully keeps me in the clear, and makes my hypotheses, if you can really call them that, somewhat difficult to reject, doesn't it?
That should be enough for the first one. Got lots more where this came from …