- Christopher A. Lawrence
The databases have been built up from strength and casualty returns, plus historical analysis of the actions to determine what sort of action each division was in (categorised into seven types). So far so good, but of course any model or analysis is only as good as the data behind it, and as the author concedes, the data available from the Wehrmacht deteriorates in completeness and quality as WW2 progresses, while that from the Russian side is incomplete (or incompletely available). Fair enough, one has to go with what one can find, but that pushes the potential for error up. The author never acknowledges this, nor the possibility that selecting actions on the availability of data may compromise their universal relevance. Worse, he quotes percentages to two decimal places, an implied precision of one part in ten thousand, which is simply ludicrous and demonstrates either a complete ignorance of statistics or a desire to obfuscate with data.
He also chooses to express casualty rates in casualties per division per day; this obfuscates further. A 10,000-strong division losing at (say) 2% per day is 200 men per day, which is survivable for some time if the losses are evenly spread. But of course casualties are not borne evenly across a division; the bulk of them happen at the sharp end. 200 casualties per day among the fighting troops actually in contact, say 2,000 men at any one time, is a 10% daily rate, which is rather more significant. Now, it may well be that the data is not available to track casualties to battalion or even brigade level, nor to establish the arm of service of the casualties, but the net result is that the measures are obscured. It also means that it is impossible to determine why casualty rates vary, which in turn limits the utility of the entire book to any would-be soldier, commentator or commander.
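The arithmetic behind that point can be sketched in a few lines (the 2,000-man "sharp end" figure is the illustrative assumption used above, not data from the book):

```python
# Division-wide rate looks survivable on paper...
division_strength = 10_000
daily_loss_rate = 0.02                                   # 2% per day across the whole division
daily_casualties = division_strength * daily_loss_rate   # 200 men per day

# ...but those 200 casualties fall almost entirely on the troops in contact.
fighting_strength = 2_000                                # illustrative: men at the sharp end
sharp_end_rate = daily_casualties / fighting_strength    # 10% per day for the fighting troops

print(f"{daily_casualties:.0f} casualties/day = {sharp_end_rate:.0%}/day at the sharp end")
```

The same headline number, expressed against the right denominator, tells a very different story.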
This is serious. In the Vietnam War the RAND Corporation came up with the hypothesis that killing North Vietnamese forces at a rate faster than they could be replaced would inevitably lead to victory. This led to the command obsession with body count. Although US forces did achieve (by their own measurement) the target rate, they lost. Similarly, General S. L. A. Marshall's study of battle participation after D-Day claimed that only 10% or so of soldiers fired their weapons in combat. That finding has since been discredited, but not before it caused significant problems in weapon and training design. Sadly, this book is making similar errors, diligently producing numbers (to four or five significant figures) on the back of poor underlying data, separated from context.
The book even demonstrates this itself: after several pages of interminable tables discussing the difference in casualty rates in Vietnam between US Army and USMC actions (USMC casualties are consistently higher), the author reveals that the cause is actually the simple fact that the US Army and the USMC defined casualties differently. In any analysis of data, the first thing anyone should check is that they are comparing apples with apples, not oranges.
The book contains reams of data expressed in tables, much of which is irrelevant and little of which is clear; why the author can't be bothered to create graphs to make his points escapes me, and that his editor or publisher didn't insist on it is a disgrace.
There are errors. The author claims that no stochastic simulation has ever been verified; I know this to be incorrect, as I worked on one simulation that absolutely was verified against field trials. He also alleges that there are few weapons of .22-inch calibre on any battlefield; Google confirms my recollection that 5.56 mm converts to 0.22 inches.
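That last conversion takes one line to check (25.4 mm to the inch):

```python
MM_PER_INCH = 25.4
calibre_inches = 5.56 / MM_PER_INCH       # approximately 0.219 inches
print(round(calibre_inches, 2))           # rounds to 0.22, i.e. ".22 calibre"
```

So the 5.56 mm round carried by essentially every NATO rifleman is, to two decimal places, a .22-calibre weapon.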
There are some interesting points in the book, worthy of more research and discussion, but they are buried under weak data, flawed analysis and what seems to be a marketing drive for the Dupuy Institute. There are far, far better ways to spend £30, some of which will teach you more about warfare.