Gen Nick Carter - a year in post as CGS. Give us a progress update?

Caecilius

LE
Kit Reviewer
Book Reviewer
At the end of the day, if you can't define success in an objective, measurable way, how can you measure progress towards success or achievement of success?

Subjectively. You don't measure it, you just assess it.

I suppose the clearest example is test exercises like CSTTX or BATUS. There are no clear, measurable numerical standards to hit but there are defined performance criteria against which a battlegroup can be assessed. The key is trusting the assessor to make the right judgement.

Interestingly, this ties in to a point repeatedly made by Stonker about the army spending years looking for the DS solution and punishing creativity. Arguably that's part of a drive for objectivity. Once you look for pure, measurable objectivity, you need a right answer. There's not really any room for saying 'their process was different but it came up with a good result'. Fortunately the army has moved away from the DS solution assessment criteria of old towards far more subjective assessments.

An obvious civilian equivalent is marking an academic essay. There are clear performance criteria that can be listed but it's ultimately a judgement call on the part of the marker to work out whether those have been met and to assign a numerical score accordingly. The mechanism for having meaningful results is to train your exam markers appropriately so they generally give similar scores, then moderate their work to ensure quality control. You can then give each student a numerical representation of their performance, but it sure as hell isn't objective.
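To illustrate the moderation step, here's a toy sketch - the scores are invented and the z-score rescaling is a standard statistical technique, not any exam board's actual method. Each marker's scores are rescaled so their mean and spread match the whole cohort's:

```python
# Toy illustration of marker moderation: rescale each marker's scores so
# their mean and spread match the whole cohort's. All scores invented.
import statistics

scores_by_marker = {
    "marker_a": [62, 58, 71, 65, 55],   # tends to mark low
    "marker_b": [74, 80, 68, 77, 83],   # tends to mark high
}

all_scores = [s for scores in scores_by_marker.values() for s in scores]
cohort_mean = statistics.mean(all_scores)
cohort_sd = statistics.pstdev(all_scores)

for marker, scores in scores_by_marker.items():
    m = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    moderated = [cohort_mean + (s - m) / sd * cohort_sd for s in scores]
    print(marker, [round(x, 1) for x in moderated])
```

Note that this only makes the markers consistent with each other; the underlying judgement stays subjective, which is rather the point.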
 
What do you mean I can't measure value? In my job we measure value every day.

If you are referring to the upper echelons of the Army then I don't know.
I was referring to the upper echelons; I have no idea what you do or who you work for.

The consultant shouldn't be there to provide the solution; he should be enabling the executive to find and deliver it. The fact that the deliverable is so often a glossy report says it all really.

I fully concur with your comment about the number of ex-military who end up talking bollocks loudly in consultancy. They are fundamentally the wrong people to be advising the senior cohort.
 
Subjectively. You don't measure it, you just assess it.

I suppose the clearest example is test exercises like CSTTX or BATUS. There are no clear, measurable numerical standards to hit but there are defined performance criteria against which a battlegroup can be assessed. The key is trusting the assessor to make the right judgement ... You can then give each student a numerical representation of their performance, but it sure as hell isn't objective.
In reality that is the case with almost any human endeavour. There are very few things, particularly complex things, that can be scored numerically without a degree of subjective assessment. England's winning score this afternoon will be one (he hopes) but even a scoreline doesn't show the whole picture. England have been winning ugly this year....

The key is to find a way to score subjective assessments in a way that is relevant, transparent, repeatable and has integrity.
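By way of illustration, here's roughly what 'relevant, transparent, repeatable' could look like when written down - the criteria and weights below are entirely invented, not anyone's real assessment framework:

```python
# Hypothetical weighted rubric: the assessor's 1-5 judgements stay
# subjective, but the criteria, weights and arithmetic are declared up
# front, so the scoring is transparent and repeatable.
RUBRIC = {
    "tempo of decision-making": 0.3,
    "quality of orders":        0.3,
    "use of combined arms":     0.2,
    "logistic sustainability":  0.2,
}

def score(judgements: dict[str, int]) -> float:
    """Combine per-criterion judgements (1-5) into a weighted score."""
    assert judgements.keys() == RUBRIC.keys()
    return sum(RUBRIC[c] * judgements[c] for c in RUBRIC)

print(score({"tempo of decision-making": 4, "quality of orders": 3,
             "use of combined arms": 4, "logistic sustainability": 2}))
# -> 3.3
```

The per-criterion judgements remain subjective, but because the criteria, weights and arithmetic are declared up front, two assessors can at least disagree about the same things.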
 
Thread drift: in what type of applications? I had some exposure to setting up diagnostic analytics for gas turbines in a previous role and was just wondering how similar practices could be applied to J2 data.
It's not that hard, really - paradoxically, the greater the volume and variety of data, the more effective the use of statistical techniques to establish relationships and implicit taxonomies. The actual content of the data isn't that important; the ability to break it down into coherent characteristics and individual fields is. Many of these techniques got their start in trading and fraud detection, but they have relevance to all manner of things, from network performance diagnostics, to cyber security, to industrial control systems, to urban traffic analysis - the list is endless.
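To make the 'implicit taxonomy' point concrete, here's a minimal sketch using scikit-learn and toy data - my choice of library and records, not a description of any operational system:

```python
# Minimal sketch: derive an implicit taxonomy from raw text records by
# vectorising them and clustering. The content doesn't matter, only
# that it can be broken into comparable features. Toy data; assumes
# scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

records = [
    "card transaction declined at overseas terminal",
    "card transaction flagged, velocity check failed",
    "router latency spike on northern link",
    "packet loss and latency on backbone router",
    "junction sensor reports stationary traffic",
    "traffic flow stalled at ring road junction",
]

X = TfidfVectorizer().fit_transform(records)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for label, text in sorted(zip(labels, records)):
    print(label, text)  # records should group by topic, no labels supplied
```

Run it and the fraud, network and traffic records fall into separate clusters without any labels being supplied - the technique neither knows nor cares what the text is actually about.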
 

Sarastro

LE
Kit Reviewer
Book Reviewer
Again, civilian experience might help here - I currently work in environments where unstructured data volumes in the multiple petabyte range are subject to analysis. It's difficult and expensive to implement, but by no means impossible - and it works well once set up.
Interestingly, this ties in to a point repeatedly made by Stonker about the army spending years looking for the DS solution and punishing creativity. Arguably that's part of a drive for objectivity. Once you look for pure, measurable objectivity, you need a right answer. There's not really any room for saying 'their process was different but it came up with a good result'.
This makes little sense to me...of course there is room: you specify the objective as the result and not the process. If you can define a result as good, you can define one as bad. Those are metrics, which means you can be objective.

And still, it is better to have a system where the default is objectivity which necessarily involves some subjectivity, than the other way around. Anyway, a DS solution has nothing necessarily to do with objectivity (it's simply a subjective standard), and are you seriously suggesting that what we have now is a system that doesn't punish creativity?

The nature of the exam (warfighting) is objective, so that's what we need to prepare for. If this was a debate about the Church or the arts, that might be different. But it's not.
 

Sarastro

LE
Kit Reviewer
Book Reviewer
It's not that hard, really - paradoxically, the greater the volume and variety of data, the more effective the use of statistical techniques to establish relationships and implicit taxonomies. The actual content of the data isn't that important; the ability to break it down into coherent characteristics and individual fields is. Many of these techniques got their start in trading and fraud detection, but they have relevance to all manner of things, from network performance diagnostics, to cyber security, to industrial control systems, to urban traffic analysis - the list is endless.
Exactly. I've started to come to the opinion that there are basically three graphs you need to describe any information problem. A bell curve distribution, a straight linear extrapolation, and a logarithmic Pareto distribution. The vast majority of information problems I've seen, like the one you describe, look like the Pareto distribution. That quality is its own optimal stopping algorithm - you can tell very quickly where and when it's going to be worth investing effort.

(For everyone else, that basically means that either you can solve 80% of the problem very quickly, or you aren't going to solve any part of it at all without a lot of effort. Our problem is we tend to put in effort either randomly - subjectively - or evenly across all problems, which is pretty much the least effective way to get anything done.)
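A quick numerical illustration of that stopping rule, with entirely invented payoffs:

```python
# Toy illustration of the Pareto point: sort problems by payoff, watch
# the cumulative share of value, and stop investing effort once the
# marginal gain collapses. All numbers are invented.
payoffs = sorted([50, 24, 11, 6, 3, 2, 1, 1, 1, 1], reverse=True)
total = sum(payoffs)

cumulative = 0
for i, p in enumerate(payoffs, start=1):
    cumulative += p
    print(f"after {i} problems: {cumulative / total:.0%} of the value")
    if p / total < 0.05:          # marginal gain under 5%: stop here
        print("stop: remaining effort buys almost nothing")
        break
```

The point being: if the distribution is Pareto-shaped, the data itself tells you when to stop.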
 

Caecilius

LE
Kit Reviewer
Book Reviewer
This makes little sense to me...of course there is room: you specify the objective as the result and not the process. If you can define a result as good, you can define one as bad. Those are metrics, which means you can be objective.

And still, it is better to have a system where the default is objectivity which necessarily involves some subjectivity, than the other way around. Anyway, a DS solution has nothing necessarily to do with objectivity (it's simply a subjective standard), and are you seriously suggesting that what we have now is a system that doesn't punish creativity?

The nature of the exam (warfighting) is objective, so that's what we need to prepare for. If this was a debate about the Church or the arts, that might be different. But it's not.
Except we can't easily specify the result with a lot of what we do. We are never able to conduct live fire exercises against a real enemy and are rarely able to do it with TES on any meaningful scale. We also operate in a system with a degree of complexity that means we routinely exercise and assess only constituent parts and not the whole.

It's just about possible to assess BATUS in an objective manner, although a win/loss assessment is a poor metric given the play of chance within that and the possibility of doing badly but still managing to scrape a victory. However, there's no clear outcome metric for something like CSTTX; given that it will always produce a plan and that plan will never actually be tested (perhaps in BC2T, but we all know how useless that is), the assessment is of how well the headquarters functions. It used to be the case that marks were awarded for slavishly following a process - objective assessment. Now the assessment is a much more subjective view of how well the HQ works together to come up with the plan, even if they deviate from the doctrinal process.

I also disagree very firmly with the concept that the nature of warfighting is always objective. Perhaps in 1939 or 1982, but the view in 2003 that 'victory' was an objective rather than a subjective outcome is significantly responsible for the mess that Iraq is in now. Since then everything the army has done has only been subject to subjective measurement; it is very, very hard to measure COIN objectively. There are a good 10 pages of the learning culture thread where I and a few others tried to get an explanation of how one might measure it objectively, and nothing was forthcoming from those who insisted it could be done.
 
At the end of the day, if you can't define success in an objective, measurable way, how can you measure progress towards success or achievement of success?
And that is never impossible.

But it requires that leaders devote more effort than they are accustomed, to understanding what they are attempting, and requires that they open their efforts to objective appraisal.

In a tribal, hierarchical organisation with a tradition of subjective assessment, where the tics, mannerisms, vocabulary, prejudices and dress codes drilled into otherwise average kids who make up a vanishingly small proportion of the total population, by a tiny number of expensive skules, are still regarded - inexplicably - as having military value, it is easy to understand why there's a reluctance to do so.
 

Caecilius

LE
Kit Reviewer
Book Reviewer
And that is never impossible.

But it requires that leaders devote more effort than they are accustomed, to understanding what they are attempting, and requires that they open their efforts to objective appraisal.

In a tribal, hierarchical organisation with a tradition of subjective assessment, where the tics, mannerisms, vocabulary, prejudices and dress codes drilled into otherwise average kids who make up a vanishingly small proportion of the total population, by a tiny number of expensive skules, are still regarded - inexplicably - as having military value, it is easy to understand why there's a reluctance to do so.
I'd say the reluctance comes from the fact that a supposed expert on it, i.e. you, couldn't explain how to do it when questioned and repeatedly refused to answer a straight question. If the experts can't do it then what hope do others have?

I'll repeat the question that was put to you multiple times on the learning culture thread. Can you give us a set of objectively measurable outcomes that would work for HERRICK? For a bonus point, you could answer how on earth you run a non-hierarchical army given that you also refused to answer that one but have brought it back up again.
 
And that is never impossible.

But it requires that leaders devote more effort than they are accustomed, to understanding what they are attempting, and requires that they open their efforts to objective appraisal.

In a tribal, hierarchical organisation with a tradition of subjective assessment, where the tics, mannerisms, vocabulary, prejudices and dress codes drilled into otherwise average kids who make up a vanishingly small proportion of the total population, by a tiny number of expensive skules, are still regarded - inexplicably - as having military value, it is easy to understand why there's a reluctance to do so.
Jesus wept.

Beyond parody.
 
And that is never impossible.

But it requires that leaders devote more effort than they are accustomed, to understanding what they are attempting, and requires that they open their efforts to objective appraisal.

In a tribal, hierarchical organisation with a tradition of subjective assessment, where the tics, mannerisms, vocabulary, prejudices and dress codes drilled into otherwise average kids who make up a vanishingly small proportion of the total population, by a tiny number of expensive skules, are still regarded - inexplicably - as having military value, it is easy to understand why there's a reluctance to do so.
More seriously, do you extend your critique to other employment areas? Are you a member of any judiciary websites, complaining about the number of judges who are otherwise 'average kids' yet are regarded as having legal value?

How about Whitehall permanent secretaries? Journalists? Lawyers? Doctors? Leaders of public bodies? Bankers? The Police? The BBC? Diplomats? The Arts? Musicians? Footballers?

All led by average kids who inexplicably have been thought to have value.
 
It's not that hard, really - paradoxically, the greater the volume and variety of data, the more effective the use of statistical techniques to establish relationships and implicit taxonomies. The actual content of the data isn't that important; the ability to break it down into coherent characteristics and individual fields is. Many of these techniques got their start in trading and fraud detection, but they have relevance to all manner of things, from network performance diagnostics, to cyber security, to industrial control systems, to urban traffic analysis - the list is endless.

It's interesting stuff. Most of the sites we monitored had a few hundred sensors and we had the ability to compare a unit's performance against the rest of the fleet. As you said, the more data you have the more accurately you can identify abnormalities. These tools (and the correlation tools we used for diagnosis) can apply to any dataset regardless of application or units of measure.
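For the curious, the core of that fleet comparison fits in a few lines - the sensor readings below are invented, and this is a bare-bones sketch rather than anything like our actual toolchain:

```python
# Toy fleet comparison: flag a unit whose sensor reading sits well
# outside the fleet's distribution. All readings invented.
import statistics

fleet_egt = {                       # exhaust gas temperature, deg C
    "unit_01": 612, "unit_02": 618, "unit_03": 607,
    "unit_04": 615, "unit_05": 655, "unit_06": 610,
}

mean = statistics.mean(fleet_egt.values())
sd = statistics.pstdev(fleet_egt.values())

for unit, egt in fleet_egt.items():
    z = (egt - mean) / sd
    if abs(z) > 2:                  # more than 2 sigma from the fleet
        print(f"{unit}: {egt} C (z = {z:+.1f}) - investigate")
```

Anything more than a couple of standard deviations from the fleet gets looked at, and the same logic works whatever the sensor happens to measure.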

I was just curious about the J2 uses (other than things like communications data), mainly because I'm solidly ignorant of what J2 work actually looks like.
 
This makes little sense to me...of course there is room: you specify the objective as the result and not the process. If you can define a result as good, you can define one as bad. Those are metrics, which means you can be objective.

And still, it is better to have a system where the default is objectivity which necessarily involves some subjectivity, than the other way around. Anyway, a DS solution has nothing necessarily to do with objectivity (it's simply a subjective standard), and are you seriously suggesting that what we have now is a system that doesn't punish creativity?

The nature of the exam (warfighting) is objective, so that's what we need to prepare for. If this was a debate about the Church or the arts, that might be different. But it's not.
Except that process matters too.
 
I'll repeat the question that was put to you multiple times on the learning culture thread. Can you give us a set of objectively measurable outcomes that would work for HERRICK?
I think we could have done. The London Agreement and its predecessors set a pretty clear objective view of what success would look like in Afghanistan. From that it ought to have been possible to extract a clear definition of what military success would look like and then to plot a road map for getting there, one with measurable objectives and resource budgets. Of course such a plan needs constant review because no plan survives contact.

What we actually did was do what we could, reacting to circumstance rather than creating circumstance. A seemingly endless parade of commanders claiming to be making progress without any real idea of the context of that progress.

The core problem is one of accountability. If you change the key leaders in a programme frequently and on a schedule totally unrelated to key programme milestones, you create the conditions for failure.

Turn it around the other way. How do we know whether Herrick was a success or a failure if we can't measure it?
 
The core problem is one of accountability. If you change the key leaders in a programme frequently and on a schedule totally unrelated to key programme milestones, you create the conditions for failure.
Spoken in the best PM drone.

So how long is long enough for a key leader to be involved in a change programme?
 
Turn it around the other way. How do we know whether Herrick was a success or a failure if we can't measure it?
I ask people to think in terms of a journey.

You might be traveling at 100 mph, and getting exemplary fuel economy, but in the absence of a destination, you're never going to be able to set way points, let alone assess whether or not you're making progress.

Reporting of activity is no substitute for reporting attainment, which is - in turn - only possible if clear measurable goals are set.

If the leader is unable to set clear measurable goals, then one has to conclude that the leader simply doesn't know what he/she is doing.
 
I ask people to think in terms of a journey.

You might be traveling at 100 mph, and getting exemplary fuel economy, but in the absence of a destination, you're never going to be able to set way points, let alone assess whether or not you're making progress.

Reporting of activity is no substitute for reporting attainment, which is - in turn - only possible if clear measurable goals are set.

If the leader is unable to set clear measurable goals, then one has to conclude that the leader simply doesn't know what he is doing.


Do you extend your critique to other employment areas? Are you a member of any judiciary websites, complaining about the number of judges who are otherwise 'average kids' yet are regarded as having legal value?

How about Whitehall permanent secretaries? Journalists? Lawyers? Doctors? Leaders of public bodies? Bankers? The Police? The BBC? Diplomats? The Arts? Musicians? Footballers?

All led by average kids who inexplicably have been thought to have value.
 

Caecilius

LE
Kit Reviewer
Book Reviewer
What clear measurable goals would you have set had you been in a position to do so?
You won't get an answer to this. @Stonker has been asked a hundred times but has never come up with a response other than to suggest that we buy a book about it. He simply has no answer. A cynic would say that's because it can't be done.

(You may even get a 'funny', which is his way of trying to salvage something from the fact that he can't answer a point you've made.)
 
You won't get an answer to this. @Stonker has been asked a hundred times but has never come up with a response other than to suggest that we buy a book about it. He simply has no answer. A cynic would say that's because it can't be done.

(You may even get a 'funny', which is his way of trying to salvage something from the fact that he can't answer a point you've made.)
I find it interesting that he won't answer my other question either. It's hard to take a critic seriously when they don't even provide the simplest explanation about how they would do things differently. Especially when a main bone of contention seems to be where children go to school.

One has to assume they are just here to chimf.
 
