Drone & counter-drone technology.

I don't believe that is a key constraint. The USAF demonstrated some time ago the ability to fly co-operative UCAS sorties which could adapt in flight whilst on task. In theory they could operate completely autonomously.

Not long after, Project Alpha demonstrated an AI engine that defeated an "ace" pilot time and time again.

These events were now some time ago. The need for continuous TCS control may well be long gone, and it may only be our 'nervousness' and desire for a 'kill switch' that keeps us wanting the reins. As M-M says, there are alternate means to communicate, and most people understand and accept the potential vulnerability of SATCOM access, even the protected military capabilities, which will be robust and available right up until a kinetic event removes them.

In sum I think it's more a cultural resistance, as opposed to a technological limitation, which will inhibit complete autonomous operation. Where I look at AI in my own workstreams, the 'man-in-the-loop' argument is prevalent until we point out that the threats demand a machine-speed reaction; even then, wanting a pink floppy thing to take action is argued for. In time it will be accepted, and I suspect the capabilities will become smarter at an increasingly rapid pace.
There's a big difference between "in theory" and "willing to bet your entire air force on it". We're not talking about something that can supplement manned fighters in certain applications. We are, in this context, talking about something that will completely replace manned fighters throughout an entire air force (and navy) in all applications.

At this point nobody has demonstrated an actual flying plane that can replace everything a manned fighter can do under all circumstances and can do so with absolute certainty. Something that works some of the time under some circumstances isn't going to pass muster in this application.
 

A2_Matelot

LE
Book Reviewer
There's a big difference between "in theory" and "willing to bet your entire air force on it". We're not talking about something that can supplement manned fighters in certain applications. We are, in this context, talking about something that will completely replace manned fighters throughout an entire air force (and navy) in all applications.

At this point nobody has demonstrated an actual flying plane that can replace everything a manned fighter can do under all circumstances and can do so with absolute certainty. Something that works some of the time under some circumstances isn't going to pass muster in this application.
The trials I referred to were a number of years ago. Look at the capabilities of the USN's new acquisition. As I said, technology change is accelerating, and if you look at commercial air you could say the aircraft are already around 95% autonomous, and AI is developing better learning and adaptive algorithms all the time.

I think there can be little doubt that we can solve the problem of autonomous flight (we pretty much have commercially). It’s a question of assurance: certification procedures, regulatory requirements and, more significantly, public perception.

It would be interesting to compare human pilots against AI/autonomous systems for reaction and reliability.

As I travel to the US I often wonder what's in the hidden hangars being developed, or already flying.
 
For a whole number of reasons which have been discussed numerous times here, I’d say we’re at least 40 years from RPAS being able to approach the capabilities of manned aircraft in WVR air-air combat.
The rate of growth of AI, missile technology...and the fact that RPAS/UAVs are in fact 'manned' systems, would certainly seem to point to more capable drone air-to-air abilities...I would respectfully venture, well within 40 years...and 4th Gen, more vulnerable, opponents' aircraft are certainly going to be around for a while longer
 
There is no doubt that autonomous technology and AI could replace simple civilian airliner and cargo operations right now if it wasn’t for commercial and customer concerns. However, we’re not talking about such simple activity; we’re considering AI’s ability to replace infinitely more complex, demanding and contested military activity.

I don't believe that is a key constraint. The USAF demonstrated some time ago the ability to fly co-operative UCAS sorties which could adapt in flight whilst on task. In theory they could operate completely autonomously...
I’m afraid that SATCOM and data exchange is a major constraint of the current and indeed the emerging generation of RPAS; alternatives have only limited application and introduce significantly more critical nodes. Bandwidth, footprint, and vulnerability to EA, cyber and kinetic attack are all factors absent from or optional in manned types, and all require consideration.

...Not long after, Project Alpha demonstrated an AI engine that defeated an "ace" pilot time and time again...
That trial was an AI engine in an entirely synthetic environment considering merely very simplistic manoeuvre dynamics; not much different in essence to the processes used in computer games. It did not take into account obscure but real-world factors such as weather, temperature, dew-point, glare, shadow, the quality of engine servicing and birds.

However, computers and AI have an incredibly long way to go before they can even start to approach human intuitive capacity. There are any number of subtle cues which, while obvious to a human, would require trillions of lines of software code, the complex integration of several sensors, bandwidth and all the implications therein (eg bench testing and OT&E) to prove effective.

To a pilot, a brief puff of exhaust may indicate his opponent has selected afterburner in a poorly maintained engine and is about to take the fight into the vertical. We are decades away from AI being able to reliably differentiate that from any number of factors such as changing backgrounds, shadow, reflections, glare, haze, industrial pollution, condensation from the airframe, chaff, flares, fuel being dumped, weapons being launched or even a bird strike. If you wish to go air-air, missiles offer parity with - and will likely surpass - a larger UCAV’s g-loading, and that's even before we get into emerging Directed Energy weapons. Moreover, automation incurs just as many vulnerabilities as carbon-based life forms.

The most optimistic predictions suggest it’ll be at least 40 years before AI and autonomous capabilities can outperform manned aircraft in all tasks. That’s why emerging US 6th Gen concepts - all designed to remain competitive until at least 2060 - involve manned or optionally manned assets.

Where I do think AI has potential is in the ISR Processing, Exploitation and Dissemination (PED) arena to identify Pattern of Life and other factors from complex background ‘clutter.’

...In sum I think it's more a cultural resistance, as opposed to a technological limitation, which will inhibit complete autonomous operation...
There is a degree of culture at work, particularly in commercial aspects. However, there are genuine limitations which we are many years from being able to resolve.

The trials I referred to were a number of years ago. Look at the capabilities of the USN's new acquisition. As I said, technology change is accelerating, and if you look at commercial air you could say the aircraft are already around 95% autonomous, and AI is developing better learning and adaptive algorithms all the time...
The MQ-25 is designed for fairly simplistic ops such as AAR and I suspect in due course ISR and strike ops; essentially a ‘reusable TLAM’ in the latter example.

The rate of growth of AI, missile technology...and the fact that RPAS/UAVs are in fact 'manned' systems, would certainly seem to point to more capable drone air-to-air abilities...I would respectfully venture, well within 40 years...and 4th Gen, more vulnerable, opponents' aircraft are certainly going to be around for a while longer
Time will tell but I confidently stand by my assertion that we’re looking at 40 plus years for AI to offer a genuine autonomous alternative to manned assets.

It’s worth remembering that even the very latest RPAS and autonomous systems are assessed to have a lower intelligence level than the average insect. I’ll take a Tempest over a housefly any day...although the latter’s RCS is undeniably impressive!

Again, that’s why every single 6th Gen concept envisages a man in the cockpit for many missions.

Regards,
MM
 
The problem here may be that your arguments have been based on autonomous drones piloted by AI, and I would tend to agree that there is a very definite aversion to handing over ‘kill’ authority...and your own reservations about AI ability v humint.

The present discussion, however, has been about air-to-air kill ability in drones that are ‘manned’.
 
(...) Where I do think AI has potential is in the ISR Processing, Exploitation and Dissemination (PED) arena to identify Pattern of Life and other factors from complex background ‘clutter.’ (...)
There have been decades of research into AI techniques, and we are beginning to see useful practical applications today.

However, we are also starting to see research into how AI systems can be fooled or confused with the digital equivalent of optical illusions. To take image classification as an example (the simplest to grasp), if you inject the right sort of noise into a signal representing an image, you can create a resulting image where the changes are imperceptible to the human eye but which cause the AI system to think it is seeing something entirely different.

These are called "adversarial examples". They are called "adversarial" because the experimenter is deliberately doing things which confuse the AI instead of assisting the AI. In future military applications involving a sophisticated opponent, adversarial attacks can be considered a given. There is a maxim in cyber security that attacks only get stronger, not weaker. Some AI researchers are of the opinion that adversarial attacks will always hold the advantage over AI systems because they need only find one chink in the AI's armour while the AI must defend against all possible attacks.
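The attack described above can be sketched with a toy linear "classifier" in numpy. This is an illustration of the principle only: the class names, dimensions and epsilon are invented, and a real attack targets a deep network via its gradients rather than a hand-built linear score. The point it demonstrates is real, though: thousands of individually imperceptible per-pixel nudges, all pushed in the direction that hurts the score, add up to a flipped label.

```python
import numpy as np

d = 10_000                          # "pixels" in a flattened image
rng = np.random.default_rng(42)
w = rng.normal(size=d)              # weights of a toy linear classifier

def classify(x):
    """Class 'tank' if the linear score w.x is positive, else 'truck'."""
    return "tank" if float(x @ w) > 0 else "truck"

# Build an input the model classifies as 'tank' with a modest margin.
x = rng.uniform(0.0, 1.0, size=d)
x = x - (x @ w) / (w @ w) * w       # project out w: score is now ~0
x = x + 2.0 * w / (w @ w)           # add back a margin of exactly +2.0
assert classify(x) == "tank"

# FGSM-style attack: nudge every pixel by epsilon against the gradient.
# For a linear score w.x the gradient with respect to x is just w.
epsilon = 0.001                     # far below a perceptible change
x_adv = x - epsilon * np.sign(w)

# Each pixel moves by only 0.001, but the score drops by
# epsilon * sum(|w|), roughly 8 here, swamping the +2.0 margin.
print(classify(x), "->", classify(x_adv))   # tank -> truck
```

The defender's problem is visible even in this toy: the per-pixel change is bounded by epsilon, so no simple magnitude check catches it; the damage comes from the coordination across dimensions.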



As for who holds the technology lead in AI in general, many people would say China is in first place, not the US.
Note that the following quote from Bloomberg refers to civilian applications, but that is where the basic underlying technology is being developed these days, not military research labs.
There's a general consensus that the Chinese are probably ahead of their U.S. counterparts in terms of data because not only is it a larger country, but the opportunities to capture information are colossal. A January piece by the Wall Street Journal laid out the argument for China gaining momentum with a quote from Microsoft Corp. President Brad Smith noting that in some parts of the world, privacy laws have the potential to constrict AI development or use.
Russia is also ranked very highly in terms of AI technology by many sources.

It is not much of a stretch to say that a strong position in research in AI technology will form a good basis for developing technology to defeat AI systems.

Here's an example.
[1607.02533] Adversarial examples in the physical world
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.

Note that this is very different from traditional "cyber warfare" technology. There is no "hacking" into the system or bypassing of the defences. Instead it relies upon the fact that AI systems are fundamentally so "stupid" that if you can present an image or signal that has been deliberately but subtly altered in a way which confuses the AI, you can cause it to perform actions unpredictable to its owner.
 
The problem here may be that your arguments have been based on autonomous drones piloted by AI, and I would tend to agree that there is a very definite aversion to handing over ‘kill’ authority...and your own reservations about AI ability v humint.

The present discussion, however, has been about air-to-air kill ability in drones that are ‘manned’.
No; I assume a similar timeline for RPAS as well. Indeed, by definition these must rely on BLoS C2, which incurs another significant vulnerability. Overall, from my experience with them, I see RPAS as a bit of a technological cul-de-sac for A-A, with AI having far greater potential in the latter half of this century.

Regards,
MM
 

seaweed

LE
Book Reviewer
People have been picking away at this since 1959 ..


Gyrodyne QH-50 DASH - Wikipedia

exx from 1968:

A US frigate had Drone Antisubmarine Helicopters (DASH) on board. She only had two but was itching for a chance to show them off and feeling somewhat upstaged by HMS Euryalus' pilot buzzing about in his Wasp. At last the chance came. The DASH took off and promptly disintegrated into a zillion pieces. Euryalus: "One down, one to go". USS Hiram K Hickenlooper or whatever: "That's not funny."
 
People have been picking away at this since 1959 ..


Gyrodyne QH-50 DASH - Wikipedia

exx from 1968:

A US frigate had Drone Antisubmarine Helicopters (DASH) on board. She only had two but was itching for a chance to show them off and feeling somewhat upstaged by HMS Euryalus' pilot buzzing about in his Wasp. At last the chance came. The DASH took off and promptly disintegrated into a zillion pieces. Euryalus: "One down, one to go". USS Hiram K Hickenlooper or whatever: "That's not funny."
I also recall the early days of RPAS ops over Bosnia, with the USAF/CIA-operated Gnat 750 flying from one of the Croatian islands (Vis IIRC) pre-SATCOM connectivity, when an RG-8 had to be used to relay the C-band data link.

For the second time in the same week, one of them went AWOL and they again called us up asking basically ‘have you seen our UAV?’ It had disappeared off our screen as well (subsequent investigations revealed both had spread themselves liberally over hillsides) and a female USN EP-3E voice came up and said ‘jeesh...you guys haven’t lost another one have you?! You guys really should take more care!’

Regards,
MM
 
Was doing some single-seat gyro instruction at West Wales airport last year. The tower very kindly allowed us to use half of the runway alongside the Thales/Watchkeeper programme, since we only required a limited amount of runway and could vacate at a moment's notice.

Over the four days we were there, there were a number of times that we were requested to vacate as various Watchkeeper drones took to the runway, then stood there with engines running for varying amounts of time before vacating, with the various personnel then retrieved from the stations they had been dropped at around the airfield.

It was a somewhat less than impressive display from what we had hoped would be the departure and arrival of the latest and ‘greatest’.
 
Was doing some single-seat gyro instruction at West Wales airport last year. The tower very kindly allowed us to use half of the runway alongside the Thales/Watchkeeper programme, since we only required a limited amount of runway and could vacate at a moment's notice.

Over the four days we were there, there were a number of times that we were requested to vacate as various Watchkeeper drones took to the runway, then stood there with engines running for varying amounts of time before vacating, with the various personnel then retrieved from the stations they had been dropped at around the airfield.

It was a somewhat less than impressive display from what we had hoped would be the departure and arrival of the latest and ‘greatest’.
The Watchkeeper has been an utter disaster and frankly should've been cancelled about a decade ago. The Army could have achieved far more, at a fraction of the price, with something like Scan Eagle.

I'd like to see the AAC wrest control of the Army's UAS from the RA, but the former don't appear to be interested and the latter are convinced they're the right ones to use it.

Regards,
MM
 

seaweed

LE
Book Reviewer
All who are confidently predicting decades before AI maturity will allow autonomous vehicles (air or otherwise) should note the words of the Def Sec, who said that what has surprised us is not the new technology of our adversaries but the pace at which that technology has come...or some words like this :)

Many companies are now putting AI into their commercial offerings as standard. Many do so purely for gimmick value, but even that allows lessons to be learned, software to be finessed and data to be built up. Every aspect of a human does not need to be programmed, only the smaller subset of aspects pertaining to the operating envelope.

These debates normally descend into an assumption that AI cannot be allowed to lead to autonomous operation in case something goes wrong. Clearly humans are infallible then, are they?

The big change will come when an adversary puts AI into some weapon resulting in reaction times that mean that humans-in-the-loop simply will not be acceptable.
 
...Clearly humans are infallible then, are they?...
No.

However, they are demonstrably far more capable of complex reasoning than AI is and will be for a considerable time. Humans also do not require expensive, complex and vulnerable bandwidth, C2 and infra.

...The big change will come when an adversary puts AI into some weapon resulting in reaction times that mean that humans-in-the-loop simply will not be acceptable.
They already have in missiles and other systems.

As I’ve said several times, time will tell. However, those of us in the business, and every major aerospace contractor I’m aware of envisage many more decades before AI can even approach that of humans.

Regards,
MM
 
No.

However, they are demonstrably far more capable of complex reasoning than AI is and will be for a considerable time. Humans also do not require expensive, complex and vulnerable bandwidth, C2 and infra.

MM
Complex reasoning in/under what operational envelope? In calculating closing velocity, interception geometry, impact point...?

Please don't confuse General AI with applied AI. The latter can reduce the need for complex and vulnerable bandwidth, C2 and infra now.
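For what it's worth, the "easy" kinematics named here really are easy to automate. A minimal sketch of closing velocity and constant-speed intercept geometry follows; this is a toy 2-D model with invented numbers, not any fielded fire-control law (it assumes a constant-velocity target and a constant-speed interceptor starting at the origin).

```python
import math
import numpy as np

def closing_speed(rel_pos, rel_vel):
    """Rate at which range decreases (positive means closing), in m/s."""
    return float(-(rel_pos @ rel_vel) / np.linalg.norm(rel_pos))

def intercept_point(target_pos, target_vel, interceptor_speed):
    """Earliest intercept of a constant-velocity target by a constant-speed
    interceptor starting at the origin, or None if no intercept exists.
    Solves |p + v*t| = s*t, a quadratic in t."""
    a = float(target_vel @ target_vel) - interceptor_speed ** 2
    b = 2.0 * float(target_pos @ target_vel)
    c = float(target_pos @ target_pos)
    if abs(a) < 1e-12:                       # equal speeds: linear case
        if abs(b) < 1e-12:
            return None
        t = -c / b
        return target_pos + t * target_vel if t > 0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                          # target cannot be caught
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    times = [t for t in roots if t > 0]
    if not times:
        return None
    return target_pos + min(times) * target_vel

# Head-on example: target 10 km out, inbound at 300 m/s; we fly at 600 m/s.
p = np.array([10_000.0, 0.0])
v = np.array([-300.0, 0.0])
print(closing_speed(p, v))          # 300.0 m/s
print(intercept_point(p, v, 600.0)) # intercept at x ~= 6666.7 m
```

The substance of the debate is exactly that this sort of closed-form geometry is trivial for a machine, while the cue-reading and intuition described earlier in the thread is not.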
 
Complex reasoning in/under what operational envelope? In calculating closing velocity, interception geometry, impact point...?

Please don't confuse General AI with applied AI. The latter can reduce the need for complex and vulnerable bandwidth, C2 and infra now.
Closing velocity etc. is the easy bit. It’s intuition, experience and a million and one other factors which are the hard part.

It’ll happen eventually. However, we in the military are constantly bombarded by ‘you’re almost redundant’ claims and suggestions of AI panaceas from boffins. In reality, most are unviable, some utterly barking.

Once again, that’s why aerospace and militaries all envisage manned fighters for at least the next 50 years.

Regards,
MM
 
Again, let's not confuse the issue. AI against specific operational use cases is very different to AI against intuition and experience. We can throw in other human-like qualities such as hope, aspiration and fear to dilute the debate...which should not be the target of today's applied AI.

Leave that to General AI !

However, we in the military are constantly bombarded by ‘you’re almost redundant’ claims and suggestions of AI panaceas from boffins. In reality, most are unviable, some utterly barking
Best not mention the DSTL AI Lab then, and the fanfare that surrounded it....

As part of the MOD’s commitment to pursue and deliver future capabilities, the Defence Secretary announced the launch of AI Lab – a single flagship for Artificial Intelligence, machine learning and data science in defence based at Dstl in Porton Down. AI Lab will enhance and accelerate the UK’s world-class capability in the application of AI-related technologies to defence and security challenges.
 
Again, let's not confuse the issue. AI against specific operational use cases is very different to AI against intuition and experience. We can throw in other human-like qualities such as hope, aspiration and fear to dilute the debate...which should not be the target of today's applied AI.

Leave that to General AI !



Best not mention the DSTL AI Lab then, and the fanfare that surrounded it....

As part of the MOD’s commitment to pursue and deliver future capabilities, the Defence Secretary announced the launch of AI Lab – a single flagship for Artificial Intelligence, machine learning and data science in defence based at Dstl in Porton Down. AI Lab will enhance and accelerate the UK’s world-class capability in the application of AI-related technologies to defence and security challenges.
As I said, AI is not without its applications, particularly in terms of PED. However, right now, it ain’t going to be replacing humans in many combat air disciplines.

In terms of DSTL, one of my most amusing boffin v military type experiences was with them!

Regards,
MM
 
In terms of DSTL, one of my most amusing boffin v military type experiences was with them!
"I have a very amusing story. Which I'm not going to tell you."

Speaking as a former boffin (in another place), I am agog. Do tell.
 
