A change of strategy is needed to develop machine autonomy, according to DST Group analysts Jason Scholz and Darryn Reid.
We have circuits. We have sensors. We have actuators and motors. We have batteries. We have algorithms and the means to implement them.
Yet despite the technology and techniques, deployable autonomy always seems to lie just a few tantalising moments into the future.
It seems perpetually just out of reach, soon to arrive with the next development in circuits, in sensors, in batteries and in algorithms.
Despite all the developments since Turing first proposed his famous criterion for artificial intelligence, all aimed at building machines that can do our work for us, we have yet to acquire a single operationally usable autonomous system worthy of the name.
What we have instead is automation. Defence applications of automated systems under managed conditions have reached the point of diminishing returns; their inability to deal with uncertainty is a fundamental barrier to large-scale deployable systems of the future.
We cannot expect a different result by doing more of the same, but there is hope if we revisit our choice of research problem. Through a program of strategic research we intend to transform the prospects for deployable autonomous systems in Defence over the next decade and beyond.
Humans rightly fear the potential unboundedness of autonomous systems, and this problem will remain profoundly unsolvable if we continue down the path of making automated systems ever more complex in an attempt to achieve autonomy. More of the same will yield systems that are less trustworthy, less verifiable, and more dependent on complex human interaction (often demanded at unwelcome moments) to manage the risk of ever-more-spectacular failures.
A case in point: in October 2014, Hyundai held the Future Automobile Technology Competition in South Korea. Four finalists from twelve teams navigated a test circuit that required obstacle avoidance, stopping for pedestrians, obeying traffic laws, and so on. On the first day everything worked as expected. On the second day it rained, and seven different kinds of catastrophic failure (such as driving over kerbs and running stop lights) were reported in a single 8-minute run. All of these were due to what we might term perceptual failure, and appear to have resulted from reflections in the water on the road. To recover from these situations, a human operator had to get into the car, back it out onto the road and set it back on its path again.
These problems are not unique to self-driving automobiles; the myriad of robots that fly, crawl, walk, roll, run and swim may appear autonomous, but remain fragile in the sense of being susceptible to unacceptable failure outside of carefully controlled conditions.
Our aim is autonomy that avoids unacceptable failures while remaining open to real-world uncertainty. Yet we know of no other organised body of research that addresses the true nature of this problem. Instead, we find a persistent belief that more of the same will somehow, one day, deliver acceptable systems.
In light of the history of non-delivery of this promise – and, indeed, in light of the history of science itself – it is simply not credible to expect that the autonomy we dream about will come from the extension of the automation we possess.
Future machines will need to be autonomous in the sense of being able to deal with fundamental uncertainty, in which the sample space of possible outcomes cannot be fully known in advance, if at all. They will operate where data is poor or non-existent, in the tails of whatever distributions can be estimated. They will need to operate in unmanaged or weakly managed situations where we may not even know which parameters need to be sensed. The kinds of danger to which they can be exposed are globalised and non-immediate. The language of autonomy is more commonly found in the natural sciences and includes terms like resilience, degeneracy, innovativeness, resourcefulness and plasticity.
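The distinction between automation and the autonomy described here can be sketched in a few lines of code. The following is a minimal, illustrative sketch only (all names and behaviours are hypothetical, not drawn from any real system): an automated policy assumes a closed, design-time sample space and silently forces novel observations into it, while a policy with a first step towards plasticity recognises when an observation falls outside the known sample space and degrades gracefully.

```python
# Hypothetical sketch: a closed-world automated policy versus one that
# acknowledges observations outside its design-time sample space.

# The outcomes enumerated at design time -- a closed world.
KNOWN_OUTCOMES = {"clear", "pedestrian", "obstacle"}
ACTIONS = {"clear": "proceed", "pedestrian": "stop", "obstacle": "avoid"}

def brittle_policy(observation: str) -> str:
    """Automation: every input is forced onto the design-time sample space.
    A novel observation falls through to a silent default and misbehaves."""
    return ACTIONS.get(observation, "proceed")

def plastic_policy(observation: str) -> str:
    """A gesture towards plasticity: detect that the observation lies
    outside the known sample space and fall back to a safe behaviour."""
    if observation not in KNOWN_OUTCOMES:
        return "slow-and-reassess"  # hedged behaviour under open-world uncertainty
    return ACTIONS[observation]

# A reflection on a wet road: an outcome never enumerated at design time.
novel = "reflection-on-wet-road"
print(brittle_policy(novel))  # proceeds as if the road were clear
print(plastic_policy(novel))  # degrades gracefully instead
```

Of course, real fundamental uncertainty cannot be reduced to a string-membership test; the sketch only shows why enumerating outcomes in advance, however elaborately, remains automation rather than autonomy.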
The term “plasticity” may conjure notions of passive adaptation and compliance with the environment. However, we also intend it to connote playfulness, resourcefulness and opportunism. These qualities align with characteristics of Australian culture, such as persistence and obedience despite hardship, mateship and resourcefulness; characteristics also witnessed in Australian military traditions.
The locus of the plasticity imperative is in decision and control. For the purpose of our research program, we define a trusted autonomous system to include the machine, the human and their integration. Integration exists to complement the weaknesses of some parts of the system with the strengths of others.
We have named this strategic research initiative on trusted autonomous systems Program Tyche, after the Greek goddess of fortune, to capture the essence of uncertainty; at its heart is research into implementing systems that satisfy the “plasticity” property.
Recognising the inadequacy of current autonomous systems development programs provided the basis for developing a new, organised research program. This may place Defence on a path to research, develop and acquire operationally effective autonomous systems.
We have outlined* the basis for our trusted autonomous systems research around four themes: understanding the foundations of autonomy; realising these in cognitive machines; ensuring these machines operate as trustworthy partners; and their embodiment within novel platforms, sensors and effectors to achieve new capabilities.
Our endeavour through each of these themes is to focus specifically on autonomous capabilities for managing uncertainty. Having set a broad agenda, we expect the successful management of the research program to hinge on the same principles of plasticity espoused here as the property sought within the program. This will also give researchers and developers first-hand experience, as well as new shared insights into achieving plasticity in capability development for the acquisition and operation of these systems in the future.