Close‐range autonomous air combat has gained significant attention from researchers involved in applications related to artificial intelligence (AI). A majority of previous studies on autonomous air combat focused on one‐on‐one scenarios; however, modern air combat is mostly conducted in formations. With regard to these factors, a novel hierarchical maneuvering control architecture is introduced for the multi‐aircraft close‐range air combat scenario, one that can handle air combat scenarios with variable‐size formations. A novel hierarchical multi‐agent reinforcement learning (HMARL) algorithm is proposed to obtain the high‐level strategy for target and sub‐strategy selection. Subsequently, three air combat sub‐tasks are designed, and a recurrent soft actor‐critic (RSAC) algorithm combined with competitive self‐play (SP) is used to learn the sub‐strategies. The training performance of the algorithms for the sub‐strategies and the high‐level strategy is evaluated in different air combat scenarios. The obtained strategies are analyzed, and the formations are found to exhibit effective cooperative behavior in both symmetric and asymmetric scenarios. Finally, ideas for an engineering implementation of the maneuvering control architecture are given. The study provides a solution for future multi‐aircraft autonomous air combat.

This paper reports on an agent‐oriented approach for modeling an adaptive doctrine‐equipped computer generated force (CGF) using a commercial‐grade simulation platform known as CAE STRIVE®CGF. A self‐organizing neural network is used for the adaptive CGF to learn and generalize knowledge in an online manner during the simulation. The challenge of defining the state space and action space, and the lack of domain knowledge to initialize the adaptive CGF, are addressed using the doctrine that drives the non‐adaptive CGF. The doctrine contains a set of specialized knowledge for conducting 1-v-1 dogfights. The hierarchical structure and symbolic representation of the propositional rules are incompatible with the self‐organizing neural network; therefore, the doctrine has to be flattened and then translated into vector patterns before it can be inserted into the network. The state space and action space are automatically extracted from the flattened doctrine as well. Experiments are conducted using several initial conditions in a round‐robin fashion. The experimental results show that the self‐organizing neural network is able to make good use of the domain knowledge, despite its complex knowledge structure, to discover knowledge that out‐maneuvers the doctrine‐driven CGF consistently and efficiently.
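The two‐level control described in the first abstract — a high‐level strategy that selects a target and a sub‐strategy, and low‐level sub‐strategies that issue maneuver commands — can be sketched as a single decision step. All names, thresholds, and selection heuristics below are illustrative assumptions; in the paper both levels are learned (HMARL for the high level, RSAC with self‐play for the sub‐strategies) rather than hand‐coded.

```python
def high_level_policy(targets):
    """Toy stand-in for the learned high-level strategy: pick a target
    and a sub-strategy name from the current set of enemy targets.
    (Assumed heuristic: nearest target; 'attack' inside 5 km, else 'pursue'.)"""
    target = min(targets, key=lambda t: t["distance_km"])
    sub = "attack" if target["distance_km"] < 5.0 else "pursue"
    return target, sub

def sub_strategy_action(name):
    """Toy maneuver command produced by the selected sub-strategy.
    In the paper this would be an RSAC policy network, not a lookup."""
    commands = {"attack": +1.0, "evade": -1.0, "pursue": +0.5}
    return commands[name]

# One decision step of the two-level controller over a toy formation.
targets = [{"id": 1, "distance_km": 8.0}, {"id": 2, "distance_km": 3.0}]
target, sub = high_level_policy(targets)
action = sub_strategy_action(sub)
```

Each aircraft in the formation would run this loop independently, which is what lets the architecture scale to variable‐size formations.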
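A minimal sketch of the flattening step described in the second abstract: a nested propositional rule is reduced to its leaf attribute–value pairs, then translated into a fixed‐length numeric vector that a self‐organizing neural network can accept. The rule contents, the schema, and the function names are all hypothetical; the actual doctrine encoding is specific to the CAE STRIVE®CGF platform.

```python
def flatten_rule(rule):
    """Flatten a nested propositional rule (a condition/action tree)
    into a flat list of (attribute, value) leaves, discarding hierarchy."""
    leaves = []
    for key, value in rule.items():
        if isinstance(value, dict):
            leaves.extend(flatten_rule(value))
        else:
            leaves.append((key, value))
    return leaves

def to_vector(leaves, schema):
    """Translate symbolic leaves into a fixed-length numeric pattern
    using a schema mapping attribute -> slot index; unset slots stay 0."""
    vec = [0.0] * len(schema)
    for key, value in leaves:
        if key in schema:
            vec[schema[key]] = float(value)
    return vec

# Hypothetical 1-v-1 dogfight rule and attribute schema.
rule = {"if": {"range_km": 2, "aspect_deg": 30}, "then": {"turn_rate": 9}}
schema = {"range_km": 0, "aspect_deg": 1, "turn_rate": 2}
pattern = to_vector(flatten_rule(rule), schema)
# pattern == [2.0, 30.0, 9.0]
```

Collecting the attribute names that appear across all flattened rules is also how the state and action spaces could be extracted automatically, as the abstract describes.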