Clifford-Valued Distributed Optimization Based on Recurrent Neural Networks
Science Citation Index Expanded
Zhejiang Normal University
Abstract
In this paper, we address Clifford-valued distributed optimization problems subject to linear equality and inequality constraints. The objective function is the sum of convex functions defined on the Clifford domain. Based on the generalized Clifford gradient, a system of multiple Clifford-valued recurrent neural networks (RNNs) is proposed for solving the distributed optimization problems. Each Clifford-valued RNN minimizes a local objective function individually, with local interactions with the others. The convergence of the neural system is rigorously proved based on Lyapunov theory. Two illustrative examples are presented to demonstrate the viability of the results.
Keywords
Optimization; Algebra; Recurrent neural networks; Artificial neural networks; Neurons; Neurodynamics; Computer science; Clifford-valued distributed optimization; Clifford-valued neural networks; Lyapunov theory; nonsmooth analysis
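A minimal numerical sketch of the kind of multi-RNN dynamics the abstract describes, under stated assumptions: each Clifford number is represented by its vector of real coefficients (here dimension 4, quaternion-like), each agent's local objective is an illustrative convex quadratic f_i(x) = ||x - a_i||^2 rather than the paper's objectives, and the agents interact over a complete graph. The coupling gain, step size, and graph are hypothetical choices, not taken from the article; the constraints and the generalized Clifford gradient machinery are omitted.

```python
import numpy as np

def run_distributed_rnn(anchors, gain=20.0, dt=0.005, steps=4000):
    """Euler-integrate the coupled dynamics
        dx_i/dt = -grad f_i(x_i) - gain * sum_j (x_i - x_j),
    where f_i(x) = ||x - a_i||^2 is an illustrative local objective and
    the sum runs over all other agents (complete interaction graph)."""
    n, m = anchors.shape
    x = np.zeros((n, m))                      # initial agent states
    for _ in range(steps):
        grad = 2.0 * (x - anchors)            # gradient of ||x_i - a_i||^2
        # complete-graph Laplacian coupling: n * x_i - sum_j x_j
        coupling = n * x - x.sum(axis=0, keepdims=True)
        x = x + dt * (-grad - gain * coupling)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    anchors = rng.normal(size=(3, 4))         # three agents, 4 real coefficients each
    states = run_distributed_rnn(anchors)
    # minimizer of sum_i ||x - a_i||^2 is the mean of the anchors;
    # all agent states should end up close to it
    print(np.max(np.abs(states - anchors.mean(axis=0))))
```

Each agent descends only its own local gradient, and the Laplacian coupling term drives the agents toward consensus, so jointly they approximate the minimizer of the sum; a larger gain tightens consensus at equilibrium.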
