Peer-Reviewed

Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets

Received: 12 June 2016     Accepted: 20 June 2016     Published: 13 July 2016
Abstract

This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of the nodes in a network, and the estimate of each agent is restricted to its own local convex constraint set. To solve this optimization problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. In the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network whose topology is time-varying but satisfies a standard connectivity property. Under the assumption that the network topology is weight-balanced, the proposed distributed subgradient algorithm is proven to converge. In particular, the step-size is allowed to be time-varying, in contrast to previous work on multi-agent optimization that relies on the worst-case assumption of a constant step-size.
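
The abstract describes the method only at a high level; the exact update rule, weight conditions, and step-size rules are given in the full text. Purely as an illustration, the sketch below shows a generic distributed projected subgradient iteration of the kind the abstract describes, using illustrative (not the paper's) nonsmooth local objectives, box-shaped local constraint sets, a fixed doubly stochastic (hence weight-balanced) mixing matrix, and a diminishing step-size of 1/(k+1).

```python
# Illustrative sketch only: a generic distributed projected subgradient
# iteration with nonidentical (box) constraint sets, a doubly stochastic
# mixing matrix, and a diminishing step-size. The objectives, constraint
# sets, weights, and step-size below are assumptions for the example,
# not taken from the paper.
import numpy as np


def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper], standing in for
    an agent's local convex constraint set X_i."""
    return np.clip(x, lower, upper)


def distributed_projected_subgradient(n_agents=4, dim=2, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)

    # Illustrative nonsmooth local objectives f_i(x) = ||x - c_i||_1;
    # sign(x - c_i) is a valid subgradient.
    centers = rng.normal(size=(n_agents, dim))

    # Illustrative nonidentical box constraint sets X_i (chosen to overlap,
    # so their intersection is nonempty).
    lowers = -1.0 + 0.1 * np.arange(n_agents)[:, None]
    uppers = 1.0 + 0.1 * np.arange(n_agents)[:, None]

    # Fixed doubly stochastic (hence weight-balanced) mixing matrix; the
    # paper instead allows switching, weight-balanced topologies A(k).
    A = np.full((n_agents, n_agents), 1.0 / n_agents)

    x = rng.normal(size=(n_agents, dim))  # local estimates, one row per agent
    for k in range(n_iters):
        alpha = 1.0 / (k + 1)             # diminishing (time-varying) step-size
        mixed = A @ x                     # consensus/averaging step
        for i in range(n_agents):
            g = np.sign(mixed[i] - centers[i])  # subgradient of f_i at mixed point
            x[i] = project_box(mixed[i] - alpha * g, lowers[i], uppers[i])
    return x


if __name__ == "__main__":
    # All agents' estimates should end up close to a common constrained minimizer.
    print(distributed_projected_subgradient())
```

Each iteration combines local averaging of neighbors' estimates, a subgradient step on the agent's own objective, and a projection onto the agent's own constraint set; the paper's contribution concerns the conditions on the (possibly switching) weights and the time-varying step-size under which such an iteration converges.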

Published in Applied and Computational Mathematics (Volume 5, Issue 3)
DOI 10.11648/j.acm.20160503.19
Page(s) 150-159
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2016. Published by Science Publishing Group

Keywords

Distributed Optimization, Subgradient Algorithm, Multi-agent Network, Weight-Balanced

Cite This Article
  • APA Style

    Qingguo Lü, Huaqing Li, Li Xiao. (2016). Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets. Applied and Computational Mathematics, 5(3), 150-159. https://doi.org/10.11648/j.acm.20160503.19

    ACS Style

    Qingguo Lü; Huaqing Li; Li Xiao. Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets. Appl. Comput. Math. 2016, 5(3), 150-159. doi: 10.11648/j.acm.20160503.19

    AMA Style

    Qingguo Lü, Huaqing Li, Li Xiao. Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets. Appl Comput Math. 2016;5(3):150-159. doi: 10.11648/j.acm.20160503.19

  • @article{10.11648/j.acm.20160503.19,
      author = {Qingguo Lü and Huaqing Li and Li Xiao},
      title = {Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets},
      journal = {Applied and Computational Mathematics},
      volume = {5},
      number = {3},
      pages = {150-159},
      doi = {10.11648/j.acm.20160503.19},
      url = {https://doi.org/10.11648/j.acm.20160503.19},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.acm.20160503.19},
      abstract = {This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of the nodes in a network, and the estimate of each agent is restricted to its own local convex constraint set. To solve this optimization problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. In the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network whose topology is time-varying but satisfies a standard connectivity property. Under the assumption that the network topology is weight-balanced, the proposed distributed subgradient algorithm is proven to converge. In particular, the step-size is allowed to be time-varying, in contrast to previous work on multi-agent optimization that relies on the worst-case assumption of a constant step-size.},
      year = {2016}
    }

  • TY  - JOUR
    T1  - Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets
    AU  - Qingguo Lü
    AU  - Huaqing Li
    AU  - Li Xiao
    Y1  - 2016/07/13
    PY  - 2016
    N1  - https://doi.org/10.11648/j.acm.20160503.19
    DO  - 10.11648/j.acm.20160503.19
    T2  - Applied and Computational Mathematics
    JF  - Applied and Computational Mathematics
    JO  - Applied and Computational Mathematics
    SP  - 150
    EP  - 159
    PB  - Science Publishing Group
    SN  - 2328-5613
    UR  - https://doi.org/10.11648/j.acm.20160503.19
    AB  - This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of the nodes in a network, and the estimate of each agent is restricted to its own local convex constraint set. To solve this optimization problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. In the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network whose topology is time-varying but satisfies a standard connectivity property. Under the assumption that the network topology is weight-balanced, the proposed distributed subgradient algorithm is proven to converge. In particular, the step-size is allowed to be time-varying, in contrast to previous work on multi-agent optimization that relies on the worst-case assumption of a constant step-size.
    VL  - 5
    IS  - 3
    ER  - 

Author Information
  • Qingguo Lü, College of Electronic and Information Engineering, Southwest University, Chongqing, PR China

  • Huaqing Li, College of Electronic and Information Engineering, Southwest University, Chongqing, PR China

  • Li Xiao, Department of Mathematics and Information Engineering, Chongqing University of Education, Chongqing, PR China