Ion Necoară






Project PCE L2O-MOC (UEFISCDI)



Numerical Methods course support

Optimization course support



Short Biography (Full CV)
Higher education:

  • Nov. 2014, Habilitation thesis, UPB, Coordinate Descent Methods for Sparse Optimization (pdf)
  • 2002 – 2006: PhD in Applied Mathematics (Cum Laude), Delft University of Technology, the Netherlands
  • 2000 – 2002: Master in Optimization and Statistics, Faculty of Mathematics, University of Bucharest
  • 1996 – 2000: Bachelor in Mathematics, Faculty of Mathematics, University of Bucharest

Academic positions:

  • 2015 – : PhD Advisor in Systems Engineering (see the attached PhD thesis proposals)
  • 2015 – : Professor of Numerical Methods & Optimization, Fac. Automatic Control & Computers, UPB
  • 2012 – 2015: Associate Professor, Faculty of Automatic Control & Computers, UPB
  • 2009 – 2012: Assistant Professor, Faculty of Automatic Control & Computers, UPB
  • 2007 – 2009: Post-doctoral fellowship at KU Leuven, Belgium

Research activities:

  • Author of 100+ research papers; 2000+ citations and h-index 23 on Google Scholar.
  • Published 80+ ISI research papers; 600+ citations and h-index 15 on Web of Science.
  • Involved in several EU-FP7 projects: EMBOCON (principal investigator), HD-MPC (member), HYCON (member); national projects (UEFISCDI): METNET, MoCOBiDS, ScaleFreeNet (principal investigator).
  • Head of Distributed Optimization and Control Group (DOC) (see website).
  • Supervised 4 PhD students and several MSc & bachelor students.
  • Invited professor at a number of universities, including: MIT, Cornell Univ., Lehigh Univ., UNC (USA); Univ. Catholique Louvain and KU Leuven (Belgium); Edinburgh Univ. and Cambridge Univ. (UK); TU Delft (Netherlands); ETH Zurich and EPFL (Switzerland); Lund University (Sweden); SUPELEC (France); NTNU (Norway); IMT Lucca (Italy); OVGU (Germany); KAUST (Saudi Arabia).
  • Member of the IFAC Committee on Optimal Control and of the IPC of various international conferences.

Awards:

  • National Order of Faithful Service, awarded by the Romanian Presidency, 2017.
  • Fulbright Visiting Professor fellowship at UNC (USA), 2017.
  • Excellence in Research award in Engineering Sciences, awarded by Ad Astra, 2016.
  • Romanian Academy Award in Mathematical Sciences & Information Technology (Gr. Moisil), 2015.
  • Best Paper Award for a paper published in Journal of Global Optimization in 2015.
  • Best Paper Award at International Conference on Systems Theory, Control and Computing 2014.
  • UEFISCDI Fellowship (Young Independent Research Team Fellowship, 2010-2013 & 2015-2017).
  • University Fellowship (UPB, Excellence in Research Fellowship, 2010-2013).

Main current fields of interest:

  • Theory and methods for Convex/Distributed/Big Data Optimization.
  • Developing optimization algorithms with a focus on structure exploiting (sparsity, convexity, stochasticity, low-rank, parallel and distributed computations).
  • Mathematical guarantees about performance of numerical optimization algorithms.
  • Applying optimization techniques to Machine Learning problems and developing new advanced Controller design algorithms for complex systems (Embedded and Distributed Control/MPC).
  • Practical applications include: Big Data Models (Data Analytics, Machine Learning, Weather Forecasts, Smart Electricity Grids, Traffic Networks, Distributed Control, Compressive Sensing, Image/Signal processing), Embedded Control, Control of Robots, Automotive Industry.
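Much of the work above centers on coordinate descent with structure exploitation. As a purely illustrative sketch (a generic textbook scheme for least squares, not the algorithm of any particular paper listed here; the function name and setup are my own), one random coordinate gradient descent iteration looks like this:

```python
import numpy as np

def random_coordinate_descent(A, b, iters=500, seed=0):
    """Minimize 0.5*||A x - b||^2 by updating one random coordinate per step.

    Assumes every column of A is nonzero, so each coordinate-wise
    Lipschitz constant L[i] is positive.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0)           # coordinate-wise Lipschitz constants
    for _ in range(iters):
        i = rng.integers(n)            # pick a coordinate uniformly at random
        g_i = A[:, i] @ (A @ x - b)    # partial gradient with respect to x_i
        x[i] -= g_i / L[i]             # exact minimization along coordinate i
    return x
```

For a quadratic objective the step 1/L[i] minimizes the function exactly along the chosen coordinate; each iteration touches only one column of A, which is what makes such methods attractive for huge-scale problems.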

PhD Thesis Proposals:

  • See the attached list of PhD thesis proposals.
  • I am always looking for talented and self-motivated PhD students who want to perform research within the broad areas of Optimization, Big Data Analytics and Control. You will receive competitive benefits and work at international standards. We support visits to strong research groups in Europe, conference travel and interactions with the best researchers in the field.
  • Students interested in doing research/practical projects can also contact me.
  • For anyone interested, here are some tips on how to read a scientific paper (by Mitzenmacher) (pdf).

My PhD students at UPB:

  • Daniela Lupu (PhD student, 2019-…): Distributed methods over graphs.
  • dr. Andrei Patrascu (2012-2015): Efficient first order methods for sparse convex optimization (PhD thesis). Assistant Professor at the University of Bucharest.
  • dr. Valentin Nedelcu (2010-2013): Rate analysis of dual gradient methods (PhD thesis). Research Scientist at Assystem.

Books:




Recent publications
2019

  • I. Necoara, A. Nedich, Minibatch stochastic subgradient-based projection algorithms for solving convex inequalities, June 2019.
  • I. Mezghani, Q. Tran-Dinh, I. Necoara, A. Papavasiliou, A globally convergent Gauss-Newton algorithm for AC optimal power flow, submitted, May 2019, (arxiv).
  • A. Nedich, I. Necoara, Random minibatch projection algorithms for convex problems with functional constraints, submitted, March 2019, (arxiv).
  • I. Necoara, Faster randomized block Kaczmarz algorithms, March 2019, to appear in SIAM Journal on Matrix Analysis and Applications, (arxiv).
  • I. Necoara, T. Ionescu, H2 model reduction of linear network systems by moment matching and optimization, February 2019, to appear in IEEE Transactions on Automatic Control, (arxiv).
  • O. Fercoq, A. Alacaoglu, I. Necoara, V. Cevher, Almost surely constrained convex optimization, International Conference on Machine Learning (ICML), January 2019, (arxiv).
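The randomized Kaczmarz entry above builds on a classical row-projection iteration. As an illustrative aside (the plain method with norm-proportional row sampling, sketched from the standard literature, not the faster block variants studied in the paper), it can be written as:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system A x = b by projecting onto one random row
    per iteration, sampled with probability proportional to its squared norm.

    Assumes A has no zero rows and the system is consistent.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()                          # norm-proportional row sampling
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a    # project onto {y : a @ y = b[i]}
    return x
```

Each step uses a single row of A, so the iteration cost is O(n) regardless of the number of equations; the expected error contracts linearly for consistent systems.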

2018

  • I. Necoara, T. Ionescu, Optimal H2 moment matching-based model reduction for linear systems by (non)convex optimization, submitted, December 2018 (arxiv).
  • I. Necoara, M. Takac, Randomized sketch descent methods for non-separable linearly constrained optimization, submitted, July 2018 (arxiv).
  • T. Sun, I. Necoara, Q. Tran-Dinh, Composite Convex Optimization with Global and Local Inexact Oracles, submitted, July 2018 (arxiv).
  • I. Necoara, P. Richtarik, A. Patrascu, Randomized projection methods for convex feasibility problems: conditioning and convergence rates, to appear in SIAM Journal on Optimization, March 2017 (arxiv).
  • A. Patrascu, I. Necoara, Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization, Journal of Machine Learning Research, 18(198): 1–42, 2018 (pdf).
  • A. Patrascu, I. Necoara, On the convergence of inexact projection first order methods for convex minimization, IEEE Transactions on Automatic Control, 63(10): 3317–3329, 2018 (pdf).

2017

  • I. Necoara, Nonasymptotic convergence rates of stochastic first order methods for composite convex optimization, Technical Report, University Politehnica Bucharest, June 2017 (pdf).
  • I. Necoara, Coordinate gradient descent methods, chapter in book: Big Data and Computational Intelligence in Networking, Y. Wu et al. (Eds.), Taylor & Francis LLC – CRC Press, 2017 (pdf).
  • I. Necoara, Yu. Nesterov, F. Glineur, Random block coordinate descent methods for linearly constrained optimization over networks, Journal of Optimization Theory and Applications, 173(1): 227–254, 2017, (pdf) or (pdf arxiv).
  • A. Patrascu, I. Necoara, Q. Tran-Dinh, Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization, Optimization Letters, 11(3): 609–626, 2017, (pdf arxiv).
  • N.A. Nguyen, S. Olaru, P. Rodriguez-Ayerbe, M. Hovd, I. Necoara, Constructive solution of inverse parametric linear/quadratic programming problems, Journal of Optimization Theory and Applications, 172(2): 623–648, 2017 (pdf).

2016

  • I. Necoara, D. Clipici, Parallel random coordinate descent methods for composite minimization: convergence analysis and error bounds, SIAM Journal on Optimization, 26(1): 197-226, 2016 (pdf arxiv).
  • I. Necoara, A. Patrascu, Iteration complexity analysis of dual first order methods for conic convex programming, Optimization Methods and Software, 31(3):645-678, 2016, (pdf arxiv).
  • Q. Tran Dinh, I. Necoara, M. Diehl, Fast Inexact Decomposition Algorithms For Large-Scale Separable Convex Optimization, Optimization, 65(2): 325–356, 2016, (pdf arxiv).

2015

  • I. Necoara, Yu. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Mathematical Programming, 2018, (updated version here: pdf) (or on arxiv).
  • I. Necoara, A. Patrascu, F. Glineur, Complexity certifications of first order inexact Lagrangian and penalty methods for conic convex programming, Optimization Methods and Software, 2015, (arxiv).
  • I. Necoara, A. Patrascu, A. Nedich, Complexity certifications of first order inexact Lagrangian methods for general convex programming, chapter in Developments in Model-Based Optimization and Control, Springer, 2015, (arxiv).
  • A. Patrascu, I. Necoara, Random coordinate descent methods for l0 regularized convex optimization, IEEE Transactions on Automatic Control, 60(7): 1811–1824, 2015, (arxiv).
  • A. Patrascu, I. Necoara, Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization, Journal of Global Optimization, 61(1): 19–46, 2015 (received Best Paper Award for a paper published in Journal of Global Optimization in 2015), (arxiv).
  • I. Necoara, V. Nedelcu, On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems, Automatica, 55(5): 209–216, 2015, (arxiv).
  • I. Necoara, L. Ferranti, T. Keviczky, An adaptive constraint tightening approach to linear MPC based on approximation algorithms for optimization, Optimal Control Applications and Methods, 36(5): 648–666, 2015, (pdf).
  • I. Necoara, Computational complexity certification for dual gradient method: application to embedded MPC, Systems and Control Letters, 81(7): 49–56, 2015 (pdf).
  • I. Necoara, A. Patrascu, DuQuad: an inexact (augmented) dual first order algorithm for quadratic programming, Tech. Rep., UPB, 2015, (arxiv).

2014

  • I. Necoara, V. Nedelcu, Rate analysis of inexact dual first order methods: application to dual decomposition, IEEE Transactions on Automatic Control, 59(5): 1232 – 1243, 2014, (arxiv).
  • I. Necoara, A. Patrascu, A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints, Computational Optim. & Applications, 57(2): 307-337, 2014, (arxiv).
  • Q. Tran Dinh, I. Necoara, M. Diehl, Path-Following Gradient-Based Decomposition Algorithms For Separable Convex Optimization, Journal of Global Optimization, 59(1): 59-80, 2014, (arxiv).
  • V. Nedelcu, I. Necoara, Q. Tran Dinh, Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC, SIAM J. Control and Optimization, 52(5): 3109-3134, 2014, (pdf).

2013

  • I. Necoara, Yu. Nesterov, F. Glineur, A random coordinate descent method on large-scale optimization problems with linear constraints, Tech. rep., UPB, 2011 (ICCOPT 2013, Lisbon), (pdf).
  • I. Necoara, Random coordinate descent algorithms for multi-agent convex optimization over networks, IEEE Transactions on Automatic Control, 58(8): 2001-2012, 2013, (pdf).
  • Q. Tran Dinh, I. Necoara, C. Savorgnan, M. Diehl, An inexact Perturbed Path-Following Method for Lagrangian Decomposition in Large-Scale Separable Convex Optimization, SIAM J. Optimization, 23(1): 95-125, 2013, (pdf).
  • I. Necoara, D. Clipici, Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC, Journal of Process Control, 23(3): 243–253, 2013, (pdf).
  • I. Necoara, V. Nedelcu, Distributed dual gradient methods and error bound conditions, Tech. rep., 2013, (pdf).

Some old papers

  • I. Necoara, V. Nedelcu, I. Dumitrache, Parallel and distributed optimization methods for estimation and control in networks, Journal of Process Control, 21(5): 756 – 766, 2011, (pdf).
  • P. Tsiaflakis, I. Necoara, J.A.K. Suykens, M. Moonen, Improved dual decomposition based optimization for DSL dynamic spectrum management, IEEE Trans. Signal Processing, 58(4): 2230–2245, 2010.
  • I. Necoara, J. Suykens, An interior-point Lagrangian decomposition method for separable convex optimization, Journal of Optimization Theory and Applications, 143(3): 567–588, 2009, (pdf).
  • I. Necoara, J. Suykens, Application of a smoothing technique to decomposition in convex optimization, IEEE Transactions on Automatic Control, 53(11): 2674–2679, 2008, (pdf).
  • M. Baes, M. Diehl, I. Necoara, Every nonlinear control system can be obtained by parametric convex programming, IEEE Transactions on Automatic Control, 53(8): 1963–1967, 2008.


Research Projects (principal investigator)

  • 6. UEFISCDI, PCE: ScaleFreeNet (Scale-free modeling and optimization techniques for control of complex networks), 2017-2019, see website for this ongoing project.
  • 5. UEFISCDI, Human Resources: MoCOBiDS (Modelling, Control and Optimization for Big Data Systems), 2015-2017.
  • 4. WBI Belgium – Romanian Academy: scientific cooperation programme between the Romanian Academy, WBI and FRS/FNRS, 2016-2018.
  • 3. EU, FP7, ICT-STREP: EMBOCON (Embedded Optimization for Resource Constrained Platforms), 2010 – 2013.
  • 2. ANCS, PNII: EMBOCON (Embedded Optimization for Resource Constrained Platforms), 2010-2012.
  • 1. UEFISCDI, Human Resources: METNET (Mathematical Tools for Network Systems), 2010-2013.


Some talks

  • 8. ICSTCC 2018 – International Conference on System Theory, Control and Computing, Sinaia, October 2018, Optimization in control: recent advances and challenges.
  • 7. EMBOPT 2014 – Workshop on embedded optimization, Lucca, September 2014, Iteration complexity analysis of dual first order methods (pdf).
  • 6. HYCON2 Workshop on Distributed Optimization in Large Networks and its Applications, Zurich, July 2013, Coordinate descent methods for huge scale problems (pdf).
  • 5. IMT Lucca, 2013, Rate analysis of inexact dual gradient methods: application to embedded and distributed MPC (pdf).
  • 4. ACSE – Univ. Politehnica Bucharest, December 2012, Decomposition methods for large-scale convex problems: applications in engineering (pdf).
  • 3. ETH Zurich, October 2010, Distributed optimization methods for estimation and control in networks (pdf).
  • 2. Lund University, May 2010, Smoothing Techniques for Distributed Control over Networks (pdf).
  • 1. Supelec, ETH Zurich, 2008, Robust control of a class of hybrid systems (pdf).


Software

  • Primal-Dual Toolbox for SVM (PD-SVM): Matlab code toolbox for solving large-scale SVM problems – download
  • QP solver (DuQuad): C code toolbox for solving convex QPs with dual first order methods – download
  • Parallel Optimization Toolbox (POPT): C code toolbox for solving large-scale structured QPs – download


Project PCE, L2O-MOC

The research leading to these results has received funding from:

Unitatea Executiva pentru Finantarea Invatamantului Superior, a Cercetarii, Dezvoltarii si Inovarii (UEFISCDI, PN-III-P4-PCE-2021-0720) – the Executive Agency for Higher Education, Research, Development and Innovation Funding

Project title: Learning to optimise: from mathematical foundations to modeling and control (L2O-MOC)

PN-III-P4-PCE-2021-0720, Contract No. 70/2022.

The "Gheorghe Mihoc – Caius Iacob" Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy (ISMMA)

Abstract: The next generation of smart control systems (SCS) is expected to learn models from data and take optimal decisions in real time, leading to increased performance, safety, energy efficiency, and ultimately value creation. Since SCS produce large amounts of data, machine learning (ML) aims at extracting information from them. The key step in any ML technique is training, where an optimization problem is solved to tune the parameters of the ML model. In this project we reverse this paradigm, i.e., we use ML for devising efficient optimization algorithms. Our key observation is that the modelling and control approaches for SCS yield optimization problems that are extremely challenging due to their large dimension, stochasticity, nonconvexity, etc. Optimization algorithms addressing such problems usually involve many parameters that need to be hand-tuned after time-consuming experimentation and are prone to ill-conditioning and slow convergence. To address these challenges, L2O-MOC will develop learning-based techniques for devising efficient tuning-free optimization algorithms. A novel universal framework will be developed, which will serve as a solid theoretical ground for the design of new learning paradigms to train optimization methods with mathematical guarantees for convergence. Modelling and control problems for SCS, e.g. those arising in power networks, will provide the datasets for the training.

Project team:

    ISMMA project team: Prof. Ion Necoara, CSI Gabriela Marinoschi, Prof. Tudor C. Ionescu, Prof. Lucian Toma; PhD students: Daniela Lupu and Nitesh Kumar Singh.

Expected results:

  • 2022 – Efficient optimization algorithms (majorization-minimization type); Methods for modeling and control of smart systems. Expected results: 1 journal paper, 1 conference paper published. Achieved results: 1 ISI journal paper accepted; 2 conference papers accepted; 2 papers submitted to journals; 2 papers submitted to conferences. Scientific Report 2022
  • 2023 – Efficient optimization algorithms (using relaxation of convexity/smoothness); Methods for modeling and control of smart systems (model reduction methods); Learning to optimise (control-based approach). Expected results: 3 journal papers, 3 conference papers. Achieved results:….; 1 Matlab toolbox. Scientific Report 2023
  • 2024 – Efficient optimization algorithms (stochastic algorithms); Methods for modeling and control of smart systems (model predictive control); Learning to optimise (model predictive control for power systems). Expected results: 3 journal papers, 3 conference papers. Achieved results:…..  Scientific Report 2024

Publications

Papers accepted/submitted in ISI journals

  • [J3] L. El Bourkhissi, I. Necoara, Complexity of linearized augmented Lagrangian for optimization with nonlinear equality constraints, submitted to Journal of Global Optimization, December 2022.
  • [J2] I. Necoara, D. Lupu, General higher-order majorization-minimization algorithms for (non)convex optimization, submitted to SIAM Journal on Optimization, December 2022.
  • [J1] I. Necoara, N.K. Singh, Stochastic subgradient projection methods for composite optimization with functional constraints, Journal of Machine Learning Research (Q1, IF=3.6), 23(265), 1-35, 2022 (https://www.jmlr.org/papers/volume23/21-1062/21-1062.pdf).

Papers in progress

  • [P1] …

Papers in conferences

  • [C4] D. Lupu, I. Necoara, Deep unfolding projected first order methods-based architectures: application to linear model predictive control, submitted to European Control Conference 2023.
  • [C3] Y. Nabou, L. Toma, I. Necoara, Modified projected Gauss-Newton method for constrained nonlinear least-squares: application to power flow analysis, submitted to European Control Conference 2023.
  • [C2] F. Chorobura, D. Lupu, I. Necoara, Coordinate projected gradient descent minimization and its application to orthogonal nonnegative matrix factorization, IEEE Conference on Decision and Control (CDC), accepted, 2022.
  • [C1] T.C. Ionescu, L. El Bourkhissi, I. Necoara, Least squares moment matching-based model reduction using convex optimization, International Conference on System Theory, Control and Computing (ICSTCC), 2022, pp. 325-330, doi: 10.1109/ICSTCC55426.2022.9931837.

Software packages

  • …..


Visits: