Ion Necoară


University Politehnica Bucharest
Faculty of Automatic Control and Computers
Department of Automatic Control and Systems Engineering
Spl. Independentei, 313, Bucharest, Romania
Tel: +40.21.402.9195
Emails: ion.necoara@upb.ro, i.necoara@yahoo.com



Courses     Biography      Publications      Projects      Talks      Software



Project PCE L2O-MOC (UEFISCDI)



Numerical Methods course support

Optimization course support

Machine Learning Techniques course support

  • Course: Curs ML
  • Labs and seminars: Lab. ML


  • Short Biography (Full CV)
    Higher education:

    • July 2021, Habilitation thesis in Mathematics, Romanian Academy – ISMMA, Stochastic Optimization
    • Nov. 2014, Habilitation thesis in Engineering Sciences, UPB, Coordinate Descent Methods for Sparse Optimization (pdf)
    • 2002 – 2006: PhD in Applied Mathematics (Cum Laude), Delft University of Technology, the Netherlands
    • 2000 – 2002: Master in Optimization and Statistics, Faculty of Mathematics, University of Bucharest
    • 1996 – 2000: Bachelor in Mathematics, Faculty of Mathematics, University of Bucharest

    Academic positions:

    • 2021 – : Senior Researcher I, Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy (ISMMA)
    • 2021 – : PhD Advisor in Mathematics (see the attached PhD thesis proposals)
    • 2015 – : PhD Advisor in Systems Engineering (see the attached PhD thesis proposals)
    • 2015 – : Professor of Numerical Methods & Optimization, Fac. Automatic Control & Computers, UPB
    • 2012 – 2015: Associate Professor, Faculty of Automatic Control & Computers, UPB
    • 2009 – 2012: Assistant Professor, Faculty of Automatic Control & Computers, UPB
    • 2007 – 2009: Post-doctoral fellowship at KU Leuven, Belgium

    Research activities:

    • Author of about 180 research papers (130 of them ISI-indexed).
    • Citations: 3200 (h-index 29) on Google Scholar; 1300 (h-index 20) on Web of Science; 1200 (h-index 22) on Scopus.
    • Principal investigator in several EU projects: TraDE-OPT (H2020), ELO-X (H2020), EMBOCON (FP7); and national projects (UEFISCDI): L2O-MOC (PCE), ELO-Hyp (NO Grants), ScaleFreeNet (PCE), MoCOBiDS (TE), METNet (TE).
    • Head of Optimization, Learning and Control Group (OLC) (see website).
    • Supervised 8 PhD students and several MSc & bachelor students.
    • Invited professor at a number of universities, including: MIT, Cornell Univ., Lehigh Univ., UNC (USA); Univ. Catholique Louvain and KU Leuven (Belgium); Edinburgh Univ. and Cambridge Univ. (UK); TU Delft (Netherlands); ETH Zurich and EPFL (Switzerland); Lund University (Sweden); SUPELEC (France); NTNU (Norway); IMT Lucca (Italy); OVGU (Germany); KAUST (Saudi Arabia).
    • Member of the IFAC Technical Committee on Optimal Control and of the IPC of various international conferences.
    • Member of the editorial boards of IEEE Transactions on Automatic Control (IEEE), Computational Optimization and Applications (Springer), and EURO Journal on Computational Optimization (Elsevier).

    Awards:

    • General chair of the European Control Conference 2023 (ECC23).
    • Romanian representative on the General Assembly of the European Control Association (EUCA), 2020–.
    • National Order "Faithful Service", awarded by the Romanian Presidency, 2017.
    • Fulbright Visiting Professor fellowship at UNC (USA), 2017.
    • Excellence in Research award in Engineering Sciences, awarded by Ad Astra, 2016.
    • Romanian Academy Award in Mathematical Sciences & Information Technology (Gr. Moisil), 2015.
    • Best Paper Award for a paper published in Journal of Global Optimization in 2015.
    • Best Paper Award at International Conference on Systems Theory, Control and Computing 2014.
    • UEFISCDI Young Independent Research Team Fellowship (2010-2013 & 2015-2017); UPB Excellence in Research Fellowship (2010-2013).

    Main current fields of interest:

    • Theory and methods for convex, distributed and big-data optimization.
    • Development of optimization algorithms that exploit structure (sparsity, convexity, stochasticity, low rank, parallel and distributed computation).
    • Mathematical guarantees on the performance of numerical optimization algorithms.
    • Application of optimization techniques to machine learning problems.
    • Design of advanced controllers for complex systems (embedded, distributed, MPC).
    • Practical applications include: big-data models (data analytics, machine learning, weather forecasting, smart electricity grids, traffic networks, distributed control, compressive sensing, image/signal processing), embedded control, control of robots, and the automotive industry.

    Phd Thesis Proposals:

    • See the attached list of PhD thesis proposals.
    • I am always looking for talented and self-motivated PhD students who want to carry out research within the broad areas of Optimization, Big Data Analytics and Control. You will receive competitive benefits and work at international standards. We support visits to strong research groups in Europe, conference travel and interaction with the best researchers in the field.
    • Students interested in research or practical projects are also welcome to contact me.
    • For anyone interested, here are some tips on how to read a scientific paper (by Mitzenmacher) (pdf).

    My PhD students at UPB:

    • PhD candidate Lahcen El Bourkhissi (2021-): Optimization and learning techniques for nonlinear MPC.
    • Flavia Chorobura (2020-2024): Adaptive first-order methods for structured optimization.
    • Yassine Nabou (2020-2024): Higher-order methods for composite optimization and applications.
    • Nitesh Kumar Singh (2020-2024): Stochastic methods for functional constrained optimization.
    • Liliana Ghinea (2019-2024, in cotutelle): Advanced control of complex processes.
    • Daniela Lupu (2019-2023): Contributions to the analysis of some higher-order methods and applications.
    • Andrei Patrascu (2012-2015): Efficient first order methods for sparse convex optimization (phd thesis). Assist. Professor at University Bucharest.
    • Valentin Nedelcu (2010-2013): Rate analysis of dual gradient methods (phd thesis). Research Scientist at Assystem.


    Recent publications

    2024

    • Ion Necoara, Higher-order tensor methods for minimizing difference of convex functions, arxiv, 2024.
    • Nitesh Kumar Singh, Ion Necoara, A stochastic moving ball approximation method for smooth convex constrained minimization, arxiv, 2024.
    • Daniela Lupu, Joseph Garrett, Tor Arne Johansen, Milica Orlandic, Ion Necoara, Quick unsupervised hyperspectral dimensionality reduction for earth observation: a comparison, arxiv, 2024.

    2023

    • Y. Nabou, Ion Necoara, Moving Higher-Order Taylor Approximations Method for Smooth Constrained Minimization Problems, SIAM Journal on Optimization, 2023.
    • Lionel Tondji, Ion Necoara, Dirk A. Lorenz, An accelerated randomized Bregman-Kaczmarz method for convex linearly constrained optimization, Linear Algebra and Its Applications, 2023.
    • Nitesh K. Singh, Ion Necoara, Stochastic halfspace approximation method for convex optimization with nonsmooth functional constraints, IEEE Transactions on Automatic Control, 2023.
    • L. El Bourkhissi, I. Necoara, Complexity of linearized quadratic penalty for optimization with nonlinear equality constraints, Journal of Global Optimization, 2023.
    • F. Chorobura, I. Necoara, Coordinate descent methods beyond smoothness and separability, Computational Optimization and Applications, doi: 10.1007/s10589-024-00556-w, 2024.
    • D. Lupu, I. Necoara, Exact representation and efficient approximations of linear model predictive control laws via HardTanh type deep neural networks, Systems and Control Letters, 186, 2024.
    • I. Necoara, F. Chorobura, Efficiency of stochastic coordinate proximal gradient methods on nonseparable composite optimization, Mathematics of Operations Research, 2023.
    • Y. Nabou, I. Necoara, A proximal gradient method with inexact oracle of degree q for composite optimization, Optimization Letters, 2023.

    2022

    • Y. Nabou, I. Necoara, Efficiency of higher-order algorithms for minimizing general composite optimization, Computational Optimization and Applications, doi: 10.1007/s10589-023-00533-9, 2023.
    • D. Lupu, I. Necoara, Convergence analysis of stochastic higher-order majorization-minimization algorithms, Optimization Methods and Software, doi: 10.1080/10556788.2023.2256447, 2023.
    • F. Chorobura, I. Necoara, Random coordinate descent methods for nonseparable composite optimization, SIAM Journal on Optimization, vol. 33, no. 3, 2023.
    • I. Necoara, D. Lupu, General higher-order majorization-minimization algorithms for (non)convex optimization, 2022.

    2021

    • N.K. Singh, I. Necoara, V. Kungurtsev, Mini-batch stochastic subgradient for functional constrained optimization, Optimization, DOI: 10.1080/02331934.2023.2189015, 2023.
    • I. Necoara, N.K. Singh, Stochastic subgradient projection methods for composite optimization with functional constraints, Journal of Machine Learning Research, 23, 1-35, 2022.
    • D. Lupu, I. Necoara, J. Garrett, T.A. Johansen, Stochastic higher-order independent component analysis for hyperspectral dimensionality reduction, IEEE Transactions on Computational Imaging, vol. 8, pp. 1184-1194, 2022.

    2020

    • I. Necoara, O. Fercoq, Linear convergence of dual coordinate descent on non-polyhedral convex problems, Mathematics of Operations Research, vol. 47, pp. 2641–2666, 2022.
    • I. Necoara, Stochastic block projection algorithms with extrapolation for convex feasibility problems, Optimization Methods and Software, vol. 37, pp. 1845–1875, 2022.
    • T. Ionescu, O. Iftime, I. Necoara, Model reduction with pole-zero placement and high order moment matching, Automatica, vol. 138, 2022.

    2019

    • I. Necoara, A. Nedich, Minibatch stochastic subgradient-based projection algorithms for solving convex inequalities, Computational Optimization and Applications, vol. 80, no. 1, pp. 121–152, Sept. 2021.
    • I. Mezghani, Q. Tran-Dinh, I. Necoara, A. Papavasiliou, A globally convergent Gauss-Newton algorithm for AC optimal power flow, submitted, May 2019, (arxiv).
    • A. Nedich, I. Necoara, Random minibatch projection algorithms for convex problems with functional constraints, Applied Mathematics and Optimization, vol. 80, no. 3, pp. 801–833, 2019, (arxiv).
    • I. Necoara, Faster randomized block Kaczmarz algorithms, SIAM Journal on Matrix Analysis and Applications, vol. 40, no. 4, pp. 1425–1452, 2019, (arxiv).
    • I. Necoara, T. Ionescu, H2 model reduction of linear network systems by moment matching and optimization, IEEE Transactions on Automatic Control, vol. 65, no. 12, pp. 5328–5335, Dec. 2020 (arxiv).
    • O. Fercoq, A. Alacaoglu, I. Necoara, V. Cevher, Almost surely constrained convex optimization, International Conference on Machine Learning (ICML), January 2019, (arxiv).

    2018

    • I. Necoara, T. Ionescu, Optimal H2 moment matching-based model reduction for linear systems by (non)convex optimization, submitted, December 2018 (arxiv).
    • I. Necoara, M. Takac, Sketch descent methods for non-separable linearly constrained optimization, IMA Journal of Numerical Analysis, vol. 41, no. 2, pp. 1056–1092, 2021 (arxiv).
    • T. Sun, I. Necoara, Q. Tran-Dinh, Convex Optimization with Global and Local Inexact Oracles, Computational Optimization and Applications, vol. 76, no. 1, pp. 69–124, 2020 (arxiv).
    • I. Necoara, P. Richtarik, A. Patrascu, Randomized projection methods for convex feasibility problems: conditioning and convergence rates, SIAM Journal on Optimization, vol. 29, no. 4, pp. 2814–2852, 2019 (arxiv).
    • A. Patrascu, I. Necoara, Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization, Journal of Machine Learning Research, 18(198): 1−42, 2018 (pdf).
    • A. Patrascu, I. Necoara, On the convergence of inexact projection first order methods for convex minimization, IEEE Transactions on Automatic Control, 63(10): 3317–3329, 2018 (pdf).

    2017

    • I. Necoara, General convergence analysis of stochastic first order methods for composite optimization, Journal of Optimization Theory and Applications, vol. 189, no. 1, pp. 66–95, Apr. 2021 (pdf).
    • I. Necoara, Coordinate gradient descent methods, chapter in: Big Data and Computational Intelligence in Networking, Y. Wu et al. (Eds.), Taylor & Francis LLC – CRC Press, 2017 (pdf).
    • I. Necoara, Yu. Nesterov, F. Glineur, Random block coordinate descent methods for linearly constrained optimization over networks, Journal of Optimization Theory and Applications, 173(1): 227–254, 2017, (pdf) or (pdf arxiv).
    • A. Patrascu, I. Necoara, Q. Tran-Dinh, Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization, Optimization Letters, 11(3): 609–626, 2017, (pdf arxiv).
    • N.A. Nguyen, S. Olaru, P. Rodriguez-Ayerbe, M. Hovd, I. Necoara, Constructive solution of inverse parametric linear/quadratic programming problems, Journal of Optimization Theory and Applications, 172(2): 623–648, 2017 (pdf).

    2016

    • I. Necoara, D. Clipici, Parallel random coordinate descent methods for composite minimization: convergence analysis and error bounds, SIAM Journal on Optimization, 26(1): 197-226, 2016 (pdf arxiv).
    • I. Necoara, A. Patrascu, Iteration complexity analysis of dual first order methods for conic convex programming, Optimization Methods and Software, 31(3):645-678, 2016, (pdf arxiv).
    • Q. Tran Dinh, I. Necoara, M. Diehl, Fast Inexact Decomposition Algorithms For Large-Scale Separable Convex Optimization, Optimization, 65(2): 325–356, 2016, (pdf arxiv).

    2015

    • I. Necoara, Yu. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Mathematical Programming, 2018, (updated version here: pdf)(or on arxiv).
    • I. Necoara, A. Patrascu, F. Glineur, Complexity certifications of first order inexact Lagrangian and penalty methods for conic convex programming, Optimization Methods and Software, 2015, (arxiv).
    • I. Necoara, A. Patrascu, A. Nedich, Complexity certifications of first order inexact Lagrangian methods for general convex programming, chapter in Developments in Model-Based Optimization and Control, Springer, 2015, (arxiv).
    • A. Patrascu, I. Necoara, Random coordinate descent methods for l0 regularized convex optimization, IEEE Transactions on Automatic Control, 60(7): 1811–1824, 2015, (arxiv).
    • A. Patrascu, I. Necoara, Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization, Journal of Global Optimization, 61(1): 19–46, 2015 (received Best Paper Award for a paper published in Journal of Global Optimization in 2015), (arxiv).
    • I. Necoara, V. Nedelcu, On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems, Automatica, 55(5): 209–216, 2015, (arxiv).
    • I. Necoara, L. Ferranti, T. Keviczky, An adaptive constraint tightening approach to linear MPC based on approximation algorithms for optimization, Optimal Control Applications and Methods, 36(5): 648–666, 2015, (pdf).
    • I. Necoara, Computational complexity certification for dual gradient method: application to embedded MPC, Systems and Control Letters, 81(7): 49–56, 2015 (pdf).
    • I. Necoara, A. Patrascu, DuQuad: an inexact (augmented) dual first order algorithm for quadratic programming, Tech. Rep., UPB, 2015, (arxiv).

    2014

    • I. Necoara, V. Nedelcu, Rate analysis of inexact dual first order methods: application to dual decomposition, IEEE Transactions on Automatic Control, 59(5): 1232 – 1243, 2014, (arxiv).
    • I. Necoara, A. Patrascu, A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints, Computational Optim. & Applications, 57(2): 307-337, 2014, (arxiv).
    • Q. Tran Dinh, I. Necoara, M. Diehl, Path-Following Gradient-Based Decomposition Algorithms For Separable Convex Optimization, Journal of Global Optimization, 59(1): 59-80, 2014, (arxiv).
    • V. Nedelcu, I. Necoara, Q. Tran Dinh, Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC, SIAM J. Control and Optimization, 52(5): 3109-3134, 2014, (pdf).

    2013

    • I. Necoara, Yu. Nesterov, F. Glineur, A random coordinate descent method on large-scale optimization problems with linear constraints, Tech. rep., UPB, 2011 (ICCOPT 2013, Lisbon), (pdf).
    • I. Necoara, Random coordinate descent algorithms for multi-agent convex optimization over networks, IEEE Transactions on Automatic Control, 58(8): 2001-2012, 2013, (pdf).
    • Q. Tran Dinh, I. Necoara, C. Savorgnan, M. Diehl, An Inexact Perturbed Path-Following Method for Lagrangian Decomposition in Large-Scale Separable Convex Optimization, SIAM J. Optimization, 23(1): 95-125, 2013, (pdf).
    • I. Necoara, D. Clipici, Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC, Journal of Process Control, 23(3): 243–253, 2013, (pdf).
    • I. Necoara, V. Nedelcu, Distributed dual gradient methods and error bound conditions, Tech. rep., 2013, (pdf).

    Some old papers

    • I. Necoara, V. Nedelcu, I. Dumitrache, Parallel and distributed optimization methods for estimation and control in networks, Journal of Process Control, 21(5): 756 – 766, 2011, (pdf).
    • P. Tsiaflakis, I. Necoara, J.A.K. Suykens, M. Moonen, Improved dual decomposition based optimization for DSL dynamic spectrum management, IEEE Trans. Signal Processing, 58(4): 2230–2245, 2010.
    • I. Necoara, J. Suykens, An interior-point Lagrangian decomposition method for separable convex optimization, Journal of Optimization Theory and Applications, 143(3): 567–588, 2009, (pdf).
    • I. Necoara, J. Suykens, Application of a smoothing technique to decomposition in convex optimization, IEEE Transactions on Automatic Control, 53(11): 2674–2679, 2008, (pdf).
    • M. Baes, M. Diehl, I. Necoara, Every nonlinear control system can be obtained by parametric convex programming, IEEE Transactions on Automatic Control, 53(8): 1963–1967, 2008.


    Research Projects (principal investigator)

    • 10. UEFISCDI, PCE: L2O-MOC (Learning to Optimise: from mathematical foundations to modeling and control), 2022-2024.
    • 9. UEFISCDI, NO Grants: Efficient Learning and Optimization Tools for Hyperspectral Imaging Systems (ELO-Hyp), 2020-2023.
    • 8. EU-H2020, Marie Curie – Innovative Training Networks: ELO-X, 2020-2024.
    • 7. EU-H2020, Marie Curie – Innovative Training Networks: TraDE-OPT, 2020-2024.
    • 6. UEFISCDI, PCE: ScaleFreeNet (Scale-free modeling and optimization techniques for control of complex networks), 2017-2019, see the project website.
    • 5. UEFISCDI, Human Resources: MoCOBiDS (Modelling, Control and Optimization for Big Data Systems), 2015-2017.
    • 4. WBI Belgium – Romanian Academy: scientific cooperation programme between the Romanian Academy, WBI and FRS/FNRS, 2016-2018.
    • 3. EU, FP7, ICT-STREP: EMBOCON (Embedded Optimization for Resource Constrained Platforms), 2010 – 2013.
    • 2. ANCS, PNII: EMBOCON (Embedded Optimization for Resource Constrained Platforms), 2010-2012.
    • 1. UEFISCDI, Human Resources: METNet (Mathematical Tools for Network Systems), 2010-2013.


    Some talks

    • 8. ICSTCC 2018 – International Conference on System Theory, Control and Computing, Sinaia, October 2018, Optimization in control: recent advances and challenges.
    • 7. EMBOPT 2014 – Workshop on embedded optimization, Lucca, September 2014, Iteration complexity analysis of dual first order methods (pdf).
    • 6. HYCON2 Workshop on Distributed Optimization in Large Networks and its Applications, Zurich, July 2013, Coordinate descent methods for huge scale problems (pdf).
    • 5. IMT Lucca, 2013, Rate analysis of inexact dual gradient methods: application to embedded and distributed MPC (pdf).
    • 4. ACSE – Univ. Politehnica Bucharest, December 2012, Decomposition methods for large-scale convex problems: applications in engineering (pdf).
    • 3. ETH Zurich, Oct. 2010, Distributed optimization methods for estimation and control in networks (pdf).
    • 2. Lund University, May 2010, Smoothing Techniques for Distributed Control over Networks (pdf).
    • 1. Supelec, ETH Zurich, 2008, Robust control of a class of hybrid systems (pdf).


    Software

    • Primal-Dual Toolbox for SVM (PD-SVM): Matlab code toolbox for solving large-scale SVM problems – download
    • QP solver (DuQuad): C code toolbox for solving convex QPs with dual first order methods – download (a minimal sketch of the dual gradient iteration is given after this list)
    • Parallel Optimization Toolbox (POPT): C code toolbox for solving large-scale structured QPs – download
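
    For illustration, here is a minimal sketch of the dual (first-order) gradient iteration behind the class of methods DuQuad implements, written in Python/NumPy for readability; the actual toolbox is in C and covers inexact (augmented) dual first order variants for general convex QPs (see the 2015 DuQuad report above). The problem data below are randomly generated placeholders.

      import numpy as np

      # Equality-constrained convex QP:  min 0.5 x'Qx + q'x  s.t.  Ax = b,
      # with Q positive definite, solved by gradient ascent on the dual.
      rng = np.random.default_rng(1)
      n, m = 10, 4
      M = rng.standard_normal((n, n))
      Q = M.T @ M + np.eye(n)                     # positive definite Hessian
      q = rng.standard_normal(n)
      A = rng.standard_normal((m, n))
      b = rng.standard_normal(m)

      Qinv = np.linalg.inv(Q)
      L_dual = np.linalg.norm(A @ Qinv @ A.T, 2)  # Lipschitz constant of the dual gradient
      alpha = 1.0 / L_dual                        # standard step size for dual ascent

      lam = np.zeros(m)                           # dual multipliers
      for k in range(500):
          x = -Qinv @ (q + A.T @ lam)             # primal minimizer of the Lagrangian
          lam += alpha * (A @ x - b)              # dual gradient = primal residual

      print("primal residual ||Ax - b||:", np.linalg.norm(A @ x - b))

    The property exploited here is that the dual of a strongly convex QP is smooth, so a plain gradient step on the multipliers converges at a guaranteed rate.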


    Project PCE, L2O-MOC

    The research leading to these results has received funding from:

    Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI, PN-III-P4-PCE-2021-0720)

    Project title: Learning to optimise: from mathematical foundations to modeling and control (L2O-MOC)

    PN-III-P4-PCE-2021-0720, Contract No. 70/2022.

    "Gheorghe Mihoc – Caius Iacob" Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy (ISMMA)

    Abstract: The next generation of smart control systems (SCS) is expected to learn models from data and take optimal decisions in real time, leading to increased performance, safety, energy efficiency, and ultimately value creation. Since SCS produce large amounts of data, machine learning (ML) aims at extracting information from them. The key step in any ML technique is training, where an optimization problem is solved to tune the parameters of the ML model. In this project we reverse this paradigm, i.e., we use ML for devising efficient optimization algorithms. Our key observation is that the modeling and control approaches for SCS yield optimization problems that are extremely challenging due to their large dimension, stochasticity, nonconvexity, etc. Optimization algorithms addressing such problems usually involve many parameters that need to be hand-tuned after time-consuming experimentation and are prone to ill-conditioning and slow convergence. To address these challenges, L2O-MOC will develop learning-based techniques for devising efficient tuning-free optimization algorithms. A novel universal framework will be developed, which will serve as solid theoretical ground for the design of new learning paradigms to train optimization methods with mathematical guarantees of convergence. Modeling and control problems for SCS, e.g. those arising in power networks, will provide the datasets for the training. A toy illustration of the learning-to-optimise idea is sketched below.
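
    To make the learning-to-optimise idea concrete, the following toy Python/NumPy sketch "trains" the step size of a gradient method on a family of random quadratic problems and then applies the learned, tuning-free method to unseen instances. All problem data and names are illustrative placeholders, not the project's actual methods or datasets.

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_quadratic(n=20):
          """Random convex quadratic f(x) = 0.5 x'Ax - b'x from the training family."""
          M = rng.standard_normal((n, n))
          A = M.T @ M / n + 0.1 * np.eye(n)      # positive definite Hessian
          return A, rng.standard_normal(n)

      def run_gd(A, b, alpha, iters=50):
          """Run gradient descent with step size alpha; return the final objective."""
          x = np.zeros(len(b))
          for _ in range(iters):
              x -= alpha * (A @ x - b)           # gradient of 0.5 x'Ax - b'x
          return 0.5 * x @ A @ x - b @ x

      # "Training": choose the step size minimizing the average final loss over
      # sampled instances -- a crude stand-in for learning the optimizer itself.
      train_set = [sample_quadratic() for _ in range(30)]
      grid = np.linspace(0.01, 1.5, 60)
      alpha_star = grid[np.argmin([np.mean([run_gd(A, b, a) for A, b in train_set])
                                   for a in grid])]

      # The learned optimizer is then applied, tuning-free, to unseen problems.
      A_test, b_test = sample_quadratic()
      print(f"learned step size: {alpha_star:.3f}")
      print(f"final loss on a test problem: {run_gd(A_test, b_test, alpha_star):.4f}")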

    Project team:

      ISMMA project team: Prof. Ion Necoara, Senior Researcher I Gabriela Marinoschi, Prof. Tudor C. Ionescu, Prof. Lucian Toma; PhD students: Daniela Lupu and Nitesh Kumar Singh.

    Expected results:

    • 2022 – Efficient optimization algorithms (majorization-minimization type); methods for modeling and control of smart systems. Expected results: 1 journal paper, 1 conference paper published. Achieved results: 1 ISI journal paper accepted; 2 conference papers accepted; 2 papers submitted to journals; 2 papers submitted to conferences. Scientific Report 2022
    • 2023 – Efficient optimization algorithms (using relaxation of convexity/smoothness); methods for modeling and control of smart systems (model reduction methods); learning to optimise (control-based approach). Expected results: 3 journal papers, 3 conference papers. Achieved results: …; 1 Matlab toolbox. Scientific Report 2023
    • 2024 – Efficient optimization algorithms (stochastic algorithms); methods for modeling and control of smart systems (model predictive control); learning to optimise (model predictive control for power systems). Expected results: 3 journal papers, 3 conference papers. Achieved results: … Scientific Report 2024

    Publications

    Papers accepted/submitted in ISI journals

    • [J3] L. El Bourkhissi, I. Necoara, Complexity of linearized augmented Lagrangian for optimization with nonlinear equality constraints, submitted to Journal of Global Optimization, December 2022.
    • [J2] I. Necoara, D. Lupu, General higher-order majorization-minimization algorithms for (non)convex optimization, submitted to SIAM Journal on Optimization, December 2022.
    • [J1] I. Necoara, N.K. Singh, Stochastic subgradient projection methods for composite optimization with functional constraints, Journal of Machine Learning Research (Q1, IF=3.6), 23(265), 1-35, 2022 (https://www.jmlr.org/papers/volume23/21-1062/21-1062.pdf).

    Papers in progress

    • [P1] …

    Papers in conferences

    • [C4] D. Lupu, I. Necoara, Deep unfolding projected first order methods-based architectures: application to linear model predictive control, submitted to European Control Conference 2023.
    • [C3] Y. Nabou, L. Toma, I. Necoara, Modified projected Gauss-Newton method for constrained nonlinear least-squares: application to power flow analysis, submitted to European Control Conference 2023.
    • [C2] F. Chorobura, D. Lupu, I. Necoara, Coordinate projected gradient descent minimization and its application to orthogonal nonnegative matrix factorization, IEEE Conference on Decision and Control (CDC), accepted, 2022.
    • [C1] T.C. Ionescu, L. El Bourkhissi, I. Necoara, Least squares moment matching-based model reduction using convex optimization, International Conference on System Theory, Control and Computing (ICSTCC), 2022, pp. 325-330, doi: 10.1109/ICSTCC55426.2022.9931837.

    Software packages

    • …..


    Visits: