I am a Postdoctoral Researcher in Mathematics working in the field of optimization, with a
focus on multi-objective optimization problems. My research interests include numerical
optimization, algorithm development, Nash equilibrium problems, and convergence analysis
of iterative methods. I work on trust-region, conjugate gradient, and Newton-type algorithms
for generalized Nash equilibrium and vector optimization problems. My goal is to develop
efficient optimization methods with strong theoretical foundations and practical relevance.
2025 — Vector Optimization, Nonlinear CG, Wolfe Line Search
Abstract
Recently, Gonçalves et al. proposed extensions of the Liu-Storey nonlinear conjugate gradient methods for vector optimization.
They demonstrated that extending the Liu-Storey method for vector optimization may not result in a descent direction at each
iteration in the vector sense, even if an exact line search is employed. To overcome this drawback, we present a modified
Liu-Storey nonlinear conjugate gradient method with the standard Wolfe line search technique to compute a Pareto critical
point of a vector optimization problem. The global convergence of the proposed method for solving
nonconvex vector optimization problems is established under suitable conditions. Finally, we provide numerical experiments
to demonstrate the efficacy of the proposed algorithm.
Singh, Abhishek; Ghosh, Debdas; Ansari, Qamrul Hasan
Journal of Optimization Theory and Applications, 201(3), 1333–1363, 2024 (Springer)
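For context, a minimal sketch of the classical single-objective Liu-Storey conjugate gradient update and the standard Wolfe
conditions that the vector-valued extension builds on; this is the scalar textbook form, not the modified vector rule
proposed in the paper.

    % Classical (scalar) Liu-Storey CG direction; g_k = \nabla f(x_k), d_k = search direction.
    d_0 = -g_0, \qquad
    \beta_k^{\mathrm{LS}} = \frac{g_k^{\top}(g_k - g_{k-1})}{-\,d_{k-1}^{\top} g_{k-1}}, \qquad
    d_k = -g_k + \beta_k^{\mathrm{LS}} d_{k-1}.
    % Standard Wolfe conditions on the step length \alpha_k, with 0 < c_1 < c_2 < 1:
    f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k\, g_k^{\top} d_k, \qquad
    \nabla f(x_k + \alpha_k d_k)^{\top} d_k \ge c_2\, g_k^{\top} d_k.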
2024 — GNEP, Inexact Newton Method, Q-quadratic
Abstract
In this article, we present an inexact Newton method to solve generalized Nash equilibrium problems (GNEPs). Two types of
GNEPs are studied: player convex and jointly convex. We reformulate the GNEP into an unconstrained optimization problem
using a complementarity function and solve it by the proposed method. It is found that the proposed numerical scheme has
the global convergence property for both types of GNEPs. The strong BD-regularity assumption for the reformulated system of
GNEP plays a crucial role in global convergence. In fact, the strong BD-regularity assumption, together with a suitable
choice of forcing sequence, ensures that the inexact Newton method converges Q-quadratically. The efficiency of the proposed
numerical scheme is shown on a collection of problems, including a realistic internet switching problem, where selfish users
generate traffic. A comparison of the proposed method with the existing semi-smooth Newton method II for GNEP is provided,
which indicates that the proposed scheme is more efficient.
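As a hedged illustration of the generic inexact Newton framework behind this work (not the paper's globalized scheme, which
works on a complementarity-function reformulation of the GNEP under strong BD-regularity), a minimal NumPy sketch: the
Newton system only has to be solved up to a forcing tolerance eta_k, and driving eta_k to zero at the rate of the residual
yields Q-quadratic local convergence. The test system F, J at the bottom is hypothetical.

    import numpy as np

    def inexact_newton(F, J, x0, tol=1e-10, max_iter=50):
        # Generic inexact Newton iteration for F(x) = 0 (illustrative sketch only):
        # the linear system J(x_k) d = -F(x_k) needs to be solved just accurately
        # enough that ||F(x_k) + J(x_k) d|| <= eta_k * ||F(x_k)||.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            res = np.linalg.norm(Fx)
            if res <= tol:
                break
            eta_k = min(0.5, res)           # forcing sequence; eta_k = O(||F(x_k)||)
                                            # gives Q-quadratic local convergence
            d = np.linalg.solve(J(x), -Fx)  # in practice: an iterative solver stopped
                                            # once the relative residual drops below eta_k
            x = x + d
        return x

    # Hypothetical smooth test system with solution (1, 2):
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
    J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
    print(inexact_newton(F, J, np.array([1.0, 1.0])))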
Generalized Nash equilibrium problems (GNEPs) are typically challenging to solve with Newton-type methods because they
generally have locally nonunique solutions. To overcome these difficulties, we propose an improved nonmonotone adaptive
trust region (INATR) method for constrained optimization problems under fairly loose error-bound conditions. We also solve
GNEPs using the INATR method and report its numerical performance. The INATR method retains the local convergence properties
of its nonmonotone counterpart, and we prove that it is also globally convergent. The numerical results indicate that the
INATR method performs better than the nonmonotone trust region method.
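A brief sketch of the two nonmonotone trust-region ingredients this abstract refers to, in generic textbook form rather than
the exact INATR rules and parameter values: the acceptance ratio measures the actual reduction against the largest of the
last few objective values, and the radius is adapted from that ratio.

    import numpy as np

    def nonmonotone_ratio(f_hist, f_trial, pred_reduction, memory=5):
        # Nonmonotone acceptance ratio: the actual reduction is taken with respect
        # to the maximum of the last `memory` objective values, so steps that
        # temporarily increase the objective can still be accepted.
        f_ref = max(f_hist[-memory:])
        return (f_ref - f_trial) / pred_reduction

    def update_radius(rho, delta, step_norm, delta_max=10.0):
        # Standard adaptive radius update driven by the ratio rho.
        if rho < 0.25:
            return 0.25 * delta                     # poor model agreement: shrink
        if rho > 0.75 and np.isclose(step_norm, delta):
            return min(2.0 * delta, delta_max)      # good agreement on the boundary: expand
        return delta                                # otherwise keep the radius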
In this article, we consider a class of Generalized Nash Equilibrium Problems (GNEPs) and solve it using one of the most
effective quasi-Newton algorithms: the BFGS method. The considered GNEP is player convex. Since Armijo-type line search
techniques are less costly for finding a step length than Wolfe-type line search techniques, we use the Armijo–Goldstein
line search technique in an improved BFGS method to solve GNEPs. The main drawback of using Armijo-type line search
techniques in the BFGS method is that they do not guarantee the positive definiteness of the generated Hessian approximation
matrices. Therefore, we carefully update the approximate Hessian matrices so that the updated BFGS matrices retain positive
definiteness. Accordingly, we prove global convergence of the method in the GNEP framework. The numerical
performance of the proposed method is exhibited on three commonly used GNEPs and on two internet-switching GNEPs.
Information Sciences, 504, 276–292, 2019 (Elsevier)
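One standard way to keep the BFGS matrices positive definite when an Armijo-type search does not enforce the curvature
condition s^T y > 0 is Powell's damped update, sketched below; it is an illustrative stand-in under that assumption, not
the paper's own modified update rule.

    import numpy as np

    def damped_bfgs_update(B, s, y, theta_min=0.2):
        # Powell-damped BFGS update of the Hessian approximation B: replace y by a
        # convex combination r of y and B s so that s^T r > 0 always holds, which
        # keeps the updated matrix positive definite whenever B is.
        Bs = B @ s
        sBs = s @ Bs
        sy = s @ y
        theta = 1.0 if sy >= theta_min * sBs else (1.0 - theta_min) * sBs / (sBs - sy)
        r = theta * y + (1.0 - theta) * Bs
        return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)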
2019 — Interval Optimization, KKT, SVM
Abstract
This paper presents an extended Karush-Kuhn-Tucker condition to characterize efficient solutions to constrained interval
optimization problems. We develop the theory from the geometrical fact that at an optimal solution the cone of feasible
directions and the set of descent directions have an empty intersection. With the help of this fact, we derive a set of
first-order optimality conditions for unconstrained interval optimization problems. In the sequel, we extend Gordan's
theorem of the alternative for the existence of a solution to a system of interval linear inequalities. Using this theorem, we
derive Fritz John and Karush-Kuhn-Tucker necessary optimality conditions for constrained interval optimization problems. It is
observed that these optimality conditions appear with inclusion relations instead of equations. The derived Karush-Kuhn-Tucker
condition is applied to the binary classification problem with interval-valued data using support vector machines.
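For reference, the classical real-valued Karush-Kuhn-Tucker conditions that the interval-valued extension generalizes; in
the paper's interval setting the two equalities below are replaced by inclusion relations, as noted in the abstract.

    % Classical KKT conditions for  min f(x)  subject to  g_i(x) <= 0, i = 1,...,m:
    \nabla f(\bar{x}) + \sum_{i=1}^{m} \mu_i \nabla g_i(\bar{x}) = 0, \qquad
    \mu_i \ge 0, \qquad \mu_i\, g_i(\bar{x}) = 0, \quad i = 1,\dots,m.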
Conferences & Workshops
2023 — Research Internship, National Sun Yat-sen University, Kaohsiung, Taiwan.
2019 — ICCOPT 2019, Weierstrass Institute (WIAS), Berlin, Germany.