DIMITRI P. BERTSEKAS
Dimitri Bertsekas is an applied mathematician, computer scientist, and professor in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts.
He is known for his research and for fourteen textbooks and monographs in theoretical and algorithmic optimization, control, and applied probability. His work ranges from theoretical and foundational analysis to algorithm design for optimization, with applications to data communication, transportation networks, and electric power generation. He is listed among the top 100 most cited computer science authors in the CiteSeer academic database and digital library. In 1995, he co-founded the publishing company Athena Scientific, which publishes most of his books, among other titles.
I was born in Greece and lived my childhood there. I studied for five years at the National Technical University of Athens, Greece (a time spent mostly playing poker and chess and dating my future wife Ioanna), for about a year and a half at George Washington University (at night, while working as a research engineer), and for about two years at MIT, where I obtained a doctorate in system science. I also taught for three years at the Engineering-Economic Systems Dept. of Stanford University, and for five years at the Electrical and Computer Engineering Dept. of the University of Illinois at Urbana-Champaign.
I have a strong interest in chess (a passion that had to be moderated when my college math grades started declining precipitously). My photographs have been exhibited at M.I.T. and can be accessed from http://web.mit.edu/dimitrib/www/home.html.
Download books by Bertsekas
- Network Optimization
- Data Networks
- Approximate Dynamic Programming
- Constrained Optimization and Lagrange Multiplier Methods
- Parallel and Distributed Computation: Numerical Methods
- Stochastic Optimal Control: The Discrete-Time Case
Dimitri P. Bertsekas received a combined B.S.E.E. and B.S.M.E. from the National Technical University of Athens, Greece, an M.S.E.E. from George Washington University, and a Ph.D. in system science from the Massachusetts Institute of Technology.
Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Dept. at Stanford University (1971-1974) and the Electrical Engineering Dept. of the University of Illinois, Urbana (1974-1979).
Since 1979 he has been teaching at the Electrical Engineering and Computer Science Department of the Massachusetts Institute of Technology (M.I.T.) where he is currently McAfee Professor of Engineering.
He consults regularly with private industry and has held editorial positions in several journals.
His research at M.I.T. spans several fields, including optimization, control, large-scale computation, and data communication networks, and is closely tied to his teaching and book authoring activities.
He has written numerous research papers, and thirteen books, several of which are used as textbooks in MIT classes.
Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book Neuro-Dynamic Programming (co-authored with John Tsitsiklis), the 2000 Greek National Award for Operations Research, and the 2001 ACC John R. Ragazzini Education Award.
In 2001, he was elected to the United States National Academy of Engineering.
Dr. Bertsekas' recent books are Introduction to Probability (2002), Convex Analysis and Optimization (2003), and Dynamic Programming and Optimal Control: 3rd Edition (2007), all published by Athena Scientific.
Awards and honors
Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book Neuro-Dynamic Programming (co-authored with J. N. Tsitsiklis), the 2000 Greek National Award for Operations Research, and the 2001 ACC John R. Ragazzini Education Award for outstanding contributions to education. In 2001, he was elected to the US National Academy of Engineering, with an election citation reading: "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."
Convex Optimization Theory, Dimitri P. Bertsekas, Athena Scientific 2009
Excerpt from the Preface:
This textbook aims to provide a simple, intuitive, and mathematically rigorous introduction to convexity theory and its connections to optimization. The book evolved from an earlier (2003) book of the author on the subject (written in collaboration with Angelia Nedić and Asuman Ozdaglar), but has different objectives and substantially different content. A major focus of the 2003 book was to bridge the gap between convex and nonconvex optimization, by placing special emphasis on the convex analysis topics that bear a connection to both. By contrast, the present book provides a simpler, more focused exposition, by concentrating exclusively on convexity, and developing in greater depth core subjects such as polyhedral convexity, subdifferential theory, and conjugate function theory. The methodology and choice of topics are in the spirit and tradition of Fenchel's 1951 lecture notes and Rockafellar's 1970 text on convex analysis: the material and the ideas are similar, but the exposition, while on occasion less comprehensive, is more accessible, and includes much theory and many algorithms developed in the long period since the appearance of Fenchel's and Rockafellar's seminal works.
Despite the differences, the styles and levels of mathematical sophistication of the present book and the 2003 book are similar. In particular, both books place primary emphasis on theory, rather than algorithms and applications. Also, both books use geometric visualization as a principal tool for exposition, and use end-of-chapter exercises to extend the range of exposition with substantial theoretical and algorithmic results. Detailed solutions of all the exercises are posted on the book's web page.
Another common stylistic element of the two books is the close link between the theoretical treatment of convexity and its application to optimization. For example, simultaneously with the development of some of the basic facts about convexity in Chapters 1 and 2, we discuss elementary optimality conditions and the question of existence of optimal solutions; in Chapter 3, after presenting the theory of hyperplane separation, we develop some of its applications to duality and saddle point theory; in Chapter 4, after the discussion of polyhedral convexity, we discuss its application in linear, integer, and convex programming; and in Chapter 6, after the discussion of subgradients, we discuss their use in optimality conditions and minimization algorithms. We follow this style in the remaining chapters, although, having developed most of the needed convexity theory in Chapters 1-6, the discussion in Chapters 7-8 is weighted more heavily towards optimization.
Some of the novel expository features of the 2003 book have been maintained in the present version as well. In particular, minimax theory and constrained optimization duality have been developed in a unified way, as special cases of the duality between two simple but fundamental geometrical problems: the min common point problem and the max crossing point problem. The min common/max crossing framework is essentially a geometric version of convex conjugate function theory, but is simpler and unencumbered by functional descriptions. In several contexts, including duality, it provides a powerful and insightful analytical machinery, which is far more convenient than conjugate function theory. The min common/max crossing framework is also useful in the analysis and interpretation of conjugate functions, theorems of the alternative, and subdifferential theory.
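As a rough sketch for the unfamiliar reader (the notation here is ours, not quoted from the book): given a nonempty set M in the space of pairs (u, w), the two geometrical problems may be stated as follows.

```latex
% Min common point problem: among the points of M that lie on the
% (n+1)st ("vertical") axis, find one whose last coordinate is minimal:
\[
  w^{*} \;=\; \inf_{(0,\,w)\in M} w .
\]
% Max crossing point problem: over hyperplanes that cross the (n+1)st
% axis and contain M in their upper closed halfspace, maximize the
% level at which the axis is crossed:
\[
  q(\mu) \;=\; \inf_{(u,w)\in M} \bigl\{\, w + \mu' u \,\bigr\},
  \qquad
  q^{*} \;=\; \sup_{\mu \in \Re^{n}} q(\mu).
\]
% Weak duality always holds:  q^* <= w^*.  Much of the theory concerns
% conditions under which the two values coincide (strong duality).
```

Minimax theory and constrained optimization duality then arise by appropriate choices of the set M.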
Another feature shared with the 2003 book is the unified approach for developing conditions for existence of solutions of convex optimization problems, conditions for the minimax equality to hold, and conditions for the absence of a duality gap in constrained optimization. This unification is traced to a simple and fundamental issue: the question whether a nested family of closed sets has a nonempty intersection.
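To illustrate the set-intersection issue referred to here (the example is ours, not taken from the book): a nested sequence of nonempty compact sets always has a nonempty intersection, but closedness alone is not enough.

```latex
% Nested family: S_1 \supset S_2 \supset \cdots, each S_k nonempty and closed.
% If each S_k is also compact, then \bigcap_{k=1}^{\infty} S_k \neq \varnothing.
% Without boundedness the intersection can be empty, e.g.
\[
  S_k = [\,k,\infty) \subset \Re,
  \qquad
  \bigcap_{k=1}^{\infty} S_k = \varnothing .
\]
```

Conditions guaranteeing a nonempty intersection for suitable unbounded families are what drive the existence, minimax, and duality-gap results mentioned above.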
To provide an alternative, more accessible path to convexity, the most advanced sections have been marked with a star, and can be skipped at first reading. The principle followed here is that "non-starred" sections do not make use of material in "starred" sections.
An instructor who wishes to teach a course from the book has a choice among a few different plans:
- Cover the entire book. This requires a full semester course at a fast pace.
- Cover primarily the first six chapters, emphasizing convexity theory, yet maintaining substantial optimization content.
- Cover the entire book, but omit the starred sections (or just summarize them). This is the option followed by the author in his one-semester MIT class (course slides available on the Internet).
Textbooks and monographs
- Dynamic Programming and Optimal Control (1996)
- Data Networks (with Robert G. Gallager 1989)
- Nonlinear Programming (1996)
- Introduction to Probability (with J. N. Tsitsiklis 2003)
- Stochastic Optimal Control: The Discrete-Time Case (with S. E. Shreve 1978), a mathematically complex work, establishing the measure-theoretic foundations of dynamic programming and stochastic control.
- Constrained Optimization and Lagrange Multiplier Methods (1982), the first monograph that comprehensively addressed the algorithmic convergence issues around augmented Lagrangian and sequential quadratic programming methods.
- Parallel and Distributed Computation: Numerical Methods (with J. N. Tsitsiklis 1989), which, among other contributions, established the fundamental theoretical structures for the analysis of distributed asynchronous algorithms.
- Linear Network Optimization (1991) and Network Optimization: Continuous and Discrete Models (1998), which, among other topics, comprehensively discuss the class of auction algorithms for assignment and network flow optimization, developed by Bertsekas over a period of 20 years starting in 1979.
- Neuro-Dynamic Programming (with J. N. Tsitsiklis 1996), which laid theoretical foundations for suboptimal approximations of highly complex sequential decision-making problems.
- Convex Analysis and Optimization (with A. Nedic and A. Ozdaglar 2002) and Convex Optimization Theory (2009), which provide a new line of development for optimization duality theory, a new connection between the theory of Lagrange multipliers and nonsmooth analysis, and a comprehensive development of incremental subgradient methods.