Mathematical optimization

study of mathematical algorithms for optimization problems
Mathematical optimization is the branch of mathematics concerned with finding the set of inputs that gives the best (optimal) solution to a problem, subject to some limitations, or constraints.[1][2][3] In the simplest case, this means that a function needs to be minimized or maximized.

Example

For example, consider a baker with a set amount of flour, sugar, yeast, and so on, who is also given five hours to prepare. Time and ingredients are the constraints. The baker is tasked with providing the largest amount of food to feed a hungry crowd, so maximizing the number of servings is the objective function.
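The baker's problem can be sketched as a small search over feasible plans. All numbers below (flour, time, servings per batch) are made up for illustration; a real problem would use linear programming instead of a grid search.

```python
# Toy version of the baker's problem: choose how many batches of bread
# and rolls to bake, limited by flour and time, to maximize servings.
# All quantities are invented for this illustration.

FLOUR = 10.0   # kg of flour available (constraint)
TIME = 5.0     # hours available (constraint)

# (flour per batch, hours per batch, servings per batch)
BREAD = (1.0, 1.0, 8)
ROLLS = (0.5, 0.5, 5)

def servings(bread_batches, roll_batches):
    """Total servings if the plan satisfies both constraints, else None."""
    flour = bread_batches * BREAD[0] + roll_batches * ROLLS[0]
    hours = bread_batches * BREAD[1] + roll_batches * ROLLS[1]
    if flour > FLOUR or hours > TIME:
        return None  # plan violates a constraint
    return bread_batches * BREAD[2] + roll_batches * ROLLS[2]

# Search in half-batch steps (a baker can make half a recipe).
best = max(
    (servings(b / 2, r / 2), b / 2, r / 2)
    for b in range(0, 21)
    for r in range(0, 41)
    if servings(b / 2, r / 2) is not None
)
print(best)  # (best servings, bread batches, roll batches)
```

Here rolls yield the most servings per hour, so the best plan spends all five hours on rolls.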

Difficulty

Many problems are much harder, though, and solving them analytically is not feasible; in these cases, numerical methods are often used.[4][5] This often happens when the decisions to be made must be whole numbers. A baker can usually make half of a recipe, but consider a rancher trying to decide how many cattle and sheep to graze. The constraints may include the amount of land available and the money to purchase the animals, and the objective is to maximize profit. If the rancher could purchase a fraction of a cow or sheep, the problem would be fairly easy to solve. Since one can only buy whole sheep or cows, however, the problem becomes very difficult to solve exactly, and often the best one can do is find a good approximate answer using numerical methods with the help of a computer.
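A tiny rancher problem like the one above can still be solved by checking every whole-number combination; it is only at realistic sizes that this becomes impossible and approximate methods take over. All prices and acreages below are invented for illustration.

```python
# Toy rancher problem with made-up numbers: choose whole numbers of
# cattle and sheep, limited by land and money, to maximize profit.
# Small integer problems can be brute-forced; large ones cannot, which
# is why methods like branch and bound or heuristics are used instead.

LAND = 20      # acres available
BUDGET = 3000  # money available for purchases

ACRES = {"cow": 2, "sheep": 1}      # acres needed per animal
COST = {"cow": 500, "sheep": 150}   # purchase cost per animal
PROFIT = {"cow": 350, "sheep": 120} # profit per animal

best_profit, best_plan = 0, (0, 0)
for cows in range(LAND // ACRES["cow"] + 1):
    for sheep in range(LAND // ACRES["sheep"] + 1):
        land = cows * ACRES["cow"] + sheep * ACRES["sheep"]
        money = cows * COST["cow"] + sheep * COST["sheep"]
        if land <= LAND and money <= BUDGET:  # both constraints hold
            profit = cows * PROFIT["cow"] + sheep * PROFIT["sheep"]
            if profit > best_profit:
                best_profit, best_plan = profit, (cows, sheep)

print(best_profit, best_plan)
```

With these numbers the whole pasture goes to sheep; changing one price can flip the answer entirely, which is part of what makes integer problems hard.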

Solutions

The first step is usually to take the derivative of the function. From there, maxima and minima within a domain can be found by locating the critical points: the domain (x) values where the derivative equals zero or does not exist. Finding critical points alone is not enough to tell a maximum from a minimum, so one of two further tests is used.

(1) Values of x just below and just above the critical point are substituted into the derivative to see whether its sign changes. If it goes from negative to positive, the critical point is a minimum. If it goes from positive to negative, the critical point is a maximum. If the sign does not change, the point is neither. This is known as the first derivative test.

(2) The derivative of the derivative can be taken, giving the second derivative. The critical point is substituted into the second derivative. If the output is positive, the critical point is a minimum. If the output is negative, the critical point is a maximum. If the output is zero, the test is inconclusive and the first derivative test must be used instead. This is known as the second derivative test.
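The second derivative test can be shown on a concrete example, f(x) = x³ − 3x. Its derivative 3x² − 3 is zero at x = −1 and x = 1 (the critical points), and its second derivative 6x classifies each one:

```python
def f_prime(x):
    return 3 * x**2 - 3   # derivative of f(x) = x**3 - 3*x

def f_double_prime(x):
    return 6 * x          # second derivative

critical_points = [-1, 1]  # solutions of f_prime(x) == 0

results = {}
for x in critical_points:
    if f_double_prime(x) > 0:
        results[x] = "minimum"       # curve bends upward here
    elif f_double_prime(x) < 0:
        results[x] = "maximum"       # curve bends downward here
    else:
        results[x] = "inconclusive"  # fall back to the first derivative test

print(results)  # {-1: 'maximum', 1: 'minimum'}
```

So x = −1 is a local maximum and x = 1 is a local minimum, which matches the sign changes the first derivative test would find.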

Applications

The applications of optimization span many fields. A simple example is finding the smallest possible distance between two objects in two-dimensional space (x and y): the derivative of the function that gives the distance is taken in order to find the minimum. A more complicated example is machine learning,[6][7] in which the optimizer searches for the global minimum of the loss function in order to minimize the difference, or loss, between the algorithm's predictions and the actual values. This is harder because machine learning algorithms often work with multidimensional data, usually in the form of tensors, which yields much more complicated functions.
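The workhorse method in machine learning is gradient descent: repeatedly step against the derivative of the loss. A one-dimensional toy loss L(w) = (w − 3)², whose minimum is obviously at w = 3, shows the idea; real losses depend on millions of parameters, but the update rule is the same.

```python
# Gradient descent on the toy loss L(w) = (w - 3)**2.
# The minimum is at w = 3; each step moves w against the gradient.

def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)**2

w = 0.0              # initial guess
learning_rate = 0.1  # step size (an invented but typical value)
for _ in range(200):
    w -= learning_rate * grad(w)

print(round(w, 6))  # very close to 3.0
```

Each step shrinks the distance to the minimum by a constant factor here; for complicated, non-convex losses the method may instead settle in a local minimum.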

Tools

Today, there are many software tools to support optimization studies, including packages for MATLAB and Mathematica.[8][9]

References

  1. Snyman, J. A. (2005). Practical mathematical optimization (pp. 97–148). Springer Science+Business Media.
  2. Intriligator, M. D. (2002). Mathematical optimization and economic theory. Society for Industrial and Applied Mathematics.
  3. Luptacik, M. (2010). Mathematical optimization and economic analysis (p. 307). New York, NY, USA: Springer.
  4. Nocedal, J., & Wright, S. (2006). Numerical optimization. Springer Science & Business Media.
  5. Bonnans, J. F., Gilbert, J. C., Lemaréchal, C., & Sagastizábal, C. A. (2006). Numerical optimization: theoretical and practical aspects. Springer Science & Business Media.
  6. Sra, S., Nowozin, S., & Wright, S. J. (Eds.). (2012). Optimization for machine learning. MIT Press.
  7. Bottou, L., Curtis, F. E., & Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM Review, 60(2), 223–311.
  8. Venkataraman, P. (2009). Applied optimization with MATLAB programming. John Wiley & Sons.
  9. Bhatti, M. A. (2012). Practical optimization methods: with Mathematica applications. Springer Science & Business Media.