Title: Power Magnetic Devices
Author: Scott D. Sudhoff
Publisher: John Wiley & Sons Limited
Genre: Technical literature
ISBN: 9781119674634
In considering the properties of the objective function, it is appropriate to begin by defining our parameter vector, which will be denoted as x. The domain of x is referred to as the search space and will be denoted Ω, which is to say we require x ∈ Ω. The elements of parameter vector x will include those variables of a design that we are free to select. In general, some elements of x will be discrete in nature while others will be continuous. An example of a discrete element might be one that designates a material type from a list of available materials. A geometrical parameter such as the length of a motor would be an example of an element that can be selected from a continuous range. If all members of the parameter vector are discrete, the search space is described as being discrete. If all members of the parameter vector are continuous (real-valued), the search space is said to be continuous. If the elements of x include both discrete and continuous elements, the search space is said to be mixed. It is assumed that the function that we wish to optimize is denoted f(x). We will assume that f(x) returns a vector of dimension m of real numbers, that is, f(x) ∈ ℝ^m, where m is the number of objectives we are considering. For most of this chapter, we will merely consider f(x) to be a mathematical function whose optimizer we wish to identify; however, in Section 1.9, and in the rest of this book for that matter, we will focus on how to construct f(x) so as to serve as an instrument of engineering design.
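To make these definitions concrete, the short Python sketch below shows one possible way to represent a mixed parameter vector x (one discrete element and one continuous element) together with a vector-valued objective f(x) ∈ ℝ^m for m = 2. The class name, material list, and the mass and loss models are illustrative assumptions, not taken from the text.

```python
from dataclasses import dataclass

# Discrete choices for one element of x (names and values are assumptions)
MATERIALS = ["steel", "ferrite", "amorphous"]
DENSITY = [7650.0, 4800.0, 7200.0]          # kg/m^3, placeholder values

@dataclass
class DesignVector:
    material_index: int                      # discrete element of x
    length_m: float                          # continuous element of x (e.g., motor length)

def f(x: DesignVector) -> list[float]:
    """Return m = 2 objectives, here [mass, loss]; both models are placeholders."""
    mass = DENSITY[x.material_index] * 0.01 * x.length_m   # toy mass model
    loss = 100.0 / (1.0 + x.length_m)                      # toy loss model
    return [mass, loss]

print(f(DesignVector(material_index=0, length_m=0.2)))     # e.g. [15.3, 83.33...]
```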
For this section, let us focus on the case where all elements of x are real numbers so that x ∈ ℝ^n, where ℝ^n denotes the set of real vectors of dimension n, and where the number of objectives is one (that is, m = 1) so that f(x) is a scalar function of a vector argument. Finally, let us suppose we wish to minimize f(x). A point x* is said to be the global minimizer of f over Ω provided that
f(x*) ≤ f(x)  ∀ x ∈ Ω\{x*}   (1.2-1)
where ∀ is read as “for all” and Ω\{x*} denotes the set Ω less the point x*. If the ≤ is replaced by <, then x* is referred to as the strict global minimizer.
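As a concrete, if brute-force, illustration of this definition, the sketch below approximates the global minimizer of an assumed one-dimensional test function by densely sampling Ω = [x_mn, x_mx]; the function and bounds are hypothetical.

```python
import numpy as np

def f(x):
    return (x - 1.0) ** 2 + 0.5 * np.sin(5.0 * x)    # assumed test function

x_mn, x_mx = -2.0, 3.0                               # assumed bounds of Omega
xs = np.linspace(x_mn, x_mx, 10_001)                 # dense sampling of Omega
x_star = xs[np.argmin(f(xs))]                        # best sampled point
# f(x_star) <= f(x) holds for every sampled x, approximating condition (1.2-1)
print(x_star, f(x_star))
```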
As stated previously, the function f(x) can have properties that make it easier or more difficult to find the global minimizer. Some of these properties are depicted in Figure 1.3. An example of a feature that makes it more difficult to find the global minimizer is a discontinuity as shown in Figure 1.3(a). Therein Ω = [x_mn, x_mx] and the discontinuity is at x = x_a. In this case, the discontinuity results in a point where the function’s derivative is undefined. Since many optimization algorithms use the derivative of the function as part of the algorithm, such behavior can be problematic. In general, any problem with a discrete or mixed search space will have a discontinuous objective.
Figure 1.3 Function properties.
Another property that can be problematic is the existence of local minima. These are illustrated in Figure 1.3(b). Therein x = x_b, x = x_c, and x = x_d are all local minimizers; however, only the point x = x_c is the global minimizer. Many minimization algorithms can converge to local minimizers and fail to find the global minimizer.
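The following minimal sketch, using plain gradient descent on an assumed test function (not an algorithm from the text), shows this behavior: depending on the starting point, the iteration settles at either a local minimizer or the global minimizer.

```python
def f(x):
    return x**4 - 3.0 * x**2 + 0.5 * x       # assumed function with two local minimizers

def grad_f(x):
    return 4.0 * x**3 - 6.0 * x + 0.5        # analytic derivative of f

def gradient_descent(x0, step=0.01, iters=5000):
    x = x0
    for _ in range(iters):
        x -= step * grad_f(x)                # steepest-descent update
    return x

print(gradient_descent(+1.5))   # converges near x ≈ +1.18: a local minimizer only
print(gradient_descent(-1.5))   # converges near x ≈ -1.27: the global minimizer of this f
```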
Related to the existence of local extrema is the convexity (or lack thereof) of a function. It is appropriate to begin the discussion of function convexity by considering the definition of a convex set. Let Θ ⊂ ℝ^n denote a set. A set is considered convex if all the points on the line segment connecting any two points in the set are also in the set. In other words, if
αx_a + (1 − α)x_b ∈ Θ  ∀ α ∈ [0, 1]   (1.2-2)
for any two points x_a, x_b ∈ Θ, then Θ is convex. This is illustrated in Figure 1.4 for Θ ⊂ ℝ^2.
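A numerical illustration of condition (1.2-2) might look as follows; the set Θ (taken here to be the unit disk in ℝ^2) and the sample points are assumptions for the sake of the example.

```python
import numpy as np

def in_theta(p, radius=1.0):
    return np.linalg.norm(p) <= radius               # membership test for Theta (a disk)

x_a = np.array([0.8, 0.0])                           # two assumed points in Theta
x_b = np.array([-0.3, 0.7])
assert in_theta(x_a) and in_theta(x_b)

alphas = np.linspace(0.0, 1.0, 101)
segment = [alpha * x_a + (1.0 - alpha) * x_b for alpha in alphas]
print(all(in_theta(p) for p in segment))             # True: the disk is convex
```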
In order to determine if a function is convex, it is necessary to consider its epigraph. The epigraph of f(x) is simply the set of points lying on or above the graph of f(x), that is, the points (x, y) with y ≥ f(x). A function is considered convex if its epigraph is a convex set, as is shown in Figure 1.5. Note that this set will be in ℝ^(n+1), where n is the dimension of x.
Figure 1.4 Definition of a convex set.
Figure 1.5 Definition of a convex function.
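Because the epigraph of f is convex exactly when f satisfies the chord inequality f(αx_a + (1 − α)x_b) ≤ αf(x_a) + (1 − α)f(x_b), convexity can be spot-checked numerically, as in the sketch below; the two test functions and sample points are assumed for illustration.

```python
import numpy as np

def chord_test(f, x_a, x_b, n=101):
    """Check f(a*x_a + (1-a)*x_b) <= a*f(x_a) + (1-a)*f(x_b) for sampled a in [0, 1]."""
    alphas = np.linspace(0.0, 1.0, n)
    return all(f(a * x_a + (1.0 - a) * x_b) <= a * f(x_a) + (1.0 - a) * f(x_b) + 1e-12
               for a in alphas)

def f_convex(x):
    return x ** 2                                    # convex: epigraph is a convex set

def f_nonconvex(x):
    return np.sin(3.0 * x)                           # not convex on this interval

print(chord_test(f_convex, -1.0, 2.0))               # True
print(chord_test(f_nonconvex, 0.0, 2.0))             # False for this pair of points
```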
If the function being optimized is convex, the optimization process becomes much easier. This is because it can be shown that any local minimizer of a convex function is also a global minimizer. Therefore the situation shown in Figure 1.3(b) cannot occur. As a result, the minimization of continuous convex functions is straightforward and computationally tractable.
1.3 Single‐Objective Optimization Using Newton’s Method
Let us consider a method to find the extrema of an objective function f(x). Let us focus our attention on the case where f(x) ∈ ℝ and x ∈ ℝn. Algorithms to solve this problem include gradient methods, Newton’s method, conjugate direction methods, quasi‐Newton methods, and the Nelder–Mead simplex method, to name a few. Let us focus on Newton’s method as being somewhat representative.
In order to set the stage for Newton’s method, let us first define some operators. The first derivative or gradient of our objective function is denoted ∇f(x) and is defined as
∇f(x) = [∂f/∂x_1  ∂f/∂x_2  ⋯  ∂f/∂x_n]^T   (1.3-1)
The second derivative or Hessian of f(x) is defined as
∇²f(x) = [∂²f/(∂x_i ∂x_j)],  i, j = 1, …, n   (1.3-2)
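As a rough illustration of how the gradient (1.3-1) and Hessian (1.3-2) are combined in the standard Newton iteration x_(k+1) = x_k − [∇²f(x_k)]^(−1) ∇f(x_k), consider the sketch below; the quadratic test function, starting point, and stopping rule are assumptions for the example only.

```python
import numpy as np

def f(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + x[0] * x[1]

def grad_f(x):                                       # (1.3-1): vector of first partials
    return np.array([2.0 * (x[0] - 1.0) + x[1],
                     4.0 * (x[1] + 0.5) + x[0]])

def hess_f(x):                                       # (1.3-2): matrix of second partials
    return np.array([[2.0, 1.0],
                     [1.0, 4.0]])

x = np.array([5.0, 5.0])                             # assumed starting point
for _ in range(20):
    step = np.linalg.solve(hess_f(x), grad_f(x))     # Newton step: solve H * step = grad
    x = x - step
    if np.linalg.norm(step) < 1e-10:                 # assumed stopping rule
        break
print(x, f(x))                                       # gradient ≈ 0 at the computed minimizer
```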
If x* is a …