Power Magnetic Devices. Scott D. Sudhoff
Before we proceed to do this, note that throughout this work, scalar variables are normally set in italic font (for example, x), while vectors and matrices are set in bold nonitalic font (for example, x). Functions of all dimensionalities are denoted by nonitalic, nonbold font (for example, x(θ)). Brackets in equations denote the iteration number in iterative methods.

      In considering the properties of the objective function, it is appropriate to begin by defining our parameter vector, which will be denoted as x. The domain of x is referred to as the search space and will be denoted Ω, which is to say we require x ∈ Ω. The elements of the parameter vector x include those variables of a design that we are free to select. In general, some elements of x will be discrete in nature while others will be continuous. An example of a discrete element might be one that designates a material type from a list of available materials; a geometrical parameter such as the length of a motor is an example of an element that can be selected from a continuous range. If all elements of the parameter vector are discrete, the search space is described as discrete. If all elements are continuous (in the set of real numbers), the search space is said to be continuous. If x includes both discrete and continuous elements, the search space is said to be mixed. The function that we wish to optimize is denoted f(x). We will assume that f(x) returns a vector of m real numbers, that is, f(x) ∈ ℝᵐ, where m is the number of objectives we are considering. For most of this chapter, we will treat f(x) merely as a mathematical function whose optimizer we wish to identify; in Section 1.9, and in the rest of this book for that matter, we will focus on how to construct f(x) so that it serves as an instrument of engineering design.
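      To fix ideas, a mixed parameter vector might be represented in code as follows; the material list, the dimensions, and the toy objective below are hypothetical placeholders of ours, not taken from the text.

```python
# A minimal sketch of a mixed parameter vector: one discrete element
# (a material index) and two continuous elements (lengths in meters).
materials = ["steel", "ferrite", "powdered iron"]   # discrete design choices

def f(x):
    """Scalar objective (m = 1) over a mixed search space Omega,
    where x = (material_index, length_1, length_2)."""
    material_index, length_1, length_2 = x
    material_cost = 0.1 * material_index            # stand-in for a material-dependent cost
    return material_cost + (length_1 - 0.5)**2 + (length_2 - 0.2)**2

print(f((0, 0.4, 0.3)))   # evaluate one candidate design
```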

      For this section, let us focus on the case where all elements of x are real numbers so that x ∈ ℝⁿ, where ℝⁿ denotes the set of n-dimensional vectors of real numbers, and where the number of objectives is one (that is, m = 1) so that f(x) is a scalar function of a vector argument. Finally, let us suppose we wish to minimize f(x). A point x* is said to be the global minimizer of f over Ω provided that

      (1.2-1)   f(x*) ≤ f(x)  ∀ x ∈ Ω\{x*}

      where ∀ is read as “for all” and Ω\{x*} denotes the set Ω less the point x*. If the ≤ is replaced by <, then x* is referred to as the strict global minimizer.
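      To make this definition concrete, the brute-force sketch below verifies the inequality (1.2-1) over a finely discretized one-dimensional search space; the quartic objective and the grid are illustrative choices of ours, not taken from the text.

```python
import numpy as np

# Brute-force illustration of the global-minimizer definition: x_star
# must satisfy f(x_star) <= f(x) for every other x in Omega.
f = lambda x: x**4 - 3*x**2 + x           # arbitrary example objective
omega = np.linspace(-3.0, 3.0, 10001)     # discretized search space Omega
x_star = omega[np.argmin(f(omega))]       # candidate global minimizer
assert np.all(f(x_star) <= f(omega))      # the defining inequality (1.2-1)
print(x_star)                             # approximately -1.30
```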

      Another property that can be problematic is the existence of local minima. These are illustrated in Figure 1.3(b). Therein x = x_b, x = x_c, and x = x_d are all local minimizers; however, only the point x = x_c is the global minimizer. Many minimization algorithms can converge to a local minimizer and fail to find the global minimizer.
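      The sketch below illustrates this failure mode on an arbitrary quartic of ours with two local minimizers: a plain gradient descent converges to whichever minimizer lies in the basin containing its starting point.

```python
def df(x):
    """Derivative of f(x) = x**4 - 3*x**2 + x, which has two local minimizers."""
    return 4*x**3 - 6*x + 1

def gradient_descent(x0, step=0.01, iters=5000):
    """Plain gradient descent; it can only find a nearby local minimizer."""
    x = x0
    for _ in range(iters):
        x = x - step * df(x)
    return x

print(gradient_descent(-2.0))   # approximately -1.30, the global minimizer
print(gradient_descent(+2.0))   # approximately +1.13, a merely local minimizer
```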

      Related to the existence of local extrema is the convexity (or lack thereof) of a function. It is appropriate to begin the discussion of function convexity by considering the definition of a convex set. Let Θ ⊂ ℝⁿ denote a set. The set is convex if all the points on the line segment connecting any two points in the set are also in the set. In other words, Θ is convex if

      (1.2-2)   αx + (1 − α)y ∈ Θ  ∀ x, y ∈ Θ, ∀ α ∈ [0, 1]
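      As a numerical illustration, the sketch below randomly samples chords of a candidate set and tests the condition in (1.2-2); the unit-ball example set and the sampling scheme are our own choices.

```python
import numpy as np

def is_probably_convex(contains, n=2, trials=10000, seed=0):
    """Randomized test of (1.2-2): for x, y in the set, every sampled point
    alpha*x + (1 - alpha)*y on the chord must also be in the set."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
        if contains(x) and contains(y):
            alpha = rng.uniform()
            if not contains(alpha * x + (1 - alpha) * y):
                return False   # found a violating chord: not convex
    return True                # no violation found (evidence, not proof)

unit_ball = lambda p: np.linalg.norm(p) <= 1.0   # a convex example set
print(is_probably_convex(unit_ball))             # True
```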

[Figures: schematic illustrations of the definition of a convex set and of the definition of a convex function.]

      Let us consider a method to find the extrema of an objective function f(x). Let us focus our attention on the case where f(x) ∈ ℝ and x ∈ ℝⁿ. Algorithms to solve this problem include gradient methods, Newton’s method, conjugate direction methods, quasi‐Newton methods, and the Nelder–Mead simplex method, to name a few. Let us focus on Newton’s method as being somewhat representative.

      In order to set the stage for Newton’s method, let us first define some operators. The first derivative or gradient of our objective function is denoted ∇f(x) and is defined as

      (1.3-1)   ∇f(x) = [∂f(x)/∂x₁  ∂f(x)/∂x₂  ⋯  ∂f(x)/∂xₙ]ᵀ
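      When an analytic gradient is unavailable, the derivative in (1.3-1) can be approximated numerically; below is a minimal central-difference sketch, with the step size h as our own choice.

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient in (1.3-1)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda x: x[0]**2 + 3*x[1]**2      # example objective of ours
print(grad(f, [1.0, 2.0]))             # approximately [2, 12]
```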

      The second derivative or Hessian of f(x) is defined as

      (1.3-2)   [∇²f(x)]ᵢⱼ = ∂²f(x)/(∂xᵢ ∂xⱼ),  i, j = 1, …, n
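      With the gradient and Hessian in hand, the standard Newton iteration updates x[k + 1] = x[k] − (∇²f(x[k]))⁻¹ ∇f(x[k]), where the bracketed superscript is the iteration number. Below is a minimal sketch of this iteration, using an analytic gradient and Hessian for a quadratic test function of our own.

```python
import numpy as np

def newton(grad, hess, x0, iters=20):
    """Standard Newton iteration: x[k+1] = x[k] - inv(H(x[k])) @ g(x[k])."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))   # solve rather than invert
    return x

# Example: f(x) = x1**2 + 3*x2**2 + x1*x2, our own test function.
grad_f = lambda x: np.array([2*x[0] + x[1], x[0] + 6*x[1]])
hess_f = lambda x: np.array([[2.0, 1.0], [1.0, 6.0]])
print(newton(grad_f, hess_f, [5.0, -3.0]))   # converges to [0, 0]
```

      For a quadratic objective such as this, a single Newton step reaches the stationary point exactly; for general objectives, the iteration is repeated until the gradient is sufficiently small.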

      If x* is a …