Title: Finite Element Analysis
Author: Barna Szabó
Publisher: John Wiley & Sons Limited
Genre: Physics
ISBN: 9781119426462
Preface to the second edition
The first edition of this book, published in 1991, focused on the conceptual and algorithmic development of the finite element method from the perspective of solution verification, that is, the estimation and control of the errors of approximation in terms of the quantities of interest. Since that time the importance of solution verification has become widely recognized. It is a key constituent of predictive computational science, the branch of computational science concerned with the prediction of physical events.
Predictive computational science embraces the formulation of mathematical models, definition of the quantities of interest, code and solution verification, definition of statistical sub‐models, calibration and validation of models, and forecasting physical events with quantified uncertainty. The second edition covers the main conceptual and algorithmic aspects of predictive computational science pertinent to solid mechanics. The formulation and application of design rules for mechanical and structural components subjected to cyclic loading are used for illustration.
Another objective in writing the first edition was to make some fundamentally important results of research in the field of applied mathematics accessible to the engineering community. Speaking generally, engineers and mathematicians view the finite element method very differently. Engineers see the method as a way to construct a numerical problem whose solution is expected to provide quantitative information about the response of some physical system, for example a structural shell, to some form of excitation, such as the application of loads. Their view tends to be element‐oriented: they believe that a sufficiently clever formulation of elements can overcome the various shortcomings of the method.
Mathematicians, on the other hand, view the finite element method as a method for approximating the exact solution of differential equations cast in a variational form. Mathematicians focus on a priori and a posteriori error estimation and error control. In the 1970s adaptive procedures were developed for the construction of sequences of finite element meshes such that the corresponding solutions converged to the exact solution in energy norm at optimal or nearly optimal rates. It was proven in 1981 that convergence in energy norm can also be achieved by increasing the polynomial degrees of the elements on a fixed mesh. The possibility of achieving exponential rates of convergence in energy norm for an important class of problems, which includes the problem of elasticity, was proven and demonstrated in 1984. This required the construction of sequences of finite element meshes and the optimal assignment of polynomial degrees.
Superconvergent methods of extraction of certain quantities of interest (such as the stress intensity factor) from finite element solutions were developed by 1984.
These developments were fundamentally important milestones in a journey toward the emergence of predictive computational science. Our primary objective in publishing this second edition is to provide engineering analysts and software developers with a comprehensive account of the conceptual and algorithmic aspects of verification, validation and uncertainty quantification, illustrated by examples.
Quantification of uncertainty involves the application of methods of data analysis. A brief introduction to the fundamentals of data analysis is presented in this second edition.
We recommend this book to students, engineers and analysts who seek Professional Simulation Engineer (PSE) certification.
We would like to thank Dr. Ricardo Actis for many useful discussions, advice and assistance provided over many years; Dr. Börje Andersson for providing valuable convergence data relating to the solution of an interesting model problem of elasticity, and Professor Raul Tempone for guidance in connection with the application of data analysis procedures.
Barna Szabó and Ivo Babuška
Preface to the first edition
There are many books on the finite element method today. It seems appropriate therefore to say a few words about why this book was written. A brief look at the approximately 30‐year history of the finite element method will not only help explain the reasons but will also serve to put the main points in perspective.
Systematic development of the finite element method for use as an analytical tool in engineering decision‐making processes is usually traced to a paper published in 1956.1 Demand for efficient and reliable numerical methods has been the key motivating factor for the development of the finite element method. The demand was greatly amplified by the needs of the space program in the United States during the 1960s. A great deal was invested into the development of finite element analysis technology in that period.
Early development of the finite element method was performed entirely by engineers. The names of Argyris, Clough, Fraeijs de Veubeke, Gallagher, Irons, Martin, Melosh, Pian, and Zienkiewicz come to mind in this connection.2
Development of the finite element method through the 1960s was based on intuitive reasoning, analogies with naturally discrete systems, such as structural frames, and numerical experimentation. The errors of discretization were controlled by uniform or nearly uniform refinement of the finite element mesh. Mathematical analysis of the finite element method began in the late 1960s. Error estimation techniques were investigated during the 1970s. Adaptive mesh refinement procedures designed to reduce the errors of discretization with improved efficiency received a great deal of attention in this period.3
Numerical experiments conducted in the mid‐1970s indicated that the use of polynomials of progressively increasing degree on a fixed finite element mesh can be much more advantageous than uniform or nearly uniform mesh refinement.4 To distinguish between reducing errors of discretization by mesh refinement and the alternative approach, based on increasing the degree of the polynomial basis functions, the labels h‐version and p‐version gained currency for the following reason: Usually the symbol h is used to represent the size of the finite elements. Convergence occurs when the size of the largest element (h_max) is progressively reduced. Hence the name: h‐version. The polynomial degree of elements is usually denoted by the symbol p. Convergence occurs when the lowest polynomial degree p_min is progressively increased. Hence the name: p‐version. The h‐ and p‐versions are just special applications of the finite element method which, at least in principle, allows changing the finite element mesh concurrently with increasing the polynomial degree of elements. This general approach is usually called the hp‐version of the finite element method. The theoretical basis for the p‐version was established by 1981.5 The understanding of how to combine mesh refinement with p‐extensions most effectively was achieved by the mid‐1980s.6
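The h- versus p-version distinction above can be sketched numerically. The following minimal Python illustration (not from the book; the function choice and error measure are assumptions made for demonstration) compares piecewise polynomial interpolation of a smooth function under mesh refinement at fixed degree against degree elevation on a fixed mesh:

```python
# Hypothetical illustration of h- vs p-version convergence, using
# piecewise Lagrange interpolation of the smooth function f(x) = exp(x)
# on [0, 1] as a stand-in for a finite element approximation.
import numpy as np

def interp_error(n_elems, p):
    """Max error of piecewise degree-p interpolation on a uniform mesh."""
    f = np.exp
    err = 0.0
    edges = np.linspace(0.0, 1.0, n_elems + 1)
    for a, b in zip(edges[:-1], edges[1:]):
        nodes = np.linspace(a, b, p + 1)           # p+1 nodes per element
        coeffs = np.polyfit(nodes, f(nodes), p)    # degree-p interpolant
        xs = np.linspace(a, b, 50)
        err = max(err, np.max(np.abs(np.polyval(coeffs, xs) - f(xs))))
    return err

# h-version: fix p = 1, halve h_max at each step
h_errors = [interp_error(n, 1) for n in (2, 4, 8, 16)]
# p-version: fix a 2-element mesh, raise the degree
p_errors = [interp_error(2, p) for p in (1, 2, 3, 4)]

# For a smooth function, the p-version error falls far faster than the
# algebraic O(h^2) rate of the h-version at the same number of steps.
assert all(e1 > e2 for e1, e2 in zip(h_errors, h_errors[1:]))
assert p_errors[-1] < h_errors[-1]
```

For smooth solutions the p-version error decays exponentially with p, whereas the h-version at fixed p decays only algebraically in h, which is the behavior the passage describes.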
The p‐version was developed primarily for applications in solid mechanics. A closely related recent development for applications primarily in fluid mechanics is the spectral element method.7
The 1980s brought another important development: superconvergent methods for the extraction of engineering data from finite element solutions were developed and demonstrated. At the time of writing, a good understanding exists of how finite element discretizations should be designed and how engineering data should be extracted from finite element solutions for optimal reliability and efficiency.
The