Safety research - new tools for dam analysis

16 November 2005



Yoshihisa Uchita, Hiroaki Noguchi and Victor E. Saouma share their experiences of identifying the state of the art in seismic dam safety, and report on the development of a computational tool to effectively analyse dams, from the simplest through to the most complex structures


Japan is one of the most seismically active regions in the world and as such has taken a leadership role in research and development to mitigate the destructive forces unleashed by a seismic tremor. The Tokyo Electric Power Company (TEPCO) and the Tokyo Electric Power Service Company (TEPSCO) have performed seismic safety assessments of dams. Recognising the computational power currently available and the advances in various related disciplines, TEPCO and TEPSCO have developed a vision of what a modern 3D nonlinear seismic analysis should entail. They seek not only to advance the State of the Art, but as importantly, to transform it into their State of the Practice.

Few organisations have previously embraced such a broad and challenging set of objectives. In the US, the Electric Power Research Institute (EPRI) funded the third author for five years to develop a fracture mechanics based methodology for assessing dam safety. However, the impact of this research remained minimal. More recently, the European Community has funded a five year network programme on the integrity assessment of large dams (NW-IALAD, http://nw-ialad.uibk.ac.at/), but its objective was limited to identifying the state of the art/practice rather than conducting focused research.

Building on the American experience developed by the third author, and on the seismic expertise and needs of dam owners, an international collaboration was initiated six years ago. Ours is a holistic approach to what is one of the most complex and tightly coupled multi-physics problems in civil engineering. First we identify the state of the art and advance it where needed; we publish those results in the scientific literature to share them with others; we develop a computational tool which can effectively and efficiently analyse both the simplest and the most complex structures; and, last but not least, we assess the reliability of this tool through complex model testing.

This paper will share our collective experience in this endeavour. However, before we proceed, it should be emphasised that a peculiarity of concrete dams is the simplicity or complexity with which they can be analysed. In the simplest case, a spreadsheet is all that is needed; at the other end of the spectrum, a supercomputer (or a massively parallel one) is essential. What causes this dichotomy is simple. As for most structures, design is limited to linear elastic analysis, generous safety factors, and simple calculations combined with engineering common sense. Risk analysis, on the other hand, is much more complex. The Federal Energy Regulatory Commission (FERC, http://www.ferc.gov/industries/hydropower.asp) for instance stipulates that a potential failure mode analysis (PFMA) must be undertaken. Hence, a nonlinear finite element analysis must be conducted to determine whether the dam can sustain a given flood, earthquake, blasting, ageing of its concrete, or a combination of the above. The stakes hinging on the outcome of such an evaluation are enormous, and our responsibility is to make sure that dam failure does not occur.

Testing

Nonlinear finite element analysis is highly dependent on the constitutive models (stress-strain or stress-displacement) adopted. These must not only be rigorously derived, but must also be based on, and validated by, laboratory tests.

Material testing – This differs for dams from ‘traditional’ material testing for buildings. A major difference is in the size of specimens and in the type of application-specific tests. Through the EPRI funded research, the third author has investigated the resistance to crack growth of concrete, and of concrete-rock joints, by testing large specimens as shown in Fig 1a. In this test, a wedge is slowly driven between roller bearing supports to open a crack through a concrete specimen, hence measuring its resistance to crack propagation. Figure 1b shows a similar test in which the variation of uplift pressure with crack opening rate is quantified, to better understand the hydrodynamic loads applied on a dam boundary during an earthquake.
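As an illustration only (not the EPRI evaluation procedure itself), the specific fracture energy from such a wedge-splitting test can be estimated as the work of the splitting force over the fractured ligament area. The sketch below assumes a recorded force vs crack-mouth-opening curve; the function name, the synthetic record and all values are hypothetical.

```python
import numpy as np

def fracture_energy(p_split, cmod, ligament_area):
    """Estimate the specific fracture energy G_F (N/m) from a wedge-splitting record.

    p_split       : splitting force history (N)
    cmod          : crack mouth opening displacement history (m)
    ligament_area : area of the fractured ligament (m^2)
    """
    # work done by the splitting force = area under the P-CMOD curve (trapezoidal rule)
    work = np.sum(0.5 * (p_split[1:] + p_split[:-1]) * np.diff(cmod))
    return work / ligament_area

# hypothetical record: force rises to a ~40 kN peak, then softens towards zero
cmod = np.linspace(0.0, 2.0e-3, 200)                    # m
p = 40e3 * (cmod / 2e-4) * np.exp(1.0 - cmod / 2e-4)    # N, synthetic softening curve
print(f"G_F ~ {fracture_energy(p, cmod, 0.3 * 0.3):.0f} N/m")
```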

Field measurements – It is often desirable to extract key physical parameters of the concrete directly in situ; this was attempted, in the EPRI project, through the use of a special probe which records radial displacements while it pressurises a borehole. First, laboratory tests were conducted, then field tests were performed.

Cyclic loading of joints – Another example of tests particularly relevant to concrete dams is the response of joints subjected to reverse cyclic loads (figure 2). As the earthquake shakes the rock/concrete interfaces, there is a degradation of the interface resistance to crack formation/propagation. These tests, performed with funding from the Italian Ministry of Research, are essential prior to the development of a constitutive joint model to be used in strong earthquakes.

Mathematical Modelling

Mathematical modelling of a double curvature arch dam subjected to a strong earthquake is indeed one of the most challenging civil engineering problems. It is a tightly coupled multi-physics problem. One must understand: 1) the thermally induced initial stresses in the arch dam; 2) the nonlinear mechanical response of joints and cracks; 3) the hydrodynamic forces exerted on the dam; 4) the dynamic flow of water inside a crack; 5) wave propagation from the epicentre to the dam; 6) the dam-foundation interaction; 7) the structural model; and, last but not least, 8) the dynamic structural response of this massive concrete structure. Each of these is addressed separately below.

Thermal analysis – Whereas a thermal transient analysis may not be particularly challenging, the determination of the thermally induced stresses, and the simulation of the construction process and of the joint grouting, are delicate computational tasks which may not be easily accomplished with commercial codes.

Nonlinear response of joints – Joints and cracks constitute the primary, if not the only, source of nonlinearity in the stress analysis of a dam. Whereas most commercial codes, as well as researchers, have focused on the so-called smeared crack model, we have felt all along that the discrete crack model should be the one adopted. In the smeared model, the mesh is fixed, and the constitutive models at the Gauss points are modified to reflect the presence of a crack. A major limitation of this model, despite years of research, is the difficulty of capturing the localisation of the crack. In the discrete crack model, the mesh topology reflects the presence of the crack/joint and a special interface element is inserted along the crack.

In the context of a dam, where few cracks/joints exist and their locations are known a priori, there is little doubt that the discrete crack model should be adopted. Hence, much of our effort has gone into improving this model. When used along the concrete/rock discontinuity in a dynamic analysis, this model indeed captures the nonlinear response of the joint through crack openings and normal stress distributions which account for possible (small) cohesive tensile stresses (figure 3).
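To illustrate the idea of a cohesive interface element, the minimal sketch below gives the normal traction across a zero-thickness interface with linear tension softening under monotonic opening. It is not the Merlin joint model; all names, parameters and values are assumptions.

```python
def interface_normal_traction(opening, k_n, f_t, g_f):
    """Normal traction across a zero-thickness interface (monotonic loading only).

    opening : crack opening displacement (m), positive = opening
    k_n     : initial (penalty) normal stiffness (Pa/m)
    f_t     : tensile strength of the joint (Pa)
    g_f     : mode I fracture energy of the joint (N/m)
    """
    w0 = f_t / k_n          # opening at the onset of softening
    wc = 2.0 * g_f / f_t    # opening at which cohesion vanishes (linear softening)
    if opening <= w0:
        return k_n * opening                   # elastic (or contact penalty) branch
    if opening >= wc:
        return 0.0                             # fully open crack, no cohesion left
    return f_t * (wc - opening) / (wc - w0)    # linear cohesive softening branch

for w in (0.0, 5e-6, 5e-5, 5e-4):
    print(w, interface_normal_traction(w, k_n=1e12, f_t=2.0e6, g_f=150.0))
```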

Hydrodynamic forces – Westergaard’s (or Zangar’s for inclined upstream faces) added mass concept is perfectly satisfactory for gravity dams. However, for an arch dam, one has to model the reservoir. In the context of a time domain nonlinear analysis, the simplest (and yet accurate) way to model the water reservoir is through a displacement based element formulation for a nearly incompressible material. Without going into too much detail, these special elements require selective reduced integration and are then perfectly capable of modelling the fluid reservoir. The only disadvantage of this method is the presence of three degrees of freedom at each node, while all that is really needed is the pressure.
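For gravity dams, the classical Westergaard coefficient can be lumped into nodal added masses along the upstream face. A minimal sketch follows; the function name, tributary areas and numbers are hypothetical.

```python
import numpy as np

def westergaard_added_mass(depth_h, y, tributary_area, rho_w=1000.0):
    """Westergaard added mass lumped at an upstream node of a (vertical) face.

    depth_h        : total reservoir depth (m)
    y              : depth of the node below the free surface (m)
    tributary_area : upstream face area tributary to the node (m^2)
    """
    # classical parabolic distribution: added mass per unit area = 7/8 * rho * sqrt(H*y)
    return (7.0 / 8.0) * rho_w * np.sqrt(depth_h * y) * tributary_area

# e.g. 100 m deep reservoir, node 60 m below the surface, 25 m^2 tributary area
print(westergaard_added_mass(100.0, 60.0, 25.0), "kg of added mass")
```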

Dynamic uplift – During an earthquake, joints open and cracks may form. Currently, there is much controversy as to whether water can flow into the newly formed crack, and what the dynamic uplift is. There is strong experimental evidence that during crack opening and closure we have, respectively, a decrease or an increase of uplift pressure. This behaviour (figure 4) must therefore be accounted for, with the pressure distribution automatically adjusted as the crack propagates.
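Purely as an illustration of such a rate dependence (and not the model of figure 4), a crude interpolation in which uplift drops while the crack opens faster than water can follow, and rises while it closes, might look as follows; every name and coefficient is an assumption.

```python
def illustrative_uplift(p_hydrostatic, opening_rate, rate_ref=1.0e-2):
    """Crude, purely illustrative uplift pressure inside an opening/closing crack.

    p_hydrostatic : full reservoir pressure at the point considered (Pa)
    opening_rate  : crack opening rate (m/s); positive = opening, negative = closing
    rate_ref      : hypothetical rate at which the effect saturates (m/s)
    """
    # scale factor saturates at +/-1; opening lowers the pressure, closing raises it
    factor = max(-1.0, min(1.0, opening_rate / rate_ref))
    return p_hydrostatic * (1.0 - 0.5 * factor)

print(illustrative_uplift(0.5e6, +5e-3))  # opening crack: reduced uplift
print(illustrative_uplift(0.5e6, -5e-3))  # closing crack: increased uplift
```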

Wave propagation (deconvolution) – Earthquake records (accelerograms) are usually measured or determined at the surface. Yet the analysis requires modelling of the rock foundation (and its mass), at the base of which the accelerations are applied. Again, in simpler analyses the surface acceleration is simply applied at the base; a more rigorous approach, however, requires deconvolution of the signal under the assumption of a linear elastic rock foundation. This is done by first applying the (surface) recorded signal at the base, computing through a preliminary finite element analysis the induced acceleration at the surface, transferring both accelerations from the time to the frequency domain, and computing the transfer function. In the general three dimensional case, there will be nine transfer functions which constitute the convolution matrix. The inverse of that matrix times the input accelerations gives the deconvoluted signal. Hence, when the deconvoluted signal is applied at the base of the foundation, the induced acceleration at the surface will be nearly identical to the one recorded (figure 5).
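A one-dimensional sketch of this frequency domain step is given below (the 3D case requires the full 3x3 transfer matrix). Function and variable names are assumptions, and the small regularisation term is added only to avoid division by near-zero spectral values.

```python
import numpy as np

def deconvolve_base_motion(recorded_surface, computed_surface, eps=1e-8):
    """1D deconvolution sketch.

    recorded_surface : accelerogram recorded at the free surface; in the preliminary
                       elastic FE run this same signal was applied at the base
    computed_surface : surface acceleration returned by that preliminary run
    """
    rec_f = np.fft.rfft(recorded_surface)
    com_f = np.fft.rfft(computed_surface)
    # transfer function: base input -> surface response
    h = com_f / (rec_f + eps * np.max(np.abs(rec_f)))
    # invert it so that, applied at the base, the surface motion reproduces the record
    base_f = rec_f / (h + eps)
    return np.fft.irfft(base_f, n=len(recorded_surface))
```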

Soil-structure interaction (free field) – Whereas many simpler analyses ignore the foundation mass, it must be accounted for in a rigorous nonlinear analysis. A major complication is the modelling of the so-called free field, as the seismic wave will otherwise be artificially (numerically) reflected by the lateral edges of the model. This problem has long been recognised. In the context of a finite element (as opposed to boundary element) analysis, it is most efficiently addressed through the application of boundary conditions which absorb these waves (hence we often speak of ‘silent’ boundary conditions, or of radiation damping). The classical (and partial) solution to this problem is the application of Lysmer dashpots. However, in this model the free field is assumed to be rigid. A more rigorous model, developed by Miura, consists of a separate analysis of each of the free fields (two in 2D and four in 3D) for each of the three components of the earthquake record. The determined velocities are then applied as boundary conditions to the dam model, which must also have the traditional Lysmer supports (figure 6). While conceptually simple, this can be a ‘labour intensive’ task, as velocities must be extracted for each time step from many different files and then inserted into the main analysis input file.
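For the Lysmer dashpots themselves, the normal and tangential coefficients are simply the product of rock density, wave velocity and tributary boundary area, tuned to P and S waves respectively. A minimal sketch (names and values hypothetical) is:

```python
import numpy as np

def lysmer_dashpots(rho, E, nu, tributary_area):
    """Viscous dashpot coefficients for a 'silent' (radiating) boundary node.

    rho, E, nu     : rock mass density (kg/m^3), Young's modulus (Pa), Poisson ratio
    tributary_area : boundary area tributary to the node (m^2)

    Returns (c_normal, c_tangential), tuned to P and S waves respectively.
    """
    G = E / (2.0 * (1.0 + nu))                               # shear modulus
    M = E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu))     # constrained modulus
    v_p = np.sqrt(M / rho)                                    # P-wave velocity
    v_s = np.sqrt(G / rho)                                    # S-wave velocity
    return rho * v_p * tributary_area, rho * v_s * tributary_area

print(lysmer_dashpots(rho=2600.0, E=30e9, nu=0.2, tributary_area=10.0))
```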

Structural model – Amongst the many ‘fine-grain’ issues in the structural model (not previously discussed), we list the need for an initial static analysis, followed by a restart (with displacements reset to zero) of the dynamic analysis using dynamic elastic properties. Concrete and rock should also have different Rayleigh damping coefficients. Different joint models should be used depending on whether pure mode I (simpler) or mixed mode crack propagation is expected. Highly fissured and fragmented rock (as is the case in Japan) should be given a nonlinear model, and major rock joints should be modelled.
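For instance, the pair of Rayleigh coefficients for a given material (one pair for concrete, another for rock) can be obtained from a target damping ratio at two control frequencies. A minimal sketch, with purely illustrative values, is:

```python
import numpy as np

def rayleigh_coefficients(f1, f2, zeta):
    """Mass- and stiffness-proportional Rayleigh coefficients (alpha, beta)
    giving damping ratio `zeta` at the two control frequencies f1, f2 (Hz)."""
    w1, w2 = 2.0 * np.pi * f1, 2.0 * np.pi * f2
    alpha = 2.0 * zeta * w1 * w2 / (w1 + w2)   # mass-proportional term
    beta = 2.0 * zeta / (w1 + w2)              # stiffness-proportional term
    return alpha, beta

# e.g. 5% damping anchored between 3 Hz and 12 Hz
print(rayleigh_coefficients(3.0, 12.0, 0.05))
```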

Dynamic analysis – Whereas until fairly recently a transient analysis was considered computationally prohibitive, this is clearly no longer the case. With modern computers, time history analysis of even small dams is becoming the norm. Time integration can be either implicit or explicit. In our implicit analyses, it rapidly became evident that the classical Newmark β method is not satisfactory, as it results in spurious high frequency responses once the cracks are activated. Hence, the Hughes α method, which provides numerical damping of the high frequency content, is routinely used. Its beneficial effect is dramatically illustrated by figure 7. As is well known, a major advantage of this method is that it is unconditionally stable.
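A minimal sketch of one Hughes (HHT) α step for a linear system is given below; the actual implicit implementation in Merlin (with its nonlinear iterations) is not reproduced here, and all names and values are assumptions.

```python
import numpy as np

def hht_alpha_step(M, C, K, d, v, a, f_n, f_np1, dt, alpha=-0.1):
    """One HHT-alpha step for M a + C v + K d = f (alpha in [-1/3, 0]).

    alpha = 0 recovers the average-acceleration Newmark method; a negative alpha
    introduces numerical damping of the high frequency content.
    """
    beta = (1.0 - alpha) ** 2 / 4.0
    gamma = 0.5 - alpha

    # Newmark predictors with the new acceleration still unknown
    d_pred = d + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a

    # effective system solved for the new acceleration
    lhs = M + (1.0 + alpha) * (gamma * dt * C + beta * dt**2 * K)
    f_mid = (1.0 + alpha) * f_np1 - alpha * f_n
    rhs = f_mid - (1.0 + alpha) * (C @ v_pred + K @ d_pred) + alpha * (C @ v + K @ d)
    a_new = np.linalg.solve(lhs, rhs)

    return d_pred + beta * dt**2 * a_new, v_pred + gamma * dt * a_new, a_new

# single-DOF free vibration example: 10 Hz oscillator, 2% viscous damping
M = np.array([[1.0]])
K = np.array([[(2 * np.pi * 10.0) ** 2]])
C = np.array([[2 * 0.02 * 2 * np.pi * 10.0]])
d, v = np.array([1e-3]), np.zeros(1)
a = np.linalg.solve(M, -C @ v - K @ d)
for _ in range(5):
    d, v, a = hht_alpha_step(M, C, K, d, v, a, np.zeros(1), np.zeros(1), dt=1e-3)
print(d)
```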

More recently, we have modified our code to accommodate an explicit method (which, being only conditionally stable, requires very small time steps). A major advantage of explicit time integration is that the global structural stiffness matrix need not be assembled, and hence the code can be relatively easily parallelised to run on multiple CPUs. This was done through the MPI library, with METIS used for domain decomposition. It is our strong expectation that complex 3D nonlinear analyses of dam-foundation-reservoir systems can most effectively be performed in parallel on a network of computers connected by a gigabit Ethernet network.
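The price of the explicit scheme is the Courant-type limit on the time step, which must be smaller than the transit time of a dilatational wave across the smallest element. A back-of-the-envelope sketch (names and values illustrative) is:

```python
import numpy as np

def explicit_time_step(min_element_size, E, nu, rho, safety=0.8):
    """Courant-type stability estimate for an explicit integrator.

    The step must be smaller than the time a P wave needs to cross the smallest
    element, which is why explicit analyses need so many steps.
    """
    M = E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu))  # constrained modulus
    v_p = np.sqrt(M / rho)                                 # P-wave speed
    return safety * min_element_size / v_p

# e.g. 0.5 m elements in concrete: a step on the order of a tenth of a millisecond
print(explicit_time_step(0.5, E=30e9, nu=0.2, rho=2400.0))
```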

Numerical modelling

Engineers are under severe time constraints and must maximise their performance. Hence, computational tools which can easily: 1) translate their problems into a numerical model; 2) perform a state of the art analysis; and 3) facilitate the digestion of analysis results and report preparation, are essential. With those three objectives in mind, we developed a layered approach. First, a computer program, Beaver, was written to allow the dam engineer to geometrically define an existing dam. Thus, arches, joints, foundations, reservoirs, pool elevations and uplift models are entered. This data is then converted into a boundary representation of the dam, one which can be meshed to prepare a full finite element analysis input data file. This second program, Kumonosu, can also be used as a standalone mesh generator. Finally, the kernel of our environment is the finite element code Merlin, and the specially written graphical post-processor, Spider, is used to visualise dam analysis results. A separate tool for the optimum layout of new arch dams, integrated with Beaver, is currently under consideration.

Dam definition – Double curvature arch dams are notoriously complex geometries to mesh for a finite element program. As such, the first layer between the user and the finite element analysis is Beaver. The user must specify the coordinates of the various intersections: arch-foundation, cantilever-foundation and arch-cantilever. Furthermore, one should characterise the arch segments as circular, linear, parabolic or elliptical. Finally, users should specify loading conditions (such as upstream and downstream pool elevations for various increments); boundary conditions, such as free, restrained, or ‘silent’ (for radiation damping); and material (especially joint) properties. Beaver will in turn generate a boundary representation of the dam, with cracks and joints defined, ready for the next program to take over. It should be noted that such a boundary representation could have been entered directly into Kumonosu; however, this direct approach is error prone and lengthy.

Figure 8 shows the control page for the geometry, through which the general parameters defining the dam are entered. The user must define the arch-abutment and arch-joint intersections through an ‘Excel-like’ environment for both the upstream and downstream faces (Figure 9).

In an arch dam it is particularly important that the dam/foundation interface be properly represented; it is not always a horizontal surface, and does not necessarily correspond to an arch (Figure 10).

The foundation must be defined in a flexible manner, allowing the user to specify its extent in all directions, non-uniform topology and the presence of joints. Similarly, reservoir parameters must be properly defined to reflect the type of fluid/structure interaction (Figure 11).

Pool elevation is not necessarily constant, and may very well vary with time/increments; hence only the ‘wet’ elements need be subjected to the hydrostatic load. If need be, Westergaard added mass and a dynamic uplift model can also be defined.

Finally, foundation boundary conditions can be particularly time consuming to specify, especially if energy absorbing (radiating) boundary conditions are to be applied through dashpots whose parameters are automatically tuned to either S or P waves (Figure 12).

Once the model has been completely defined, the user can examine the dam physical model (Figure 13), which displays the boundary definition of the dam with hidden line removal, or with shaded surfaces for better visualisation. At this point, the file can be automatically loaded by the next program, Kumonosu, to generate the actual finite element mesh with no further data entry.

Finite element mesh generation – Once the dam has been defined, Kumonosu ‘takes over’, allowing the user to refine the data input, and then generates the input data file for the dam analysis. This part of the analysis process can also be quite complex, as details of the boundary conditions, loads and material properties must be defined.

Finite element analysis – The kernel of our computational tools is the finite element code Merlin, a 3D nonlinear dynamic finite element code which has been under continuous development for over 12 years. Being developed ‘in-house’ (as are all the other programs), we have the source code and the flexibility to easily modify it to address new needs of the dam engineering profession.

Merlin has a library of over 25 constitutive models, 30 element types and different algorithms for nonlinear analysis (including indirect control/line search). In addition to stress analysis, it can also solve transient heat transfer problems (to determine the temperature field for AAR analysis) and steady state seepage problems (to determine the initial uplift pressure in complex geological formations). Implemented in Merlin are all the desirable features previously discussed, and others (such as a comprehensive model for AAR largely based on the experimental work undertaken at the Laboratoire Central des Ponts et Chaussées in France).

Not surprisingly, a 3D nonlinear analysis with dam and reservoir can take well over two weeks on a Pentium IV. Hence, to address this severe constraint, an explicit version of the program was developed and then parallelised using the MPI library (http://www-unix.mcs.anl.gov/mpi/). Thus, not only can the seismic analysis of a dam with reservoir be performed in a matter of hours, but the analysis of dams subjected to impact or explosive loads also becomes possible. In turn, the preprocessor can perform domain decomposition using METIS (http://www-users.cs.umn.edu/~karypis/metis).

Graphical post-processor – Whether running a simple or complex analysis, engineers no longer limit themselves to scrutinising pages of an output data file. Not only are graphical post-processors essential, but they must be ‘jazzed-up’ to satisfy the wishes of a new generation that grew up with electronic games. Last but not least, the engineer should be able to ‘data-mine’ the information needed to prepare analysis reports. This was accomplished in our case with our in-house program Spider. As this tool was developed by engineers for engineers, it truly responds to all our needs. Hence, Spider accepts input data from:

• Eigenvalue analysis: display and animation of mode shapes (Figure 14). The user selects the mode shape, and Spider will display it in static mode, or through an animation. The animation can then be saved to an .AVI file.

• Time history analysis: real time display of displacements and accelerations while the (very long) numerical simulation is under way. Users can select in real time the node and degree of freedom for which accelerations are to be displayed (histogram at the bottom, bar chart and numerical values at the top), through a graphical user interface (Figure 15). Furthermore, the user can select two accelerograms and seek the evaluation of the transfer function between the two signals (FFT and transfer function both being internally computed), or plot accelerations and FFTs (Figure 16). AVI files showing the animation of the dam’s response are also possible. Finally, Spider is also set up to read various analysis results and automatically determine the deconvoluted earthquake accelerogram which should be applied.

• Nonlinear finite element analysis: provides practically all the major (and minor) features found in other finite element post-processors. Aside from the usual displays of meshes, contour lines, contour surfaces, carpet plots, vector plots and principal plots, Spider provides a number of other features: display of individual groups, the possibility to slice the structure, different types of displays for a sliced structure, and many more options (figure 17).

It should be noted that Spider is in no way ‘hard-wired’ to Merlin, and that it can be used as a stand-alone post-processor for other finite element codes. The user has to supply a .pst, .rtv or .eig file (standard post-processor, real time view, or eigenvalue, respectively) written in accordance with Spider’s protocol. Aside from the connectivity and coordinates, the user must specify nodal quantities. These can be scalars, vectors, or tensors of order two, along with the corresponding labels. Spider will internally compute eigenvalues (principal stresses/strains) and eigenvectors (principal plots). It is also possible either to provide Spider with a set of x-y plots (coming from the analysis), or to let Spider internally generate a histogram.
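For example, the principal values and directions that such a post-processor derives from a nodal stress tensor amount to an eigen-decomposition; a minimal sketch (not Spider’s actual code, values hypothetical) is:

```python
import numpy as np

def principal_stresses(sig):
    """Principal stresses and directions from a symmetric 3x3 nodal stress tensor.

    Returns (values, vectors) sorted from most tensile to most compressive.
    """
    vals, vecs = np.linalg.eigh(np.asarray(sig, dtype=float))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

sig = np.array([[2.0e6, 0.5e6, 0.0],
                [0.5e6, -1.0e6, 0.0],
                [0.0,    0.0,  -3.0e6]])
vals, vecs = principal_stresses(sig)
print(vals)  # sigma_1 >= sigma_2 >= sigma_3
```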

Validation

Society can no longer afford the failure of a major piece of infrastructure before it revises its mathematical predictive models. This is particularly true for dams, where increased sophistication in modelling and narrowing factors of safety (economic hardship) impose upon us the duty to verify the accuracy of the mathematical models through controlled laboratory testing. All model testing must satisfy Buckingham’s laws of similitude; for most ordinary structures this is seldom a concern – however, for dams it is. Hence, for a dam model n times smaller than the corresponding prototype, we must increase the gravitational forces by a factor of n. Furthermore, a 10 second earthquake hitting the prototype should be modelled by a 10/n second model excitation (hence if n is 100, that 10 sec earthquake must be applied in 0.1 sec!).
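A small sketch of these basic similitude factors for a 1/n scale model spun at n g follows; the function name is hypothetical, and the factors listed are the standard centrifuge scaling relations.

```python
def centrifuge_scale_factors(n):
    """Basic similitude factors for a 1/n scale dam model spun at n g.

    Stresses match the prototype because gravity is increased by n, while
    lengths scale by 1/n and dynamic time by 1/n, as noted above.
    """
    return {
        "length": 1.0 / n,
        "gravity": float(n),
        "stress": 1.0,
        "dynamic time": 1.0 / n,
        "frequency": float(n),
    }

# a 10 s prototype earthquake on a 1/100 scale model lasts 0.1 s at 100 g
factors = centrifuge_scale_factors(100)
print(10.0 * factors["dynamic time"], "s")
```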

Whereas we do not, yet, advocate the use of centrifuges to assess the safety of an actual dam, we certainly recognise the value of such tests for validating numerical models. Indeed, this may very well constitute the only safe and reliable way to verify the accuracy of a numerical code for dam engineering.

Preliminary centrifuge tests of dams were performed in Boulder in the early 1990s through our EPRI project. More recently, we first examined the development of uplift pressure beneath the dam and assessed the accuracy of our models (Figure 18).

Building on the experience gained from the previous test, a new test programme was initiated to dynamically excite and crack impounded dams inside a centrifuge. Hence, the centrifuge facility of the Obayashi Corporation was used. A model gravity dam was cast with a very shallow foundation (Figure 19). Besides strain gauges, the dam was instrumented with crack gauges, laser based displacement transducers, and accelerometers. Whereas no attempt was made to model uplift, the impounded dam was excited by a series of harmonic excitations (five cycles each), with increasing magnitudes. Following each excitation, a ‘white-noise’ excitation was applied, and through the determination of the transfer function between base and crest, damage (cracks) was detected.

Again, the objective was not to test a particular dam, but rather to test a generic and representative geometry which could be used to assess our computer program Merlin. As with the uplift investigation, the numerical prediction of crack propagation and crest acceleration was very satisfactory (Figure 20).

Conclusions

Historically, dam engineering has posed some of the most complex and challenging engineering problems. Hence, solutions initially developed for dams were subsequently extended to other engineering disciplines. Furthermore, given what is at stake, we can no longer satisfy ourselves with simple engineering approaches; instead we must take advantage of the latest developments and lead the way in civil engineering research and development. This is being achieved through an international collaboration between Japan and the US, and it is hoped that this brief exposure to our research and development will entice others to renew their interest in dam engineering.


Author Info:

Yoshihisa Uchita is the Dam Specialist Manager of Dam Engineering at the Tokyo Electric Power Company; Hiroaki Noguchi is the manager of the Hydroelectric Engineering Department at the Tokyo Electric Power Service Company; and Victor E. Saouma is a Professor of Civil Engineering at the University of Colorado in Boulder

The assistance of Dr. Jan Cervenka, Dr. Eric Hansen, Dr. Gary Haussman, Dr. Eric Puntel and Dr Daniel Rypl in various aspects of the software development is gratefully acknowledged.

This project would not have been possible without the continuous guidance and software evaluation of Mr. Takashi Shimpo and Mr. Yoshinori Yagome.
