A critique of embankment dam analysis

17 September 2001



Advances in geotechnical engineering and mathematical modelling have led to improved estimates of potential dam performance, but these are still approximations. As Robert B Jansen explains, deficient input data are unlikely to give an accurate representation of real embankment response under extreme seismic loading.


The engineering of dams inherently requires the weighing of risks and, in some cases, may be facilitated by probability analysis. Care must be exercised in the extrapolation of statistics over a long term. Conventionally, this may entail projections of records accumulated over a hundred years or so with the objective of estimating what might happen in thousands of years. In examining a history of events for guidance, some of the most valuable information will be drawn from thorough study of individual accidents and failures and of the particular features of the sites and the structures that were involved. The main thrust of risk analysis must be to identify the potentially detrimental conditions and the required preventive or corrective measures.

Earthquake probability

As applied to seismic analysis, probability studies must be tempered by consideration of the nature of quakes, in which rock masses fracture and energy is released when strain capabilities are exceeded. Geologic conditions limit the magnitude of the earthquake that can occur in a given location. Motions radiating from the epicentre are set by the type, speed, depth and direction of fault rupture. Variations in movement may be attributable to wave focusing, rebound, reflection, or resonance. In a fault zone that has been subjected to repeated movements, the accumulation of crushed material may have a significant attenuating effect. There is no assurance that the locations and characteristics of future ground motions will follow historical patterns.

Assessment of possible earthquake recurrence frequencies may be facilitated by study of the rates of strain development and fault slip that might be involved. In the US, such data are more readily available in the west, where deterministic analyses are applicable. This is in contrast to some central and eastern regions of the country where fault systems are hidden by deep overburden, and probabilistic methods therefore have been used. Still, there are extensive files on the local effects of large quakes, including those centred in the New Madrid, Missouri area in 1811-1812 and near Charleston, South Carolina in 1886. These data can be used to estimate potential seismic impact at specific sites without probabilistic extension of the records. Depending upon the mechanism of development of stress and strain at each seismic source in the region, future events originating at that source might not exceed extremes already on record.

Analysis of seismic probability commonly has been based on an empirical equation discussed by Richter in 1958:

log₁₀N = a – bM

Here, N is the cumulative number of quakes of magnitude M or greater per year in a given seismic source zone, and the terms a and b are constants for each source.

This relationship evidently is not reliably applicable for Richter magnitudes larger than about 5.0 (Krinitzsky 1993), which is the low end of the damaging range. In extending the available seismic records to distant future times, there is no way to predict where and when major shaking might occur on any of the faults capable of generating such events, or to estimate the patterns of stress, strain, rupture, energy release and seismic wave propagation. Although the equation apparently is reasonably representative of ordinary lesser shocks, it is not likely to be valid in the range of interest for design and analysis of dams to withstand extreme ground motions.
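To make the arithmetic concrete, the minimal Python sketch below turns the relation into mean return periods for an assumed source zone. The constants a and b are hypothetical placeholders, not fitted values; the point is how strongly the extrapolated return periods at large magnitudes depend on a line fitted mostly to small shocks.

```python
def annual_rate(magnitude, a, b):
    """Gutenberg-Richter recurrence: annual number of quakes of at
    least the given magnitude, N = 10**(a - b*M)."""
    return 10 ** (a - b * magnitude)

def return_period_years(magnitude, a, b):
    """Mean recurrence interval implied by the relation."""
    return 1.0 / annual_rate(magnitude, a, b)

# Hypothetical constants for an assumed source zone; real values must
# be fitted to the local earthquake catalogue.
a, b = 3.5, 0.9

for m in (5.0, 6.0, 7.0, 8.0):
    print(f"M >= {m}: return period ~ {return_period_years(m, a, b):,.0f} years")
```

Each unit of magnitude multiplies the implied return period by 10^b, so the figures printed for M7 and M8 rest almost entirely on extrapolation beyond the M5 limit of demonstrated validity noted above.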

In some areas, earthquake records have been extended to prehistoric times by paleoseismology. Offset strata in faults examined by trenching have been dated. In a location on the San Andreas fault zone near Los Angeles, at least a dozen large quakes have been identified as having occurred in a period of about 2000 years, at recurrence intervals ranging from 65 to 270 years. Such investigations have been conducted on faults in many countries, providing a growing volume of data on frequencies of occurrence of major shakes. However, there are many regions, including much of the eastern US, where significant faults are not accessible for study.

Response to earthquake

Earthquakes are usually the most severe test of analytical methods. Practically all calculations of embankment response are estimates, regardless of the relative complexity of the procedures. The use of the terms ‘overestimated’ or ‘underestimated’ implies that the computed result is higher or lower than a true value. However, unless the base of comparison is a verified number, nothing has been proven.

The performance of an embankment dam impacted by an earthquake will depend on site configuration and foundation conditions; geometry and materials of the fill; internal and external water pressures; and characteristics of the input seismic motions, including their duration and their frequencies and amplitudes in all directions. These factors will determine the patterns of accelerations and other variations within the dam, and the consequent deformation.

Numerical model

Simulation of embankment performance by a numerical model requires recognition of the variability of the parameters, and the possibly changing capability of the dam itself. Careful sampling and testing are required to represent as closely as possible the true project conditions, including variations in borrow materials, water content, placement and compaction methods, consolidation and drainage.

Because of the many factors involved in embankment dam performance, a complete analysis of the possible responses to imposed loads can be expected to be very complicated. Intricate procedures therefore may appear appropriate and more promising than those that are less laborious. Nonetheless, methods should not be applied unreservedly unless they give consistently dependable results when used by competent analysts.

Advanced procedures, including the finite element method, must be used with an understanding of their merits and deficiencies. Accuracy depends heavily on the characterisation of the various embankment materials, which is particularly difficult for the seismic case. Analysis may be very sensitive to variations in key parameters. A realistic calculation has to consider the changes beyond the elastic range, including the energy dissipated in deforming and cracking. Seismic analyses based on vibration of small laboratory specimens can only approximate the actual disturbance of an embankment mass during a quake. Mathematical modelling ideally could be enhanced by in situ field tests or by testing of undisturbed samples in the in situ state of stress. Creating a realistic model is difficult because of the complicated relationship between input soil parameters and soil stiffness.

Finite element analysis

The merits of finite element analysis have been demonstrated where variations of constituent materials and of internal water pressures were not major factors, as at some rockfills with asphalt or concrete facings. The method has been applied successfully to many embankments for loadings under construction and under normal operating conditions, as cited by Duncan in 1996. For any loading, the value of the method obviously depends on how closely the model represents the real dam. Duncan referred to some of the possible sources of discrepancies, including:

• Density.

• Water content.

• Quantity and quality of test data.

• Construction sequence.

• Stress-strain relationships.

• Shape of the dam site.

• Field measurements.

The influence of analytical input may range widely. For example, a difference of just a few percent in assumed water content of earthfill as compacted in the dam could result in a major change in computed deformation.

In some cases where computations by state-of-the-art methods have been compared with observations of dam behaviour, they were substantially in error. In 1996, in a non-linear finite element analysis (computer code GEFDYN) of a zoned earthfill shaken by the 1989 Loma Prieta earthquake (Ms = 7.1) in California, the calculated pore pressure rise was ten times the actual rise and the calculated crest settlement was four times the measured settlement. The unrealistic estimated deformation would be at the upper limit of recorded experience for such an embankment if it had been impacted by a quake one magnitude higher. The dam is founded on firm bedrock, at a site approximately 20km from the quake epicentre. It was selected for analysis specifically to test the finite-element approach on an embankment where adequate data were available, including reliable measurements of response to the 1989 event. The calculations were made by dedicated advocates of the method who carefully avoided subjective adjustments that could have been tried, knowing the true behaviour of the dam.

Attempts were made to critique the erroneous results by re-examining the assumptions that had been made regarding the stiffness and the degree of saturation of the embankment. In fact, the analysis was based on a large volume of information from exploration, testing and instrumentation (including observation wells and pneumatic piezometers) developed by several investigators. Because the computed values were too high, they might have been rationalised as ‘conservative’. However, with a different set of assumptions, they might have been too low and therefore could have been called ‘unconservative’. In either case, the implied degree of conservatism of the method would be misleading.

In analysis of embankments, the finite element method has been brought to its present level of acceptance by exhaustive efforts of distinguished researchers over many years. The practising engineer, while recognising the special insights gained from such study, would be well advised to add the insurance of parametric bracketing, plus comparison by simplified approaches drawn from field experiences.
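Parametric bracketing of that kind is easy to automate: whatever response estimate is in use is simply rerun over plausible ranges of the uncertain inputs, and the spread is reported rather than a single number. The Python sketch below shows the pattern with a toy stand-in for the response calculation; the bracket values and the stand-in expression are hypothetical and have no physical standing.

```python
from itertools import product

def estimate_settlement(shear_modulus_pa, water_content, pga_g):
    """Stand-in for whatever response estimate is in use (chart
    method, FE run, etc.). This toy expression is only here so the
    bracketing loop runs; it is not a real settlement model."""
    return 2.0 * pga_g * (1.0 + 10.0 * water_content) * (1e8 / shear_modulus_pa)

# Plausible lower/upper bounds for each uncertain input (hypothetical).
brackets = {
    "shear_modulus_pa": (6e7, 1.2e8),
    "water_content":    (0.12, 0.16),
    "pga_g":            (0.30, 0.50),
}

# Evaluate every corner of the parameter box and report the spread.
corners = [estimate_settlement(g, w, a)
           for g, w, a in product(*brackets.values())]
print(f"settlement bracket: {min(corners):.2f} to {max(corners):.2f} m")
```

Reporting the full bracket, rather than a single computed value, makes the sensitivity to uncertain inputs explicit instead of leaving it buried in the assumptions.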

Liquefaction analysis

‘Simplified’ analyses may entail considerable effort. For example, liquefaction analysis based on work by Professor H B Seed and his associates in 1983, as modified by more recent studies, involves a dozen procedural steps for each dam/foundation section being examined. Graphs by Seed and others were drawn from data obtained at more than 100 sites in North and South America, Japan and China, mostly in level terrain. The basic curves of demarcation between liquefiable and non-liquefiable soils were expressed as a relation between the standard penetration blow counts (N) and the ratio of the induced shear stress to the initial vertical effective stress. In areas of liquefaction, effort was made to measure N-values in similar deposits in adjoining locations to obtain data representative of pre-earthquake conditions.
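The induced stress ratio used in that demarcation is commonly estimated with the widely published Seed-Idriss simplified expression. The Python sketch below is a minimal rendering of it, using one of several published linear approximations for the depth-reduction factor; the soil-profile numbers in the example are hypothetical.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified estimate of the induced cyclic stress
    ratio, CSR = tau_avg / sigma'_v0.

    a_max_g     -- peak ground acceleration as a fraction of g
    sigma_v     -- total vertical stress at the depth of interest (kPa)
    sigma_v_eff -- initial vertical effective stress (kPa)
    depth_m     -- depth below ground surface (m)
    """
    # Crude linear depth-reduction factor r_d, one of several published
    # approximations; reasonable only at shallow depths.
    r_d = max(1.0 - 0.00765 * depth_m, 0.5)
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# Hypothetical example: loose sand 6m down, water table at 2m.
csr = cyclic_stress_ratio(a_max_g=0.25, sigma_v=110.0,
                          sigma_v_eff=71.0, depth_m=6.0)
print(f"CSR ~ {csr:.2f}")  # compared against the N-value demarcation curve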

In 1995, Fear and McRoberts offered graphs based upon re-analysis of the basic data collected by Seed et al. They stated that the liquefaction threshold might range widely, depending on soil grain-size distribution and site drainage. In a discussion of the Fear-McRoberts paper, Liao (1996) cautioned that, although silty sands usually can be expected to be less susceptible to liquefaction than clean sands, the effects of the fines will depend upon their plasticity. The influence of soil gradation has been researched by Seed (1987) and others.

Continual review

Obermeier (1996), in another discussion of the paper by Fear and McRoberts, pointed to misidentification of the source beds as a possibly significant reason for the wide range in indicated thresholds. Standard penetration testing in a boring next to a sand boil will not necessarily give values representative of the liquefaction source if lateral movement of the affected materials has occurred.

Arango recommended a new set of magnitude scaling factors in 1996 to replace criteria developed by Seed and Idriss (1982). Arango stated that his factors, based on energy concepts, appropriately represent field conditions and ‘avoid the limitations and extrapolations of the laboratory-based derivation by Seed and Idriss’.
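Magnitude scaling of this kind is mechanically simple to apply. The sketch below interpolates a table of approximate Seed-Idriss (1982) scaling factors, as commonly tabulated in later summaries, to adjust a cyclic resistance ratio referenced to magnitude 7.5; the tabulated values are approximate, and Arango's energy-based factors would be substituted in exactly the same way.

```python
# Approximate Seed-Idriss (1982) magnitude scaling factors as commonly
# tabulated; values are approximate, and alternative sets such as
# Arango (1996) would replace this table without other changes.
MSF_TABLE = {5.5: 1.43, 6.0: 1.32, 6.5: 1.19, 7.0: 1.08,
             7.5: 1.00, 8.0: 0.94, 8.5: 0.89}

def scaled_resistance(crr_75, magnitude):
    """Scale a cyclic resistance ratio referenced to M7.5 to another
    magnitude by linear interpolation in the MSF table."""
    mags = sorted(MSF_TABLE)
    if magnitude <= mags[0]:
        return crr_75 * MSF_TABLE[mags[0]]
    if magnitude >= mags[-1]:
        return crr_75 * MSF_TABLE[mags[-1]]
    for lo, hi in zip(mags, mags[1:]):
        if lo <= magnitude <= hi:
            f = (magnitude - lo) / (hi - lo)
            msf = MSF_TABLE[lo] + f * (MSF_TABLE[hi] - MSF_TABLE[lo])
            return crr_75 * msf

print(scaled_resistance(crr_75=0.20, magnitude=6.75))  # ~0.20 * 1.14
```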

All of these considerations serve to emphasise that ‘simplified’ liquefaction analysis is not simple. It does have the merit of a base in field measurements of the effects of real events, drawn from the works of many investigators.

Deformation analysis

Various methods have been advocated for study of deformation potential in response to seismic shaking. The Makdisi-Seed procedure of 1978 involves use of graphs developed by equivalent linear finite element analysis. Makdisi and Seed demonstrated the method by calculating displacement of a sliding mass extending through the full height of the embankment using its maximum average acceleration.

In most cases, deformation tends to concentrate in the upper half of the dam. In the mid-1980s, Professor Seed focused on accelerations at and near the crest, using a graph from his earlier work which, for a given earthquake magnitude, related displacement to the ratio of the yield acceleration to the maximum acceleration of the sliding element (Seed et al, 1985).

Among the empirical approaches drawn from field experiences is the Jansen equation (1990) for estimating settlement of an embankment subjected to acceleration greater than the yield threshold, where liquefaction is not involved:

[The Jansen equation is not reproduced in the available text; it relates settlement to the earthquake magnitude and to the yield and peak accelerations of the embankment.]

These are essentially the same basic parameters used in the approach by Seed et al in 1985, which, in the cases compared, has given estimated settlements approximating those obtained by the Jansen equation.

Extensive database

The database for the equation covers a wide range of quake magnitudes and frequency contents, as do several other methods based on field measurements (Swaisgood, 1995). Although effects of frequency of vibration and of foundation conditions are difficult to quantify, they are reflected in the envelope of recorded experience.

The available data on significant events, including short epicentral distances and Richter magnitudes as high as 8.2, show that no measured embankment dam crest settlement has been greater than 1.8m, except in a few slumps caused by liquefaction. This is less than the freeboard provided at many reservoirs to control flood and wave levels. The most settlement was observed at dams constructed at least partly on alluvium. Recorded earthquake-induced settlement of embankments founded entirely on rock has not exceeded about 0.9m.

The amplification of the foundation motion as it propagates to the embankment crest can also be taken from field observations for purposes of simplified analysis. As ground acceleration increases, the amplification can be expected to decrease because of increasing energy dissipation. This is seen in records which, for given peak base accelerations, show an upper limit of amplifications varying from about 5 times for 0.1g, to 2 times for 0.25g, to 1.3 times for 0.5g. Beneath this envelope, the data are widely scattered for peak ground accelerations lower than 0.25g. The less numerous records for higher ground accelerations show a definite convergence toward low amplifications.
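For a simplified check, those upper-bound amplifications can be interpolated directly. The Python sketch below merely reads the envelope described above (log-log interpolation between the three quoted points, with the amplification held at 1.3 above 0.5g); it is a reading of the recorded envelope, not a published formula.

```python
import math

# Upper-bound crest/base amplification envelope read from the field
# records cited in the text: (peak base acceleration in g, amplification).
ENVELOPE = [(0.10, 5.0), (0.25, 2.0), (0.50, 1.3)]

def bounding_crest_acceleration(base_g):
    """Interpolate the envelope (log-log) to bound crest acceleration."""
    if base_g <= ENVELOPE[0][0]:
        amp = ENVELOPE[0][1]
    elif base_g >= ENVELOPE[-1][0]:
        amp = ENVELOPE[-1][1]
    else:
        for (a0, m0), (a1, m1) in zip(ENVELOPE, ENVELOPE[1:]):
            if a0 <= base_g <= a1:
                f = (math.log(base_g / a0)) / (math.log(a1 / a0))
                amp = math.exp(math.log(m0) + f * math.log(m1 / m0))
                break
    return amp * base_g

print(f"0.7g base -> crest bounded near {bounding_crest_acceleration(0.7):.2f}g")
```

For the 0.7g ground motion discussed below, this envelope would bound the crest acceleration near 0.9g, far below the computed value of nearly 2g.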

Data offers guidance

The data on observed acceleration and deformation patterns provide some guidance in judging the validity of analytical procedures. While these records are limited, they are useful in examination of some of the calculations presented in the literature. For example, a reported finite-element analysis of a new 198m high rockfill dam with earth core indicated that a crest acceleration of nearly 2g and deformation of as much as 9m might result from a nearby M8 quake causing a peak ground acceleration of 0.7g at the damsite. Simplified methods based on real events suggest a less severe response.

In developing early concepts for earthquake analysis, Professor N M Newmark of the University of Illinois considered an idealised sliding block that would undergo total cumulative displacement equal to the sum of incremental movements that occur each time the dynamic impulse exceeds the resistance. Of course, the deforming element of an embankment under seismic vibration is different from a separate block sliding on a rigid inclined plane. As an integral part of the fill, it would be affected by multi-directional actuation and resistance, including boundary impact and drag, as well as by internal distortion and pore pressure changes. This complex behaviour cannot be represented with consistent accuracy by any mathematical model or equation presently available. The uncertainties call for bracketing the possible responses by parametric analysis, which in most cases will give a reasonable measure of an embankment’s capability.
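Newmark's idea reduces to a short numerical integration: whenever the input acceleration exceeds the yield acceleration, the block accelerates relative to its base, and the relative velocity is then integrated until resistance brings the block to rest. The minimal one-directional Python sketch below, using a rigid block and a synthetic input record, illustrates that bookkeeping only; it captures none of the multi-directional behaviour of a real fill described above.

```python
import math

def newmark_displacement(accel_g, dt, yield_g):
    """One-directional rigid-block Newmark integration.

    accel_g -- input acceleration history (fractions of g)
    dt      -- time step (s)
    yield_g -- yield acceleration of the sliding mass (fraction of g)
    Returns cumulative downslope displacement in metres.
    """
    g = 9.81
    vel = 0.0   # relative velocity of block (m/s)
    disp = 0.0  # cumulative relative displacement (m)
    for a in accel_g:
        if a > yield_g or vel > 0.0:
            # Block slides: net acceleration is input minus resistance;
            # when a < yield_g this is negative and the block decelerates.
            rel_acc = (a - yield_g) * g
            vel = max(vel + rel_acc * dt, 0.0)
            disp += vel * dt
    return disp

# Synthetic demonstration record: ten seconds of a decaying sinusoid.
dt = 0.01
record = [0.5 * math.exp(-0.2 * i * dt) * math.sin(2 * math.pi * 1.5 * i * dt)
          for i in range(1000)]
print(f"Cumulative displacement: {newmark_displacement(record, dt, 0.15):.3f} m")
```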

Embankment dam engineering practices need continual review. In the search for analytical improvements, methods should be used with caution until verified by field experience. Deficient input data may warp an analysis, whether it is performed by simple or complicated methods. Embankment engineering must be directed mainly to the protection of earthfill zones at their boundaries and at internal discontinuities, and to analysing potential dam performance in ways that are compatible with observations in situ.



