Dam construction started over 2000 years ago and has presented the engineering community with significant challenges throughout the twentieth century. Dams have been built all over the world to satisfy a range of human needs, and the acquired experience has led to improvements in analysis, testing, construction and monitoring techniques, with subsequent progress in dam safety.

Looking at the present stage of development, dam engineering has achieved a high level of sophistication. Continual advances in information and communication technologies have a profound effect on the way engineers tackle problems. Both the technology itself and the expectations of end-users are pushing towards new developments in engineering systems, making use of a variety of innovative tools.

This article takes a closer look at one such tool – expert system technology – which has already emerged in other domains and can be used to improve the overall cycle of dam safety control.

Current approaches to dam safety

A safe dam is one that provides people and property with an acceptable level of protection against its failure, or against overtopping without failure, and which meets the safety criteria set as standard by the engineering profession.

The safety control of a dam is defined as the set of measures taken at various stages of the dam’s life. These cover structural, hydraulic, operational and environmental aspects, with a view to maintaining an adequate and continuous knowledge of the state of the dam, the timely detection of any anomalies and effective intervention whenever necessary (RSB, 1990).

A systematic evaluation of the safety of the dam should be carried out throughout its whole life cycle. This can be done by means of a comprehensive inspection of the structure, involving the collection of all available dam records, field inspection, detailed investigations and possibly laboratory testing, followed by an assessment of its performance and a review of the original design and construction records to ensure that they meet current criteria. The level of detail required for the evaluation should be commensurate with the dam’s importance, design conservatism and complexity, as well as with the consequences of failure.

Dam monitoring plays a vital role in the concept of dam safety: it refers to the range of techniques and procedures applied in observing the behaviour of a structure from the time it is built, during its initial filling, throughout its useful life and, finally, in abandonment.

The provision of monitoring instruments is an accepted and, indeed, common practice for virtually all new dams. In old dams it is not unusual to find a lack of monitoring schemes but, even then, it is accepted that a basic level of instrumentation should be installed to monitor behaviour.

In the context of new dams, instrumentation data are interpreted in a dual role: to provide an indication of the validity of design assumptions and to determine an initial datum pattern of performance against which subsequent observations can be assessed. Thus, the primary functions of the monitoring instruments are quite distinct, depending upon the stage of the dam’s life (Novak et al, 1996):

  • Construction control: verification of critical design parameters with immediate looped feedback to design and construction.

  • Post-construction performance: validation of design; determination of initial or datum behavioural pattern.

  • Service performance/surveillance: reassurance of structural adequacy; detection of regressive change in established behavioural pattern; investigation of problems.

  • Research and development: academic research, equipment testing and development.

Automatic monitoring systems

The catastrophic failure of a dam due to reasons other than the direct result of an extreme event is invariably preceded by a period of progressive distress within the dam and/or its foundation. Dam surveillance schemes and instrumentation are intended to detect, and possibly to identify, symptoms of distress at the earliest stage.

Instruments strategically placed in a dam are not, themselves, a safeguard against serious incident or failure. Their prime function is to reveal abnormalities or adverse trends in behaviour, and so to provide early warning of possible distress. The number of instruments installed is of lesser importance than the selection of appropriate equipment, its proper installation at critical locations, and intelligent interpretation of the resulting data within an overall surveillance programme. The effectiveness of the latter is determined by many factors, including the legislative and administrative framework within which procedures and responsibilities have been established (Novak et al, 1996; Jansen, 1983).

The first attempts to implement automatic monitoring systems (AMS) on dams date back to the 1970s. At first, automation was restricted to the acquisition, transmission and display of data in a permanently manned operation room. Since then, the continuous advances in electronics and in information and communication technologies have led to further developments in this area.

Side by side with the improvements in sensor technology, the systems and methods for data collection and storage have also evolved. Thus, within a short time span, it became possible to implement a complete AMS process, including:

  • Automatic data acquisition system (ADAS) – which includes the sensors that can be remotely monitored, a data-logger for measurement acquisition, the connections between the sensors and the data-logger, and the required software.

  • Data transmission system (DTS) – which enables the data-logger to be remotely controlled and measurements to be transmitted to a processing station.

  • Data processing and management system (DPS) – which checks, prints and displays the data, processes them for anomaly detection, manages alarms, remotely monitors the sensors and transmissions and, finally, archives the data.

Data processing and management systems

Basically, data processing and management systems include two major stages:

  • Data checking, reduction and storage, including numerical or graphical output of the data.

  • Analysis and interpretation of the dam behaviour, including a comprehensive report on its status.

The first stage deals with a preliminary check on the raw values, eliminating erroneous data most probably caused by defects in the measuring instruments, reading errors, transmission errors and the like. Once the readings have been checked, data reduction follows – the raw data are transformed into equivalent engineering quantities – and the reduced values are in turn checked against pre-defined thresholds. Given the large amount of data produced by an AMS, this screening of incoming data is an important task. The stage closes with the storage of the accepted data.
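As a rough illustration of this first stage, the sketch below (in Python) shows the kind of screening a single reading might undergo: raw values outside the instrument range are rejected, accepted readings are reduced to engineering quantities, and the reduced values are checked against pre-defined thresholds. The instrument, calibration constants and limits are hypothetical, not taken from any real monitoring system.

  # Minimal sketch of first-level data screening in an AMS.
  # Instrument range, calibration and thresholds are illustrative only.
  from dataclasses import dataclass

  @dataclass
  class Instrument:
      name: str
      raw_min: float       # lower bound of the sensor's raw output
      raw_max: float       # upper bound of the sensor's raw output
      scale: float         # raw units -> engineering units
      offset: float
      threshold_lo: float  # pre-defined limits on the engineering quantity
      threshold_hi: float

  # Hypothetical plumbline channel: raw output in volts, displacement in mm
  PLUMBLINE_X = Instrument("plumbline_x", 0.0, 5.0, 10.0, -25.0, -8.0, 8.0)

  def screen_reading(inst: Instrument, raw: float):
      """Return (status, engineering_value)."""
      # 1. Preliminary check: reject readings outside the instrument range
      #    (sensor defect, reading error, transmission error, ...)
      if not inst.raw_min <= raw <= inst.raw_max:
          return "rejected", None
      # 2. Data reduction: convert the raw reading to an engineering quantity
      value = inst.scale * raw + inst.offset
      # 3. Check the reduced value against the pre-defined thresholds
      if not inst.threshold_lo <= value <= inst.threshold_hi:
          return "out of threshold", value
      return "accepted", value

  for raw in (2.40, 9.99, 3.60):
      print(raw, screen_reading(PLUMBLINE_X, raw))

In a scheme of this kind, only readings flagged as accepted would be archived directly, while out-of-threshold values would trigger the kind of assessment described later in the article.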

The second stage involves the analysis of the previously stored data and the interpretation of the dam’s performance, resorting to models conceived to represent the dam’s behaviour. This process permits a judicious assessment of the dam’s state, followed by adequate actions whenever needed. The automation of this stage is still a major challenge (Fanelli, 1992).

Looking towards new tools

Software products are growing in size and complexity to meet the increasing demands that users place on them.

The amount of code and data and the complexity of the software itself pose a significant management challenge for software engineers.

Artificial intelligence (AI) programs, based on the symbolic processing of information (as opposed to its numerical processing), are finding their way into real systems in a variety of problem areas. Successful AI systems that are transferable to different problem domains fall into several general categories. These include configuration, design, diagnosis, interpretation, analysis, planning, scheduling, intelligent interfaces, database intermediaries, natural language understanding, vision and automated programming. A number of misconceptions surround the field of AI, possibly because commercial AI is still in its infancy and unknown to most people. Rauch-Hindin (1988) offers a useful corrective:

  • Contrary to what many people think, AI is not a black art, nor even revolutionary. Instead, it is a software technique that is very ‘do-able’, provided that it is applied to the appropriate problem.

  • Programming is the simplest part of creating most AI applications. The more difficult task is finding some way to represent knowledge in a computer program. If the AI program is an expert system, an equally difficult task is getting the knowledge from the expert in the first place.

  • AI systems do not replace people. They augment them.

  • AI systems provide their economic leverage by performing the types of tasks that occupy a high percentage of highly paid people’s time, rather than by handling ‘far-out’ tasks that are a figment of science fiction. In fact, AI systems cannot perform tasks or solve problems that humans do not know how to solve.

  • AI programs can be written in a conventional programming language.

Most people solve problems by mixing vast quantities of problem-specific knowledge with common sense. Few solve problems using generalised algorithms or by trying every possible solution. Instead, people have ways of selecting ‘important’ information, of finding similarities to problems they have solved before, or of using methods that led to success in the past. The goal of expert systems – also known as knowledge-based systems – and other AI techniques is to express these problem-solving methods in a computer language in a concise, unambiguous and efficient manner. The assumption is that without these techniques a large class of problems could not be solved computationally. For such problems the knowledge, facts and strategies are central to the solution, and the programming goal is therefore to encode and properly apply this knowledge (Nisenfeld, 1989).

Why expert system technology?

Expert systems can be viewed as programs that use models of some real world domain. One of the driving needs for expert system development is to capture critical and scarce expertise and distribute it throughout the organisation. Human expertise is a scarce commodity and expert systems can provide an accessible, available alternative that can also be used as a training tool for beginners.

Engineers responsible for a new dam have opportunities to know its foundation and materials, and to determine and execute their treatment, processing and placement. They know where the site and the structure are strong and where they are weak. For the sake of later analysis, their knowledge and its limits must be thoroughly preserved. Otherwise, as those engineers responsible for the dam age with the structure, crucial knowledge about the dam will be lost. Indeed, an old dam that has outlived its creators may be a puzzle to those who see it for the first time (Jansen, 1983).

The knowledge of an engineer is often characterised as ‘know-how’. To solve a particular problem, engineers make use of scientific knowledge, which relies on general models with a high degree of abstraction and a wide range of application. On the other hand, engineers often draw on their experience and common sense by using ‘rules of thumb’, or heuristics. For the engineer, experience and judgement must take over when scientific knowledge is lacking or falls short.

Expert systems differ from other, more traditional, computer programs because their reasoning is not straightforward. Their tasks have no practical algorithmic solutions and they must often reach conclusions based on incomplete, judgmental, speculative, uncertain or fuzzy information. To reason like a human being, expert systems rely not only on factual knowledge, as conventional programs do, but also on uncertain knowledge and observations based on experience and intuition (collectively called heuristics). The facts and heuristics are extracted from experts in a specialised subject area. They are then coupled with methods of analysing, manipulating and applying the encoded knowledge so that the program can make inferences and explain its actions.

Up to now dam safety assessment has relied heavily on the skills and experience of dam engineers. The expertise accumulated by those engineers over the years is not always easily transferred to new staff and may not even be transferable through written reports. Expert system technology has an important and growing role here as one of the tools available to dam engineers for the management of safety control (Comerford et al, 1992).

Dam safety

Dam safety assessment relies heavily on models, field evaluation and past experience and expertise. Often, the amount of data about a dam and the knowledge necessary for identifying and processing that data are such that it is not feasible to capture them all within a single repository, be it a human expert or a computer system. The problem then is to select and judiciously implement what is most relevant and, ideally, what is strictly relevant to the problem at hand. Here expertise plays a major role in identifying what is needed because, by its very nature, expertise defines the empirical associations between components of knowledge and the heuristic rules and procedures required to manipulate them (Franck and Krauthammer, 1989).

Basically, three different levels of knowledge must be encapsulated within a safety control system:

  • General background knowledge – in the framework conceived here, this corresponds to general scientific knowledge, such as structural mechanics, hydraulics and geotechnical engineering.

  • Specific knowledge of the dam – each dam is a unique system, and its characteristics must be identified to build up the specific knowledge about it.

  • Expert knowledge – a fundamental part of the system, combining the general background knowledge and the specific knowledge about the dam to enable a reasoning process leading to an overall evaluation of the whole system (see the sketch after this list).
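Purely as an illustration of how these three levels might fit together, the sketch below encodes a simplified uplift-pressure check in Python: a generic engineering relation stands for the background knowledge, a few parameters describing one particular dam stand for the specific knowledge, and a heuristic rule combining the two stands for the expert knowledge. All names, values and the relation itself are invented for the example.

  # 1. General background knowledge: a generic (and here deliberately
  #    simplified) relation - uplift head downstream of the drainage
  #    curtain expressed as a fraction of the reservoir head.
  def expected_uplift(head_m: float, drain_efficiency: float) -> float:
      return head_m * (1.0 - drain_efficiency)

  # 2. Specific knowledge of the dam: parameters of one particular
  #    structure (purely illustrative values).
  DAM = {
      "name": "example arch dam",
      "drain_efficiency": 0.67,      # assumed efficiency of the drainage system
      "uplift_alarm_margin_m": 5.0,  # tolerance accepted for this dam
  }

  # 3. Expert knowledge: a heuristic rule combining the two levels.
  def uplift_rule(measured_uplift_m: float, head_m: float) -> str:
      expected = expected_uplift(head_m, DAM["drain_efficiency"])
      if measured_uplift_m > expected + DAM["uplift_alarm_margin_m"]:
          return "uplift higher than expected - suspect loss of drainage efficiency"
      return "uplift consistent with design assumptions"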

The figure above illustrates the whole cycle of dam safety assessment. The surveillance process starts with data acquisition on site, by either manual or automatic means. The original raw data undergo a first-level check against basic parameters – such as the instrumentation range – and the sensor readings are translated into engineering quantities. At a second level, these ‘raw’ engineering quantities are validated against pre-defined mathematical models. Once the data are considered reliable, the actual safety assessment may take place. Traditionally, this last task has been carried out by engineers who have accumulated a great deal of experience in dam engineering, and more specifically with each particular dam being assessed.

The automation of the third level of the safety assessment is a major challenge to the engineers and managers who deal with safety issues. The use of AI technology is a step forward in the automation process.

The safety assessment performed by an automatic system must be activated whenever an observed quantity falls outside the expected range, ie outside pre-established threshold limits. Defining such expected ranges requires knowledge of how a specific structure will respond to a given combination of loads, such as environmental actions.

Expected values are defined by means of a reference model, which may be either stochastic or deterministic in nature. In the first case, the model is based on knowledge of the history of the dam, which implies having a set of reliable data covering a considerable number of years. In the case of a deterministic model, the expected response is derived from a discrete model (eg a finite element model), which requires a precise definition of the structure’s geometry and appropriate rheological laws for the materials involved (Bonaldi et al, 1982).

Whenever the difference between the measured value and the forecast reference value exceeds the fixed limits, the change in the dam’s behaviour must be explained. A deviation from a statistical model might lead to the conclusion either that the dam has departed from its expected behaviour or that an abnormal loading condition, not experienced in the past, has occurred. A deviation from a deterministic model, on the other hand, might be explained by the fact that the set of constitutive laws and hypotheses used to build the model no longer applies, or by a change in the structural properties.
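A minimal sketch of such a check is given below, assuming a purely statistical reference model of the classical hydrostatic-season type, fitted by least squares to a history of reliable measurements; the regression terms, variables and the three-sigma tolerance are placeholders rather than values or choices taken from any real dam.

  # Sketch of a statistical reference model and the deviation check
  # described above. All quantities are illustrative placeholders.
  import math
  import numpy as np

  def design_matrix(level, day_of_year):
      """Regression terms: polynomial in reservoir level plus seasonal harmonics."""
      w = 2.0 * math.pi * day_of_year / 365.0
      return np.column_stack([np.ones_like(level), level, level**2,
                              np.sin(w), np.cos(w)])

  def fit_reference_model(level, day_of_year, observed):
      """Least-squares fit to past records of a monitored quantity."""
      X = design_matrix(level, day_of_year)
      coeffs, *_ = np.linalg.lstsq(X, observed, rcond=None)
      sigma = (observed - X @ coeffs).std()
      return coeffs, sigma

  def check_measurement(coeffs, sigma, level, day_of_year, measured, k=3.0):
      """Flag the reading if it deviates more than k standard deviations
      from the value forecast by the reference model."""
      expected = float(design_matrix(np.array([level]),
                                     np.array([day_of_year])) @ coeffs)
      deviation = measured - expected
      return abs(deviation) > k * sigma, expected, deviation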

Once an abnormal behaviour is detected it needs to be explained and corrective actions immediately proposed.

Diagnostic reasoning and monitoring tasks are domains where both prediction and explanations are needed, and these two characteristics rely on a good understanding of the cause-effect relationship.

The causal network concept (see panel) is helpful when organising the knowledge accumulated by experts. Knowledge gathered by specialists might be very broad and must be organised hierarchically. For instance, the knowledge related to an arch dam might differ a great deal from that related to a gravity dam. Each dam is a unique structure – its site, materials and foundation have their own distinct features.

Software components

Software systems for structural monitoring activities should enable access to all information available and provide support for a complete assessment of structural safety. In fact, the information gathered during the dam’s whole life should be as comprehensive as possible. However, retrieving specific information for a selective study in an efficient manner may be a hard, if not impossible, task.

The answer lies in the combined use of tools, techniques and methodologies for managing the development process and structuring the data flow so as to make it manageable. The management and reuse of data is a topic that is widely understood and for which many techniques and products have been recently developed. Database management systems are, of course, the primary tool used to keep data independent of programs and to share data across different users.

User-friendliness is the key attribute of successful software tools and products. End-users demand it, look for it and are willing to pay handsomely for it. The easier a product is to use the more valuable it is to an organisation because people, regardless of their intrinsic skills, can achieve more in less time.

In developing an expert system to assess the safety of a dam, it is necessary to resort to an information system in which the relevant information is readily available. The figure above depicts, in very broad terms, the overall system architecture framing the developed knowledge-based system.

A central module – a small program in Visual Basic – carries out the interaction among the various components and development platforms available to the user. Whenever required, the user may transfer control from the central module to any other component – to perform the required action – and back to it when the action is completed.

Given the prototypical nature of the venture described in this article (Portela, 1999), the hardware/software platforms traditionally established in this domain and the identified knowledge representation and inference requirements, the first implementation of the system was based on a commercial shell for stand-alone use – in this case Intellicorp’s KAPPA-PC.

Data management

The data is managed by the master module – CASTOR – and includes data about design, construction, operation, geology, geotechnics, hydrology, appurtenant works, foundation characteristics, materials, dam features, water characterisation, observation plans and also the relevant legislation, documentation, drawings and photos.

The expert system SISAS is activated when a consistency problem is detected in the selected control variables, which are tested against threshold values previously defined by reference models. The evaluation process implemented in SISAS resorts to a set of causal networks and pre-defined scenarios to deliver a diagnosis of the dam’s state, identifying the most probable scenario related to the measurement(s) showing an abnormal value.
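The sketch below is not the SISAS implementation itself (which was built on the KAPPA-PC shell); it is only a rough Python illustration of scenario-based evaluation, in which each candidate scenario lists the symptoms expected to confirm it and the scenario best supported by the observed evidence is put forward as the diagnosis. The scenario names and symptoms are simplified examples.

  # Rough illustration of scenario-based diagnosis (not the actual SISAS code).
  # Each scenario lists the symptoms expected to accompany it; the diagnosis
  # is the scenario whose symptoms are best supported by the evidence.
  SCENARIOS = {
      "clogging of drain holes": {
          "uplift pressure rising",
          "drained flow decreasing",
          "no significant change in reservoir level",
      },
      "erosion/solution in the foundation": {
          "drained flow increasing",
          "dissolved or suspended material in drainage water",
      },
      "abnormal joint movement": {
          "joint opening beyond seasonal range",
          "horizontal displacement trend",
      },
  }

  def diagnose(observed_symptoms):
      """Rank scenarios by the fraction of their symptoms that are observed."""
      scores = {name: len(symptoms & observed_symptoms) / len(symptoms)
                for name, symptoms in SCENARIOS.items()}
      best = max(scores, key=scores.get)
      return best, scores

  evidence = {"uplift pressure rising", "drained flow decreasing"}
  print(diagnose(evidence))  # -> ('clogging of drain holes', {...})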

Vilarinho das Furnas dam, a double curvature asymmetrical arch dam in Portugal, was chosen as the case study where the expert system prototype would be tested and calibrated. The system installed in the Vilarinho das Furnas dam monitors reservoir level; air and concrete temperatures; uplift pressure; concrete deformability; horizontal and vertical displacements of dam body and foundation; movements of discontinuities; strain and stress in concrete; foundation strain and seepage. Temperature measurements and physical-chemical analysis of the reservoir water are carried out regularly. A total of 557 instruments are installed.

It is well known that in the normal operation phase the surveillance of a dam’s performance is based on a limited subset of instruments. Emphasis must be placed on carefully selecting the parameters that represent the overall dam behaviour. Priority must be given to instruments that can supply integral information, such as the pendulum, which can provide valuable information covering a significant area of the dam.

The set of instruments selected in Vilarinho das Furnas as main indicators for automatic control of the dam behaviour comprises: plumblines, rockmeters, joint meters, piezometers and drains.

The present example identifies a scenario related to the dam foundation, namely the ageing of the drainage system (ICOLD, 1994). The three main causes of deterioration in the drainage systems of dams are inadequate design or poor quality of construction; climatic conditions; and clogging of drain holes.

During the processing of SISAS a series of interfaces is presented to the user, allowing complete interaction with the system – for instance, interfaces showing the trends observed in the drainage system and the piezometers, and the results of the analysis of visual inspection data. The main data on design, construction, reservoir filling, geology/geotechnics, main actions, reservoir water and concrete are also presented. These data form part of the context that supports the system in its reasoning process and in the search for the appropriate scenario.

The final diagnosis is presented to the user either through interactive interfaces or through written reports. Once the scenario is identified, the system provides the user with recommendations to solve the problem as well as possible explanations. This characteristic of expert systems is of the utmost importance – users will find support in such systems only if they can understand why a given diagnosis has been formulated.

As a consequence of drain-hole clogging, uplift pressure may increase and the stability of the structure might be endangered, in which case immediate remedial measures need to be taken. Maintenance of the drainage system is therefore essential: drain holes must be cleaned regularly to prevent clogging. It should also be noted that this scenario may occur concurrently with other scenarios of foundation deterioration, such as erosion and solution. As an expert dam engineer would most probably do, the system also verifies all possible causes and related scenarios.

Dam expert versus expert system

A key issue when developing a knowledge-based system is the need to calibrate and validate the system, which has to be done by evaluating its performance against that of human experts.

Since this system addresses the identification of failure scenarios in large dams, one cannot simply wait for a failure to occur at the Vilarinho das Furnas dam to endorse the effort, so it was decided that a previous incident would be used to evaluate the system’s behaviour. As Vilarinho das Furnas has fortunately not been a particularly problematic large arch dam, an abnormal uplift pressure problem – the most significant problem observed – was selected for the validation exercise.

In 1983 abnormally high uplift pressure values were identified in the piezometers installed in the central zone of the Vilarinho das Furnas dam foundation, while the data related to water seepage in the foundation showed a decreasing trend over time. Specialist dam engineers conducted a thorough analysis of the situation. The recommendations derived from the studies included thorough cleaning – or redrilling, if necessary – of the drainage system.

This situation was simulated in the expert system, which identified the scenario of clogging of the drainage system and recommended a drain cleaning operation. The system was also tested against other scenarios for the dam, and it reached the same conclusions as those achieved by the human dam experts.

Conclusions

Dams, just like any manmade structure, age. And with increasing age comes the potential for deterioration. Improvements in technical standards and in information and communication technologies are being made all the time. Such developments create the need to continuously upgrade and refine existing safety control procedures, methods and even philosophy.

It has been shown in this article that surveillance can still be improved and can go beyond the minimum that is required. Utilising innovative tools can make important contributions to improving the overall safety control assessment of dams.

Automation of the complete safety control cycle is still a major challenge and may offer advantages over more traditional procedures. But it should be remembered that these systems are meant to support dam managers in their decision-making, and are not to be seen as a replacement for sound human and engineering judgement.

Causal networks…

The set of causal networks developed in this project resembles the reasoning process of a dam expert when presented with evidence of anomalies in the monitored quantities. The main causal networks developed are related to drainage, uplift pressure, horizontal and vertical displacements and joint movements.
A scenario is defined here as any situation that must be considered to understand the dam’s behaviour and evaluate the safety conditions of the works. A scenario may be associated with normal operating conditions or with an exceptional occurrence, which means that a scenario is not necessarily associated with abnormal behaviour of the structure. Three groups of scenarios were defined: general scenarios (related mainly to the loads, design and construction), scenarios related to the dam foundation and scenarios related to the dam body.
For each scenario a set of symptoms is established to confirm the relation between the abnormal measurement and the identified scenario, which will lead to a diagnosis. The set of conditions established for each scenario is based on cause-effect relationships: a given cause may affect the properties of the materials and be reflected in the monitored quantities in a specific way.
Each scenario is also linked to more specific symptoms, which were organised in 11 groups related to: drainage, uplift pressure, displacements, joint movements, strain/stress, visual inspection, design/construction, loads, reservoir, concrete, and geology/geotechnics.
To support the identification of the appropriate scenario, the system resorts to correlation matrices representing the interaction among the measured quantities (effects), the actions and the scenarios. These are built on the basis of qualitative and subjective observations, supported by the acquired engineering knowledge.
The scenarios are linked to a set of symptoms and evidence that will support the system in the presentation of the final diagnosis of the dam’s state.
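Purely as an illustration of the correlation-matrix idea – the entries below are invented, not taken from the project – such a matrix can be encoded as a table of qualitative weights linking observed effects to candidate scenarios, so that the gathered evidence can be combined into a score for each scenario.

  # Invented example of a qualitative correlation matrix: rows are observed
  # effects, columns are candidate scenarios, entries are qualitative weights
  # (2 = strong association, 1 = weak, 0 = none).
  EFFECTS = ["uplift increase", "drained flow decrease", "crest displacement trend"]
  SCENARIOS = ["drain clogging", "foundation erosion", "concrete swelling"]

  MATRIX = [
      # clogging  erosion  swelling
      [2,         1,       0],   # uplift increase
      [2,         0,       0],   # drained flow decrease
      [0,         1,       2],   # crest displacement trend
  ]

  def score_scenarios(observed):
      """Sum, for each scenario, the weights of the effects actually observed."""
      scores = dict.fromkeys(SCENARIOS, 0)
      for i, effect in enumerate(EFFECTS):
          if effect in observed:
              for j, scenario in enumerate(SCENARIOS):
                  scores[scenario] += MATRIX[i][j]
      return scores

  # Evidence pointing to the drainage system:
  print(score_scenarios({"uplift increase", "drained flow decrease"}))
  # -> {'drain clogging': 4, 'foundation erosion': 1, 'concrete swelling': 0}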