
SETNE relies on the Topkapi supervisor for its thermal power plant in Moselle

Industry

SETNE, a subsidiary of Charbonnages de France (CDF), operates the Emile Huchet thermal power plant in Moselle, in the east of France. With an electrical capacity of 1,200 MW, the equivalent of a nuclear power plant unit, the plant uses pulverised coal as its primary energy source.

Location: Moselle, France

The installations of the thermal power plant are divided into two parts: production itself (burners and generators) and handling/supply. The production equipment is controlled through a DCS (Distributed Control System). Until 1994, control of the handling/supply part was ensured by a system based on redundant SOLAR computers.


The reliability of the system is fundamental: the supply of electricity to EDF is scheduled by contract, and failure to meet the forecasts incurs very heavy penalties. Faced with the risks posed by the obsolescence of the SOLAR system (availability of spare parts, loss of skills), the plant managers decided to set up a redundant supervision system on PCs running Windows, so as to rely on a widespread and scalable standard.


Securing a supervision system by redundancy


To secure a system by redundancy, one can, as is still often the case today, simply juxtapose two independent systems operating in parallel. In practice, this means installing two independent server stations for data processing. The operator workstations, commonly referred to as client workstations, normally address the main server station, which is responsible for data processing.


This is what happens when the main station fails: 

  • A number of manual operations must be performed on the client workstations so that they address the secondary server instead of the main one.
  • The data on the secondary station is not perfectly up to date. Only the data from the PLCs is current, not the data specific to supervision: fault acknowledgement information, internal setpoints (e.g. fault-triggering thresholds), local processing commands, etc.
  • History data are not perfectly identical: apart from the secondary station not holding all the supervision-internal information, the asynchronism of the communications and clocks means that the same events can be time-stamped with slightly different values (see the sketch after this list).
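
To make these drawbacks concrete, here is a minimal sketch, in Python with purely hypothetical names (this is neither the SOLAR system nor Topkapi): two independent servers each time-stamp the same PLC events with their own clock, client stations are statically bound to the primary, and the supervision context exists only on the server the operator actually addressed.

```python
import time

# Minimal, hypothetical model of "cold" redundancy: two independent
# servers, each polling the PLCs and time-stamping with its own clock.

class SupervisionServer:
    def __init__(self, name):
        self.name = name
        self.history = []    # events, time-stamped locally
        self.context = {}    # acknowledgements, internal setpoints...

    def record_plc_event(self, event):
        # Each server stamps the same PLC event with its *own* clock,
        # so the two histories never match exactly.
        self.history.append((time.time(), event))

    def acknowledge_fault(self, fault_id):
        # Supervision-specific context only exists on the server the
        # operator actually addressed.
        self.context[fault_id] = "acknowledged"

primary = SupervisionServer("primary")
secondary = SupervisionServer("secondary")

# Client stations are statically bound to the main server.
client_targets = {"operator_1": primary, "operator_2": primary}

def on_primary_failure():
    # Manual intervention: every client must be re-pointed by hand, and
    # the primary's context (acknowledgements, setpoints) is unavailable.
    for station in client_targets:
        client_targets[station] = secondary
```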


When normal operation is restored on the primary server, the history information stored on the secondary station during the failure period is not available. The internal variables specific to supervision (the context) are not restored either.
It is also worth mentioning the difficulty of maintaining two identical systems in parallel: changes most often have to be copied manually from one system to the other, which very frequently generates errors.


Such systems, which can roughly be described as cold redundancy, already have the enormous advantage of not leaving operators completely "in the dark" when the main station fails: a back-up tool is available to "see and act" on the process. Unfortunately, this does not guarantee that all operations are carried out with full knowledge of what has been done before, and the continuity of the recordings (traceability) is not properly ensured.


This is what led those in charge to impose strong technical constraints at the time of the 1994 renewal:

  • Implementation of two redundant communication networks between the supervision system and the PLCs, with clearly separated cable routing;
  • Unicity of the data: a single source of information, valid at time T, is used to generate the data broadcast to the redundant components;
  • Unicity and completeness of the application context backup: current statuses, setpoints and acknowledgements kept permanently up to date on the stations;
  • The uniqueness of the context must not be obtained by shifting the supervision variables into the PLCs: when uniqueness is ensured by placing these variables in the PLCs, changes in the supervision application may require changes to the PLC programs, which must then be reloaded, causing production stoppages (a sketch of these constraints follows the list).
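
As an illustration of the "unicity of data" and "unicity of context" constraints, the following conceptual Python fragment (hypothetical structure, not AREAL's implementation) shows a single source producing each time-stamped record exactly once and broadcasting it, together with the supervision context, to both stations, so that nothing has to be moved into the PLC programs.

```python
import time

# Conceptual sketch of the "unicity" constraints: the active station
# produces each record exactly once and broadcasts the result, so the
# redundant components always hold identical histories and context.

class RedundantPair:
    def __init__(self):
        self.histories = {"active": [], "standby": []}
        self.contexts = {"active": {}, "standby": {}}

    def process_plc_value(self, tag, raw_value):
        record = (time.time(), tag, raw_value)   # stamped exactly once
        for side in self.histories:
            self.histories[side].append(record)  # same record everywhere

    def acknowledge_fault(self, fault_id):
        # Supervision context is replicated rather than living on
        # whichever server the operator happened to address.
        for side in self.contexts:
            self.contexts[side][fault_id] = "acknowledged"

pair = RedundantPair()
pair.process_plc_value("burner_1_temp", 742)
pair.acknowledge_fault("fault_17")
```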

     

Topkapi, the right solution for this supervision system


The choice then fell on AREAL's Topkapi software platform, which met the technical requirements while offering very simple implementation principles:

  • centralised configuration from a single workstation;
  • distribution of processing between different server workstations that back each other up;
  • redundancy parameterisation limited to declaring, for each PLC, a main and a secondary processing station; all the rest of the application is configured as a normal single-user application (see the sketch after this list);
  • automatic merging of historical and context data when returning to normal operation;
  • automatic switchover of operator stations to the active server (for the operator, the change of server is completely transparent).
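
A minimal sketch of this declarative principle, with hypothetical names and syntax (this is not Topkapi's actual configuration format): the only redundancy-specific parameter is, for each PLC, which station normally processes it and which one takes over.

```python
# Hypothetical illustration of the per-PLC declaration described above;
# NOT Topkapi's configuration syntax, only the idea behind it.

PLC_ASSIGNMENTS = {
    # PLC name:        (main processing station, secondary station)
    "burners_plc_1":   ("server_a", "server_b"),
    "handling_plc_3":  ("server_a", "server_b"),
    "ash_drying_plc":  ("server_b", "server_a"),
}

def processing_station(plc, failed_servers=()):
    """Return the station that should process a given PLC right now."""
    main, secondary = PLC_ASSIGNMENTS[plc]
    return secondary if main in failed_servers else main

# Normal operation, then behaviour after the loss of server_a:
print(processing_station("burners_plc_1"))                               # server_a
print(processing_station("burners_plc_1", failed_servers=("server_a",))) # server_b
```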

 
Contrary to what one might think, the extra cost of hot redundancy respecting these principles is very limited compared with the cold redundancy described above. In both cases, most of the extra cost compared with a non-redundant solution comes from the need to install a second supervision station. Recent applications have shown that it is more economical to install hot redundancy than to use a single-user system based on a secured PC with RAID technology. Maintenance is itself made easier by the fact that the supervision application is seen as a single application, with no need to administer two mirror instances of a conventional application.


Another important aspect of the project was that the supervision system put in place had to allow the old and new supervision systems to operate in parallel, with the Topkapi software placed as a "spy" on the existing communication network. This technique, well mastered by AREAL, is also used on smaller projects; probationary test phases can thus be carried out with complete peace of mind. A conceptual sketch of the idea follows.
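
The fragment below sketches the "spy" idea under simplified, hypothetical assumptions (abstract frame fields, not a real Modbus or Ethway decoder): during the test phase, the new supervisor refreshes its tag table only from request/response pairs already exchanged between the old system and the PLCs, and never transmits anything itself.

```python
# Hypothetical sketch of passive ("spy") acquisition: tags are refreshed
# from traffic observed on the existing bus; the spy never sends a
# request, so the behaviour of the old system is left untouched.

tag_table = {}   # register address -> last observed value

def on_observed_exchange(request, response):
    """Update tags from a request/response pair seen on the existing network."""
    start = request["start_register"]
    for offset, value in enumerate(response["register_values"]):
        tag_table[start + offset] = value

# Example of one exchange captured between the old supervisor and a PLC:
on_observed_exchange(
    {"start_register": 100},
    {"register_values": [742, 0, 1]},
)
print(tag_table)   # {100: 742, 101: 0, 102: 1}
```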
 

 

An evolving and sustainable supervision system

 

The system originally set up at the Emile Huchet plant consisted of 9 Topkapi operator stations under Windows (PCs with 486 processors), 13 PLCs and 8,000 variables. It has since evolved on a regular basis. It now has 16 stations; the Ethway protocol on Ethernet was chosen to replace Modbus for the redundant PLC network; and a fibre optic network was installed across this extensive (approximately 2 km x 2 km) and electrically very noisy industrial site. This redundant network carries PLC data and inter-PC links on the same medium, and also allows centralised PLC programming and configuration of the Topkapi stations (the concept of a single application with distributed and redundant processing).


All operations are now supervised from the control room, including the installations commissioned since 1994: ash drying and the Composite Product Preparation Unit (UPPC). The latter unit recovers combustion ash (capacity of 800 tonnes/day) to prepare cement, road products, limestone soil improvers for agriculture, and other products. In this installation, the Topkapi software controls the mixing, weighing, storage and shipping equipment. Specific applications have been developed to meet the particular needs of recipes and shipments: Topkapi's openness makes it easy to graft on complementary processing.

 

Today, the supervision system of this plant is perfectly maintained and in phase with more recent systems. We sometimes tend to forget this, but it is not enough to choose software that works at a given moment; one must also take into account subsequent maintenance costs ("cost of ownership") and the ability to evolve without calling previous investments into question (upward compatibility).