N.L. Vijaykumar, S. Stephany, H.F. de Campos Velho, A.J. Preto, A.G. Nowosad (2002): A Neural Network Implementation for Data Assimilation using MPI, HPC-2002 (International Conference on Applications of High-Performance Computers in Engineering), 23-25 September, Bologna, Italy.

Abstract: Data assimilation is a procedure that uses observational data to improve the prediction made by an inaccurate mathematical model, as in numerical weather prediction, air-quality problems, and numerical oceanic simulation. For atmospheric continuous data assimilation there are many deterministic and probabilistic methods. Deterministic methods include dynamic relaxation, variational methods, and the Laplace transform, whereas probabilistic methods include optimal interpolation and Kalman filtering. Dynamic relaxation assumes the prediction model to be perfect, as does the Laplace transform. Variational methods and optimal interpolation can be regarded as minimum-mean-square estimation of the atmosphere. In Kalman filtering, the analysis innovation is computed as a linear function of the misfit between observation and forecast. The use of a multilayer perceptron neural network was proposed to emulate the Kalman filtering method, aiming at a reduction of the processing time. The training phase of this neural network is controlled by a supervised learning algorithm; adjustment of the network weights is conducted by a backpropagation algorithm. Classical, hardware-independent optimizations were performed on the sequential code and led to a significant reduction in processing time for a given set of parameters; Fortran 90 language intrinsics eliminated inefficient hand-coded subroutines. A former attempt to parallelize the code and run it on a 4-processor shared-memory machine made use of HPF (High Performance Fortran) directives embedded in the optimized code. This work presents an attempt to parallelize the code through a message-passing paradigm, specifically the MPI (Message Passing Interface) standard. Calls to the MPI communication library were embedded in the optimized code in order to assign chunks of data to individual processors.
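The analysis step mentioned above, in which the innovation is a linear function of the misfit between observation and forecast, can be sketched with a minimal scalar Kalman update. This is an illustration only, not the paper's code (which is Fortran 90), and all names below are hypothetical:

```python
# Minimal scalar Kalman filter analysis step: the analysis increment
# (innovation) is a linear function of the misfit between observation
# and forecast. Variable names are illustrative, not from the paper.

def kalman_analysis(x_forecast, p_forecast, y_obs, r_obs):
    """Return the analysis state and its error variance.

    x_forecast : forecast (background) state
    p_forecast : forecast error variance
    y_obs      : observation of the state
    r_obs      : observation error variance
    """
    # Kalman gain weights the misfit by the relative error variances.
    gain = p_forecast / (p_forecast + r_obs)
    # Analysis = forecast + gain * (observation-minus-forecast misfit).
    x_analysis = x_forecast + gain * (y_obs - x_forecast)
    # Assimilation reduces the analysis error variance.
    p_analysis = (1.0 - gain) * p_forecast
    return x_analysis, p_analysis

# Example: forecast 10.0 (variance 4.0), observation 12.0 (variance 1.0).
xa, pa = kalman_analysis(10.0, 4.0, 12.0, 1.0)
# gain = 0.8, so xa = 11.6 and pa = 0.8
```

The proposed neural network emulates precisely this input-output mapping, trading the explicit gain computation for a trained approximation in order to reduce processing time.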
In addition, embedding HPF directives in the MPI version is expected to further improve the code's performance.
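The data decomposition described above, assigning chunks of data to individual processors, typically reduces to block-partitioning arithmetic computed per MPI rank. A minimal sketch of that arithmetic follows (pure Python for illustration; the actual code is Fortran 90 with MPI library calls, and the function name is hypothetical):

```python
# Sketch of block-partitioning n data items over p ranks, the kind of
# chunk assignment an MPI code performs before scattering data.
# Remainder items go to the first (n % p) ranks, so chunk sizes
# differ by at most one. Names here are illustrative only.

def chunk_bounds(n_items, n_ranks, rank):
    """Return the [start, stop) index range owned by `rank`."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 10 grid points over 4 ranks -> chunk sizes 3, 3, 2, 2.
bounds = [chunk_bounds(10, 4, r) for r in range(4)]
# bounds == [(0, 3), (3, 6), (6, 8), (8, 10)]
```

In an MPI code these bounds would drive the counts and displacements passed to a scatter/gather of the state and observation arrays, with each processor running the analysis on its own chunk.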