ABSTRACT

This workshop is part of the collaborative project between CNPq (Brazil) and Inria (France), which involves Brazilian and French researchers in the fields of computational science and scientific computing. The general objective of the workshop is to set up a Brazil-France collaborative effort to take full advantage of future high-performance massively parallel architectures in the context of very large-scale datasets and numerical simulations. To this end, the workshop proposes multidisciplinary lectures ranging from exploiting massively parallel architectures with high-performance programming languages, software components, and libraries, to devising numerical schemes and scalable solvers for systems of differential equations.

SUMMARY OF THE PROJECT

The prevalence of modern multicore technologies has made massively parallel computing ubiquitous and offers a huge theoretical potential for solving a multitude of scientific and technological challenges. Nevertheless, most applications and algorithms are not yet ready to utilize available architecture capabilities. Developing large-scale scientific computing tools that efficiently exploit these capabilities will be even more challenging with future exascale systems. To this end, a multi-disciplinary approach is required to tackle the obstacles in manycore computing, with contributions from computer science, applied mathematics, and engineering disciplines.

Such is the framework of the collaborative project between CNPq and Inria, which involves Brazilian and French researchers in the fields of computational science and scientific computing. The general objective of the project is to set up a Brazil-France collaborative effort to take full advantage of future high-performance massively parallel architectures in the context of very large-scale datasets and numerical simulations. To this end, the project has a multidisciplinary team with computer scientists, who aim at exploiting massively parallel architectures with high-performance programming languages, software components, and libraries, and numerical mathematicians, who aim at devising numerical schemes and scalable solvers for systems of Partial Differential Equations (PDEs). The driving applications are related to scientific questions of importance to society in the following four areas: (i) Resource Prospection, (ii) Reservoir Simulation, (iii) Cardiovascular System, and (iv) Astronomy.

The researchers are organized into three fundamental groups in this project: (i) Numerical schemes for PDE models; (ii) Scientific data management; (iii) High-performance software systems.

Aside from its research goals, the project aims at making the overall scientific results it produces available to the Brazilian and French scientific communities as well as to graduate students, and at establishing long-term collaborations beyond the current project. To this end, another objective of the project is the integration of these scientific results within a common, user-friendly computational platform deployed over the partners' HPC facilities and tailored to the four aforementioned applications.

PRACTICAL INFORMATION

The Third Brazil-France workshop will take place at the Inria Bordeaux - Sud-Ouest Research Center (200 avenue de la Vieille Tour - 33405 Talence Cedex). The research center is about 15 km from the Bordeaux-Mérignac airport.

PARTICIPANTS


Brazilian Participants


Resource Prospection

Pedro Dias (LNCC)
Alvaro Coutinho (High Performance Computing Center and Department of Civil Engineering, COPPE/UFRJ)
Renato Elias (High Performance Computing Center and Department of Civil Engineering, COPPE/UFRJ)
Marta Mattoso (Computer Science Department, COPPE/UFRJ)
Philippe Navaux (UFRGS)
Rodrigo Kassik (Ph.D. Student at UFRGS)
Josias Silva (Laboratory for Computer Methods in Engineering and High Performance Computing Center and Department of Civil Engineering, COPPE/UFRJ)




Reservoir Simulation

Alexandre Madureira (LNCC)
Frédéric Valentin (LNCC)
Diego Paredes (Post-doctoral researcher at LNCC)
Benaia Lima (Multidisciplinary Institute, UFRRJ and High Performance Computing Center and Department of Civil Engineering, COPPE/UFRJ)




Cardiovascular System

Antonio Tadeu Gomes (LNCC)
Bruno Schulze (LNCC)
Bernardo Gonçalves (Ph.D. student at LNCC)
Rossana Andrade (UFC)




Astronomy

Vinicius Freire (Ph.D. student at UFC)
Fabio Porto (LNCC)




Guests

Alexandre de Assis Lima (Computer Science Department, COPPE/UFRJ)





French Participants



Resource Prospection

Hélène Barucq, Inria Bordeaux - Sud-Ouest, MAGIQUE-3D project-team
Marie Bonnasse, Inria Bordeaux - Sud-Ouest, MAGIQUE-3D project-team
Lionel Boillot, Inria Bordeaux - Sud-Ouest, MAGIQUE-3D project-team
Théophile Chaumont, Inria Bordeaux - Sud-Ouest, MAGIQUE-3D project-team
Julien Diaz, Inria Bordeaux - Sud-Ouest, MAGIQUE-3D project-team
Marie-Hélène Lallemand, Inria Sophia Antipolis - Méditerranée, NACHOS project-team
Stéphane Lanteri, Inria Sophia Antipolis - Méditerranée, NACHOS project-team
Claire Scheid, Inria Sophia Antipolis - Méditerranée, NACHOS project-team




Reservoir Simulation

Pierre Ramet, Inria Bordeaux - Sud-Ouest, HIEPACS project-team
Luc Giraud, Inria Bordeaux - Sud-Ouest, HIEPACS project-team




Cardiovascular System

Olivier Coulaud, Inria Bordeaux - Sud-Ouest, HIEPACS project-team
François Pellegrini, Inria Bordeaux - Sud-Ouest, BACCHUS project-team




Astronomy

Reza Akbarinia, Inria Sophia Antipolis - Méditerranée, ZENITH project-team
Miguel Liroz Gistau, Inria Sophia Antipolis - Méditerranée, ZENITH project-team
Patrick Valduriez, Inria Sophia Antipolis - Méditerranée, ZENITH project-team




Representatives from Inria's Management Teams

Hélène Kirchner, Director of the International Relations department
Pierre-Alexandre Bliman, International Relations
Jean Roman, Director of the Inria Bordeaux - Sud-Ouest research centre




Guests

Jean-Marc Denis, Bull
Jean-François Lavignon, Bull
Jao Santos, Bull
Felipe Velloso, Bull
Xavier Vigouroux, Bull
Henri Calandra, Total
Jean-François Méhaut, NANOSIM team, Joseph-Fourier University, CEA and Laboratoire d'Informatique de Grenoble
Luiz Edson Padoin, UFRGS
Bruno Raffin, LIG (Grenoble Informatics Laboratory), Inria Grenoble - Rhône-Alpes, MOAIS project-team
Jean-Marc Vincent, LIG (Grenoble Informatics Laboratory), Inria Grenoble - Rhône-Alpes, MESCAL project-team




PROGRAM

Thursday 05/09




LIST OF ABSTRACTS

Brazilian Participants



The development of robust and reliable numerical methods, combined with a significant increase in computer performance, has in recent years allowed computational simulation to shift from a preliminary, and very often marginal, stage towards a central role in the design and analysis of complex systems. That is particularly true within the realm of the oil and gas industry, where reservoir modeling or understanding the physics of deposition leading to sedimentary basins through computer formulations [1] constitute representative examples. On the other hand, understanding the impact of the unavoidable presence of uncertainties in the modeling on the output of computer simulations has given rise to the consolidation of a new area, often referred to as Uncertainty Quantification (UQ). This area encompasses different disciplines, ranging from applied mathematics analysis methods to high-performance numerical algorithms.
Computational simulation is usually based on solvers for differential equations that simulate certain properties over a space and time domain. When such simulators are applied to uncertainty analysis, as well as to inverse problems, a significant number of sample points is needed to reach a certain reliability level. A viable approach to these problems is to use some results provided by the solver to create a statistical meta-model (a surrogate), with low computational cost, capable of emulating the solver response for a larger set of points, thus reducing the computational requirements. Efficient and robust surrogates can be built within a Bayesian framework, as in the method introduced in [2]. In this work, we make use of different UQ methods to gain insight into the impact of uncertainty on physical parameters involved in seismic characterization, considering both backward and forward modeling. This is done through the analysis of synthetic seismograms obtained through computer simulation using the finite element method. We are interested in the estimation of output uncertainties, both due to uncertainty in input data and due to a limited number of observations of computer simulations.
We present preliminary numerical results using a scenario designed to capture, if not all, at least important features of wave propagation in heterogeneous uncertain media. The output statistics are computed using the classical Monte Carlo method, the sparse stochastic collocation technique, and a Bayesian approach. The results show that the Bayesian approach [2] obtains output statistics with a much smaller number of deterministic solver runs. Finally, we show how UQ for seismics can profit from scientific workflow management techniques currently under research within the HOSCAR project [3,4].
[1] I. Bilionis, N. Zabaras, A. Konomi and G. Lin, Multi-output separable Gaussian process: towards an efficient, fully Bayesian paradigm for uncertainty quantification, J. Comput. Phys., Vol. 241, Pages 212-239, 2013.
[2] G.M. Guerra, S. Zio, J.J. Camata, F.A. Rochinha, R.N. Elias, P.L.B. Paraizo and A.L.G. Coutinho, Numerical simulation of particle-laden flows by the residual-based variational multiscale method, Intern. J. Numer. Meth. Fluids, DOI: 10.1002/fld.3820.
[3] Chiron: a data-centric and algebraic scientific workflow engine, June 24, 2013.
[4] E.S. Ogasawara, D. de Oliveira, P. Valduriez, J. Dias, F. Porto, M. Mattoso, An algebraic approach for data-centric scientific workflows, VLDB 4(12): 1328-1339, 2011.
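
As an illustration of the sampling-based side of the UQ methodology referred to above, the following minimal Python sketch (not the authors' code; the forward model, the parameter distribution and the sample size are placeholder assumptions) estimates the mean and variance of a scalar quantity of interest by plain Monte Carlo, which is the baseline against which surrogate-based approaches reduce the number of solver runs.

import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    # Placeholder for an expensive deterministic solver run (e.g. a feature
    # of a synthetic seismogram) driven by the uncertain parameters theta.
    return np.sin(theta[0]) + 0.1 * theta[1] ** 2

# Assumed uncertain inputs: two parameters with a simple Gaussian prior.
n_samples = 1000
thetas = rng.normal(loc=0.0, scale=1.0, size=(n_samples, 2))

# Plain Monte Carlo: one solver run per sample.
qoi = np.array([forward_model(t) for t in thetas])
print("MC mean     :", qoi.mean())
print("MC variance :", qoi.var(ddof=1))
# A surrogate (e.g. a Bayesian emulator fitted on a few runs) would replace
# forward_model above in order to cut the number of expensive solver runs.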


In the database integration context, entity resolution is the problem of identifying instances from different databases that represent the same real-world entity. Current astronomy surveys present an important big-data challenge in the entity resolution area, where the spatial position of objects is very important: the cross-matching of catalogs. The latter tries to identify sky objects registered in different catalogs with slightly different properties but representing the same real object. Object properties include position, magnitude and color. The cross-matching among catalogs is usually applied in a peer-to-peer fashion, between two different catalogs, and generates a single output catalog identifying common objects between surveys. The algorithm selects matches by considering the shortest distance between objects within a spatial radius "x" defined by the user. However, when we want to compute a matching among three or more catalogs, a more careful process must be applied, as one shall not consider matching transitively, and the order in which catalogs are chosen may produce different results. In this initial phase of my PhD work, we are formalizing a model for matching disambiguation in the context of n catalogs, such that the cross-matching produces more accurate matchings (i.e., those that closely coincide with reality), independently of the order in which the catalogs are selected.
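
A minimal sketch of the pairwise cross-matching step described above, assuming two toy catalogs given as (ra, dec) arrays and a user-defined radius x (the data values, the radius and the use of a k-d tree are illustrative assumptions, not the disambiguation model proposed in this work):

import numpy as np
from scipy.spatial import cKDTree

# Toy catalogs: rows are (ra, dec) positions in degrees.
catalog_a = np.array([[10.001, -5.002], [20.500, 3.100], [33.300, 7.700]])
catalog_b = np.array([[10.002, -5.001], [20.700, 3.300], [50.000, 1.000]])

x = 0.01  # user-defined matching radius (degrees), illustrative value

# For each object in A, find its nearest neighbour in B within radius x.
# (A real implementation would use angular, great-circle distances.)
tree_b = cKDTree(catalog_b)
dist, idx = tree_b.query(catalog_a, k=1, distance_upper_bound=x)

for i, (d, j) in enumerate(zip(dist, idx)):
    if j < len(catalog_b):            # j == len(catalog_b) means "no match"
        print(f"A[{i}] matches B[{j}] at distance {d:.4f} deg")
    else:
        print(f"A[{i}] has no counterpart within radius {x}")

Chaining such pairwise matches across three or more catalogs is exactly where transitivity and ordering ambiguities appear, which is what the proposed model addresses.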


For a long time, data I/O and storage have been a source of performance contention for HPC applications: the limited bandwidth to transfer data in and out of processing nodes and storage systems counters the increasing computing power provided by bigger clusters with faster processors and accelerators. Scaling applications to a large number of processors in this context is no longer only a matter of good domain decomposition: data organization and access must be an ever-present consideration to enable scalability. Understanding the behavior of applications and file systems (how much data is necessary for an execution, when it is accessed, how it is distributed) is important to guide optimizations of the parallel file system in order to speed up the execution of the application and of I/O requests. To study the application behavior, we add instrumentation to the applications' I/O libraries (such as MPI-IO) in order to extract traces of the I/O operations. These traces are used to characterize the target applications in several aspects concerning their usage of the file system. With this characterization in hand, we propose micro-benchmarks that allow us to stress different aspects of the distributed file system and study optimizations of the I/O system to increase the performance and scalability of parallel applications.
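
The kind of lightweight I/O tracing described above can be sketched in Python as follows; this is a conceptual illustration only (the real instrumentation targets MPI-IO and lower layers, and the wrapped call, file name and trace format here are assumptions):

import time

# Create a small example file so the sketch is self-contained.
with open("input.dat", "wb") as f:
    f.write(b"\0" * (3 * (1 << 20)))

trace = []  # collected I/O events: (timestamp, operation, byte count)

def traced_read(fileobj, size):
    # Wrap a read call and record when it happened and how much data it moved.
    t0 = time.time()
    data = fileobj.read(size)
    trace.append((t0, "read", len(data)))
    return data

with open("input.dat", "rb") as f:
    while traced_read(f, 1 << 20):      # read in 1 MiB requests
        pass

total_bytes = sum(n for _, _, n in trace)
print(f"{len(trace)} read operations, {total_bytes} bytes in total")
# A characterization step would aggregate such traces per process and per file
# to reveal request sizes, spatial locality and temporal access patterns.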


Scientific Workflow Management Systems (SWfMS) [3] allow scientists to specify workflows consisting of activities (i.e., program invocations) and their data dependencies. Scientific workflows can be executed using local computing resources and High Performance Computing (HPC) environments such as computing clusters, grids and clouds. Although several SWfMS provide mechanisms for executing large-scale scientific workflows in distributed environments [4,8,9], most of them perform the workflow execution in an "offline" way, according to Ailamaki et al. [1]. Existing approaches provide results and provenance data [5] that can only be analyzed after processing the entire dataset within the workflow. However, as experiment complexity, the volume of data and the need for computing power are on the rise, scientists need mechanisms for monitoring, analyzing partial results, and taking action during workflow execution.
CFD analyses, for example, take several factors into consideration: geometry, viscosity, mesh partitioning, time step size, wall time and the frequency at which results are stored, just to name a few. Depending on the initial setup, the simulation may produce huge amounts of data. Based on the produced outcome, scientists may see that they need to explore the simulation differently. They may need to refine the mesh, change the time step size, or store more or fewer results during specific simulation time intervals. Nowadays, scientists simply run the simulation again from the beginning. However, if they have significant feedback on what is currently happening, they can take action during the execution and profit from better resource utilization. Besides, steering the execution of a workflow may help scientists achieve the desired outcome faster.
In this work, we discuss algorithms and techniques that may give scientists the possibility to steer their experiments taking advantage of querying provenance data at real-time. When scientists run their workflows, provenance records keep track of everything that has already happened, what is currently happening and what still needs to be executed in the workflow. Thus we present our ongoing approaches to handle what we believe are the three main issues related to steering in scientific workflows: (i) monitoring of execution, (ii) data analysis at runtime, and (iii) dynamic interference in the execution.
For monitoring and notification, SciLightning [2] notifies scientists about important events through mobile devices and social networks (e.g., Facebook, SMS, and Twitter) and opens a communication channel between the mobile device and the remote (e.g., cloud) execution. For data analysis, Prov-Viz [6] allows querying and traversing the provenance database, staging out selected data and visualizing them on a local machine or on tiled wall displays. We also show our ongoing approach to interfere in the execution using a provenance API for steering Chiron [7], our algebraic workflow engine. A minimal sketch of this kind of runtime provenance query is given after the references below.
[1] A. Ailamaki, Managing scientific data: lessons, challenges, and opportunities, Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, 1045-1046, 2011.
[2] F. Costa, V. Silva, D. Oliveira, K. Ocaña, J. Dias, E. Ogasawara, M. Mattoso, Capturing and querying workflow runtime provenance with PROV: a practical approach, Proceedings of the International Workshop on Managing and Querying Provenance Data at Scale, 2013.
[3] E. Deelman, D. Gannon, M. Shields, I. Taylor, Workflows and e-Science: an overview of workflow system features and capabilities, Future Generation Computer Systems, 25(5):528-540, 2009.
[4] E. Deelman, G. Mehta, G. Singh, M.-H. Su, K. Vahi, Pegasus: mapping large-scale workflows to distributed resources, Workflows for e-Science, Springer, 376-394, 2007.
[5] J. Freire, D. Koop, E. Santos, C.T. Silva, Provenance for computational tasks: a survey, Computing in Science and Engineering, 10(3):11-21, 2008.
[6] F. Horta, J. Dias, K. Ocaña, D. Oliveira, E. Ogasawara, M. Mattoso, Poster: using provenance to visualize data from large-scale experiments, Poster Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, 2012.
[7] E. Ogasawara, J. Dias, V. Silva, F. Chirigati, D. Oliveira, F. Porto, P. Valduriez, M. Mattoso, Chiron: a parallel engine for algebraic scientific workflows, Concurrency and Computation: Practice and Experience, 2013.
[8] D. Oliveira, K. Ocaña, F. Baião, M. Mattoso, A provenance-based adaptive scheduling heuristic for parallel scientific workflows in clouds, Journal of Grid Computing, 10(3):521-552, 2012.
[9] M. Wilde, M. Hategan, J.M. Wozniak, B. Clifford, D.S. Katz, I. Foster, Swift: a language for distributed parallel scripting, Parallel Computing, 37(9):633-652, 2011.
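
The sketch below illustrates, in Python with an in-memory SQLite store, the kind of at-runtime provenance query referred to in the text; the table layout and column names are hypothetical and do not correspond to Chiron's actual schema.

import sqlite3

# Hypothetical provenance store populated by the workflow engine as it runs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE task (activity TEXT, status TEXT, elapsed_s REAL)")
db.executemany(
    "INSERT INTO task VALUES (?, ?, ?)",
    [("mesh_partition", "FINISHED", 12.0),
     ("solver", "RUNNING", 340.0),
     ("solver", "FINISHED", 290.0),
     ("visualization", "READY", 0.0)],
)

# Monitoring query a scientist could issue while the workflow is executing:
# how many tasks of each activity are in each state, and their mean runtime.
for row in db.execute(
    "SELECT activity, status, COUNT(*), AVG(elapsed_s) "
    "FROM task GROUP BY activity, status"
):
    print(row)
# Based on such partial results, the scientist could decide to steer the run,
# e.g. refine the mesh or change the output frequency, instead of restarting.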


In this work, we propose a novel space-adaptivity process for Multiscale Hybrid-Mixed methods [1], based on the a posteriori error estimator presented in [2]. This new approach avoids modifications of the original mesh and allows local adaptation in order to capture the effect of subscales (boundary layers and/or multiscale physical features) through independent local problems. Following a theoretical motivation, we present several numerical experiments showing the properties of the a posteriori error estimator and the behavior of the new adaptivity process when used to approximate solutions to problems in porous media.
[1] C. Harder and D. Paredes and F. Valentin, A family of multiscale hybrid-mixed finite element methods for the Darcy equation with rough coefficients, J. Comput. Phys., Vol. 245, pp. 107-130, 2013.
[2] R. Araya and C. Harder and D. Paredes and F. Valentin, Multiscale hybrid-mixed methods, submitted to SINUM.
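
For readers unfamiliar with estimator-driven adaptivity, the following Python sketch shows a generic solve-estimate-mark-adapt loop with Dörfler (bulk) marking. It is only a schematic illustration with placeholder functions, not the MHM-specific procedure proposed here (which, in particular, adapts local spaces rather than the original mesh).

import numpy as np

def solve(mesh):            # placeholder: returns a discrete solution
    return None

def estimate(mesh, u):      # placeholder: one error indicator per element
    return np.random.default_rng(0).random(len(mesh))

def adapt(mesh, marked):    # placeholder: locally enrich the marked elements
    return mesh + [f"refined({e})" for e in marked]

def dorfler_mark(eta, theta=0.5):
    # Mark the smallest set of elements whose squared indicators account
    # for a fraction theta of the total estimated error.
    order = np.argsort(eta)[::-1]
    cumulative = np.cumsum(eta[order] ** 2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

mesh = [f"K{i}" for i in range(16)]          # toy "mesh" of 16 elements
for step in range(3):
    u = solve(mesh)
    eta = estimate(mesh, u)
    marked = dorfler_mark(eta)
    mesh = adapt(mesh, [mesh[i] for i in marked])
    print(f"step {step}: marked {len(marked)} of {len(eta)} elements")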


Several studies have been carried out to assess the possibilities and limitations of clouds in providing support to scientific applications. Most of these studies are dedicated to the behavior of scientific applications, which are typically characterized by large amounts of information processing and massive use of computational resources. In this context, clouds emerge as a way of providing additional resources or of minimizing the cost of acquiring new ones.
The use of clouds in support of scientific applications has characteristics that differ from commercial ones. Virtualization technologies are the basic elements of cloud infrastructures, and despite their significant advances they still present limitations when confronted with the high computational power and communication demands of several scientific applications. However, to make better use of these virtualized resources, a deeper understanding of the characteristics of the cloud architecture(s) being used is necessary. The studies being carried out by our group of Distributed Scientific Computing at the National Laboratory for Scientific Computing (ComCiDis/LNCC), corroborated by other research groups, suggest that the different virtualization layers and hardware architectures used in the cloud infrastructure influence the performance of scientific applications.
This influence leads to the concept of affinity, i.e., which group of scientific applications performs better with the virtualization layer and hardware architecture being used. These aspects involve:

to reduce cloud environment limitations in supporting scientific applications;

to provide the basis for the development of new cloud scheduling algorithms;

to assist the acquisition of new resources and cloud providers, looking for performance and resource usage optimization.

Currently, the ComCiDis group is developing a set of research projects aiming to understand the relationship among scientific applications, virtualization layers and infrastructure, based on its private development cloud platform named Neblina. The Neblina platform makes it possible to prospect new technologies and solutions for optimizing the use of cloud environments for scientific applications.


This work proposes a new family of Multiscale Hybrid-Mixed (MHM) finite element methods for advective-reactive dominated problems on coarse meshes. The MHM method is a consequence of a hybridization procedure and naturally incorporates multiple scales while providing solutions with high-order precision for the primal and dual (or flux) variables. The local problems are embedded in the upscaling procedure and are completely independent, meaning they can naturally be computed using parallel facilities. Also, the MHM method preserves local conservation properties through a simple post-processing of the primal variable. The analysis results in a priori estimates showing optimal convergence in natural norms and provides a face-based a posteriori estimator. Numerical results verify the theoretical results as well as the capacity of the method to accurately incorporate heterogeneous and high-contrast coefficients and to approximate boundary layers. We conclude that the MHM method is naturally shaped to be used in parallel computing environments and appears to be a highly competitive option to handle realistic multiscale singularly perturbed boundary value problems with precision on coarse meshes.


One of the most important tools used in petroleum and gas exploration is seismic modeling. Seismic reflection is a geophysical technique that uses the propagation of compressional waves - similar to an earthquake - to obtain a geological structural profile. From the reflectors present in the seismic profile it is possible to infer the nature and the structural characteristics of the sedimentary layers. In other words, a mapping of the Earth's subsurface is performed.
This work aims to present an overview of 3D seismic modeling, from the mathematical and physical theory to the computational practice. Key aspects of seismic surveys and their importance to the oil industry in the search for petroleum reservoirs are presented. Seismic modeling is performed using a complex model composed of several faults, called Overthrust, which was developed by the consortium between the Society of Exploration Geophysicists (SEG) and the European Association of Geoscientists & Engineers (EAGE) with the purpose of representing a realistic geology. This model comes from a complex faulted geological region described as deformations resulting from stresses generated by forces acting on the rocks. It depicts a thrusted stratigraphy overlying an earlier extensional and rift sequence [1].
There is continuous interest in increased computational performance in seismic modeling and migration based on discretization of the two-way wave equation. We present preliminary numerical results and the computational demands using a scenario designed to capture, if not all, at least important features of wave propagation in heterogeneous media. The outputs were computed using the classical finite difference method with a 4th-order approximation of the spatial derivatives and a 2nd-order approximation in time.
[1] House, L., Fehler, M., Barhen, J., Aminzadeh, F. and Larsen, S. A national laboratory-industry collaboration to use SEG/EAGE model data sets. The Leading Edge, 15, 135-136, 1996.
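
For reference, a standard explicit finite-difference update of this kind (2nd order in time, 4th order in space) for the 1D constant-density acoustic wave equation $\partial_t^2 u = c^2 \partial_x^2 u$ reads, with the symbols below generic and not tied to the specific implementation used in this work,

\[
u_i^{n+1} = 2u_i^{n} - u_i^{n-1}
+ \frac{c^2 \Delta t^2}{12\,\Delta x^2}
\left( -u_{i-2}^{n} + 16\,u_{i-1}^{n} - 30\,u_i^{n} + 16\,u_{i+1}^{n} - u_{i+2}^{n} \right),
\]

where $u_i^n \approx u(i\Delta x, n\Delta t)$, and the scheme is stable only under a CFL-type restriction on $c\,\Delta t/\Delta x$.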


There is a tendency for a growing number of cores per CPU in modern parallel architectures [1,2]. EdgeCFD is a finite element software that was developed to take advantage of hybrid, distributed- and threaded-memory computers [3]; however, its algorithms still exploit features of non-hybrid systems, where MPI or OpenMP could be used alone. In order to run efficiently on emerging massively multicore architectures, algorithms must change to reduce intra-node communication while increasing the rate of data sharing through memory buses. Moreover, visualization and data storage are also becoming a big bottleneck for future and more complex simulations [4]. This talk exposes such concerns while giving simple alternatives to mitigate these problems in the context of EdgeCFD's choices.
[1] Kannan, R., Harrand, V., Lee, M., and A. J. Przekwas, Highly Scalable Computational Algorithms on Emerging Parallel Machine Multicore Architectures: Development and Implementation in CFD Context, Int. J. for Num. Methods in Fluids, 2013.
[2] Sahni, O., Zhou, M., Shephard, M., Jansen, K., Scalable implicit finite element solver for massively parallel processing with demonstration to 160K cores, Proceedings of the ACM/IEEE Conference on High Performance Computing, SC 2009, November 14-20, Portland, Oregon, USA, 2009.
[3] Elias, R. N., Camata, J. J., Aveleda, A. A. and Coutinho A. L. G. A., Evaluation of Message Passing Communication Patterns in Finite Element Solution of Coupled Problems, LNCS6449 (1)306-313, High Performance Computing for Computational Science - VECPAR, 2010.
[4] Elias, R., Braganholo, V., Clarke, J., Mattoso, M., Coutinho, A., Using XML with Large Parallel Datasets: Is There Any Hope?, Proceedings of the 21st International Conference on Parallel Computational Fluid Dynamics, Par-CFD 2009, Moffett Field, CA, USA, 2009.
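
As a minimal illustration of the "share within the node, communicate between nodes" idea discussed above, the sketch below uses mpi4py's MPI-3 shared-memory windows so that ranks on the same node read a common array instead of exchanging copies. It is a generic pattern, not EdgeCFD code, and the array contents are placeholders.

from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
# Split the ranks so that each sub-communicator groups the ranks of one node.
node = world.Split_type(MPI.COMM_TYPE_SHARED)

n = 1_000_000
itemsize = MPI.DOUBLE.Get_size()
# Only rank 0 of each node allocates the shared buffer; the others attach to it.
size = n * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node)
buf, _ = win.Shared_query(0)
shared = np.ndarray(buffer=buf, dtype="d", shape=(n,))

if node.rank == 0:
    shared[:] = 1.0          # placeholder nodal data, written once per node
node.Barrier()

# Every rank on the node now reads the same memory: no intra-node messages.
local_sum = shared[node.rank::node.size].sum()
total = world.allreduce(local_sum, op=MPI.SUM)
if world.rank == 0:
    print("global sum:", total)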


In this talk I will talk about hybrid methods and concentrate in particular on the powerful ideas developed by Araya, Harder, Paredes and Valentin. I will describe some theory and applications under development.


The execution of scientific workflows over huge catalog data from astronomy databases combines the parallelization of the workflow with access to partitioned data in databases. The widely used MapReduce model, which supports scalable and robust parallel execution of tasks, needs to be revised so that a balance between the degree of scientific workflow parallelization and the extent of data distribution is met. In particular, a high degree of parallelism incurs a high number of large intermediate files, whose I/O may jeopardize the parallel strategy. In this work we evaluate this scenario and propose a model to drive the design of an efficient execution.


Problems involving incompressible viscous flow coupled with advective-diffusive transport model a wide range of phenomena of great interest in science and engineering. With the introduction of stabilization techniques, finite elements have become an important tool in Computational Fluid Dynamics (CFD). Since 2007, the High Performance Computing Center (NACAD, in Portuguese) has been developing the EdgeCFD software, based on stabilized finite element formulations (SUPG/PSPG, VMS) for the solution of incompressible viscous flow problems, coupled or not to the advective-diffusive transport of a scalar. Several techniques, algorithms and methods have been explored and incorporated in the search for efficient solutions on the most diverse high-performance processing architectures. The search for ever more realistic and detailed solutions requires three-dimensional, transient, coupled solutions and fine grids, imposing a considerable increase in computational costs and numerical difficulties. Thus, the purpose of this research is to accelerate EdgeCFD by using a multi-level preconditioner able to accelerate the iterative driver present in EdgeCFD. Moreover, the preconditioner must enable the efficient use of the edge-based data structure, minimizing the computational effort and providing the EdgeCFD global process with more speed and accuracy.


In the modeling and simulation life cycle in large-scale science and engineering, hypotheses are first expressed as mathematical equations and then transformed into a computational model that is eventually run to produce predictive data about the studied phenomenon. The hypothesis database is an innovative database technology under development at LNCC/DEXL for managing large-scale simulations, from the hypotheses to the predictive data associated with them. The goal is to equip scientists with a query language to keep track of their large-scale research. We illustrate the current state of our work on the problem of modeling and simulation of the human cardiovascular system carried out at LNCC/HemoLab.


We will present the GPPD group and its research activities on thread placement, I/O optimization, power consumption and accelerator architectures. We will discuss some research done in the group about load imbalance as a major obstacle to obtaining maximum efficiency. In a synchronous parallel application, the total execution time is dictated by the most heavily loaded processor. As an example, there are different kinds of load imbalance in a climate model: static (e.g., topography), dynamic predictable (e.g., short-wave radiation) and dynamic unpredictable (e.g., thunderstorms).
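
The standard way to quantify this effect, written here in generic notation rather than GPPD's own metrics, is that for $p$ processors with per-step workloads $T_1,\dots,T_p$ the synchronous step time and the resulting parallel efficiency are

\[
T_{\text{step}} = \max_{1 \le i \le p} T_i,
\qquad
E = \frac{\sum_{i=1}^{p} T_i}{\,p \cdot \max_{1 \le i \le p} T_i\,},
\]

so a single overloaded processor (for instance one whose subdomain contains a thunderstorm) lowers the efficiency of the whole step.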


During the last 2 years we have been developing a new Web-based computational environment called SPiNMe (Software Productivity in Numerical Methods) for the specification and implementation of numerical methods, focusing initially on Finite Element Methods. The main features of SPiNMe may be outlined as follows:
- Flexibility without loss of comparability. The SPiNMe environment allows the various features underlying numerical method definitions to be explored;
- Productivity. The SPiNMe environment provides a way for researchers to rapidly prototype their methods;
- Long-term compatibility. The SPiNMe environment only requires researchers to use a local web browser.
To attain these features, we stick to three key design decisions for the SPiNMe environment. First, a single set of numerical library implementations is provided to the researcher for the implementation of his/her numerical method. Second, the provided numerical libraries can be parametrized either via input data or via "plug-in" code representing the particularities of a specific numerical method. Finally, plug-in code is implemented in a high-level language: therefore, researchers can focus more on prototyping their methods than on coding typical idioms of lower-level languages. Currently, we have been using Lua as this higher-level language.
As we will present in this talk, we have so far succeeded in employing a specific FEM C++ library called NeoPZ to provide SPiNMe with support for a range of FEM method families. Nevertheless, we have recently come across many difficulties in implementing within NeoPZ the specific family of Multiscale Hybrid-Mixed (MHM) methods. Such difficulties have led to the employment of a series of workarounds which might ultimately cause the erosion of NeoPZ's current architecture. Adding to such difficulties the fact that we would like to stress the ability of the SPiNMe platform to use different numerical libraries, we have decided on the implementation of a new FEM library specifically crafted for exploring the loosely-coupled strategy of MHM methods to solve global and local problems. Sticking to the idea of high productivity, we have adopted Erlang as the base implementation language for the communicating processes of the new FEM library. In this talk we present the advantages of adopting Erlang for such processes and how they can be integrated with numerical computing processes (e.g. implemented with NeoPZ) that ultimately solve the global and local problems.


French Participants



Seismic applications require solving wave equations in heterogeneous media. We therefore choose to focus on the resolution of the Helmholtz equation in isotropic heterogeneous media using discontinuous Galerkin (DG) methods. To do so, we selected three DG methods in order to compare their results: the DG method with centered flux, the DG method with upwind flux, and the hybridizable DG method. The principal issue is to obtain the best solution while reducing time and memory costs as much as possible. The first step of our work was to complete a program for solving the Helmholtz equation using DG methods with centered and upwind fluxes. To test the program and compare the first two methods, I used two test cases: a plane wave and the diffraction by a circle. The second step, currently in progress, is to develop the hybridizable DG formulation for the Helmholtz equation in the time-harmonic domain.
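
As a reminder of the underlying model, the time-harmonic acoustic problem in a heterogeneous medium can be written, in a standard generic form that the work above discretizes with DG methods, as

\[
-\,\Delta u(\mathbf{x}) \;-\; \frac{\omega^2}{c(\mathbf{x})^2}\, u(\mathbf{x}) \;=\; f(\mathbf{x}),
\]

where $\omega$ is the angular frequency, $c(\mathbf{x})$ the heterogeneous wave speed and $f$ the source term; suitable boundary or radiation conditions close the problem.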


The Reverse Time Migration technique produces underground images using wave propagation. A discretization based on the discontinuous Galerkin method unleashes a massively parallel elastodynamics simulation, an interesting feature for current and future architectures. However, effectively exploiting recent supercomputers is a delicate issue. In this work, we propose to combine two recent HPC techniques to achieve a high level of efficiency: the use of runtimes (StarPU and PaRSEC) to easily exploit the hardware capabilities, and the integration of accelerators (Intel Xeon Phi). Preliminary results are presented.


Seismic imaging requires the resolution of the Helmholtz equation in highly heterogeneous media. When dealing with complex velocity models, meshing the domain to fit the heterogeneities leads to prohibitive computational costs. Therefore, we develop discretisation methods able to take fine-scale heterogeneities into account on a coarse mesh, i.e. multiscale methods. So far, we have studied the Residual-Free Bubble method (RFBM) and heterogeneity-adapted quadrature formulas for the acoustic heterogeneous Helmholtz equation. An abstract analysis of the adapted quadrature method, as well as a comparison of the quadrature and RFB methods based on numerical experiments, will be presented.


Discontinuous Galerkin methods are becoming more and more popular for the numerical simulation of wave propagation. They can easily be coupled with explicit time schemes, since the resulting mass matrix is block-diagonal, and they allow for the use of high-order polynomials and hp-adaptivity. However, to really take advantage of the high-order space discretization, DG schemes should be coupled with high-order time discretizations. We propose a new high-order time scheme, the so-called Nabla-p scheme. This scheme does not increase storage costs, since it is a single-step method and does not require the storage of auxiliary unknowns. Numerical results show that it requires less storage than the ADER scheme for a given accuracy and that the computational costs are similar.


Everyone is focusing on power consumption for HPC data centers. This is not the right question. The right question is: what is the right cost model for an (HPC) data center?
The evolution of the hardware has been based on the same paradigm for decades: hardware cost decreases. For instance, very expensive vector processors have been dethroned by commodity processors, which will in turn be dethroned by low-cost processors coming from the embedded world. Furthermore, the Open Hardware licensing model is gaining momentum and benefits from all the improvements contributed by the community (MTBF, power consumption).
At the same time, new data centers are more and more power-hungry. The goal is to limit energy consumption as much as possible. The motivation is not related to the impact on climate change or to the CO2 footprint, but to cost reduction, as the cost per kWh is increasing at a very high rate.
As a direct consequence, billing models for data-center users and customers based on cost per CPU hour are becoming obsolete and will be replaced with a per-kWh model. This change has two impacts: data-center operators pay more and more attention to the reduction of the infrastructure power consumption overhead (PUE), while users and customers will pay more attention to the reduction of the power consumption of their applications. This new paradigm requires new tools and new methodologies, and creates new questions. For instance, is a quicksort algorithm more efficient from a power consumption standpoint than a merge sort? If not, how can the algorithm be optimized to reduce its electrical footprint?
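
A small worked example of the per-kWh view (the power draw, runtime and price below are illustrative figures, not Bull data): the energy billed for a job is

\[
E = P \times t, \qquad \text{cost} = E \times p_{\mathrm{kWh}},
\]

so a run drawing $P = 50\,\mathrm{kW}$ for $t = 4\,\mathrm{h}$ at $p_{\mathrm{kWh}} = 0.10$ per kWh is billed $50 \times 4 \times 0.10 = 20$ currency units, whereas under CPU-hour billing the price of the same job would be independent of how much power the code actually draws.
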
This presentation describes the main challenges to be solved and how Bull is addressing them.


High-order numerical methods allow accurate simulations of ground motion using unstructured and relatively coarse meshes. For seismic wave propagation computations over large domains, or when realistic geological media such as sedimentary basins are considered, the basic assumption of linear elasticity is no longer valid, since it results in a severe overestimation of the amplitude or duration of the ground motion, especially in basins where incident waves are trapped.
In this study, we solve the first-order velocity-stress system and suppose that the medium is linear isotropic and viscoelastic, thus considering intrinsic attenuation. The associated stress-strain relation in the time domain is a convolution, which is numerically intractable. Therefore, we use the method proposed by Day and Minster [1] to turn the convolution into differential equations and we consider the rheology of a generalized Maxwell body (GMB) with n relaxation frequencies [2]. This results in a velocity-stress system which contains additional equations (6n in 3D and 3n in 2D) for the anelastic functions including the strain history of the material.
Among all high-order numerical methods dedicated to the solution of hyperbolic systems, the discontinuous Galerkin (DG) finite element method has been extensively studied during the last decades in many domains of application. This method can be viewed as a clever combination of the finite element (FE) and finite volume (FV) methods. As in FE methods, spaces of basis and test functions are defined, but locally on each element of the mesh, allowing, as in FV methods, discontinuities at the interfaces, which result in numerical fluxes. Ideally, the DG method shares almost all the advantages of the FE and FV methods: adaptivity to complex geometries, easily obtained high-order accuracy, hp-adaptivity, and natural parallelisation. We present a high-order DG method for the viscoelastic system which constitutes an extension of the method proposed in [3] for linearly elastic media. Our method is based on a centered numerical flux and a leap-frog time discretization, leading to a non-dissipative combination. The method is suitable for complex triangular unstructured meshes. The extension to high order in space is realized by Lagrange polynomial functions, defined locally on each element, and does not necessitate the inversion of a global mass matrix since an explicit scheme in time is used. In order to demonstrate the accuracy of the scheme for viscoelastic media, we perform several 2D numerical tests. Moreover, in preparation for an extension to the 3D case, the CPU cost increase is also reviewed and, in particular, the dependence of the solution accuracy on parameters such as the number of relaxation frequencies or the degree of the polynomial interpolation.
[1] S. M. Day and J. B. Minster. Numerical simulation of attenuated wavefields using a Padé approximant method. Geophys. J. R. astr. Soc., 78, 105-118, (1984).
[2] H. Emmerich and M. Korn. Incorporation of attenuation into time-domain computations of seismic wave fields. Geophysics, 52, 1252-1264, (1987).
[3] S. Delcourte and L. Fezoui and N. Glinsky-Olivier. A high-order discontinuous Galerkin method for the seismic wave propagation. ESAIM: Proceedings, 168, 70-89, (2009).
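
To fix ideas, one common memory-variable form of the GMB stress-strain law (written here in a generic 1D scalar notation whose conventions and scaling may differ from the formulation actually used in this work) replaces the convolution by

\[
\sigma(t) = M_U\,\varepsilon(t) - \sum_{\ell=1}^{n} \zeta_\ell(t),
\qquad
\dot{\zeta}_\ell(t) + \omega_\ell\, \zeta_\ell(t) = \omega_\ell\, Y_\ell\, M_U\, \varepsilon(t),
\quad \ell = 1,\dots,n,
\]

where $M_U$ is the unrelaxed modulus, $\omega_\ell$ the relaxation frequencies, $Y_\ell$ the anelastic coefficients and $\zeta_\ell$ the anelastic (memory) functions; these extra ODEs are what produce the $3n$ (2D) or $6n$ (3D) additional equations mentioned above.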


We present performance evaluation and analysis of well-known HPC applications and benchmarks running on low-power embedded platforms. The performance to power consumption ratios are compared to classical x86 systems. Scalability studies have been conducted on the Mont-Blanc Tibidabo ARM-based cluster. We have also investigated optimization opportunities and pitfalls induced by the use of these new platforms, and proposed optimization strategies based on auto-tuning.


This talk presents the parallel remeshing capabilities of PaMPA, a middleware library dedicated to the management of unstructured meshes distributed across the processors of a parallel machine. PaMPA performs parallel remeshing by selecting independent subsets of elements that need remeshing and running a user-provided sequential remesher (e.g. MMG3D) on these subsets. This process is repeated on yet un-remeshed areas until all of the mesh is remeshed. The new mesh is then repartitioned to restore load balance. We present experimental results where we generate high quality, anisotropic tetrahedral meshes comprising several hundred million elements from initial meshes of several million elements.
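
The overall iteration described above can be summarized with the following Python sketch; the helper functions are hypothetical placeholders standing for PaMPA's selection/repartitioning steps and the user-provided sequential remesher (e.g. MMG3D), not its actual API.

def select_independent_subsets(mesh, needs_remeshing):
    """Hypothetical: pick non-overlapping groups of flagged elements,
    one group per processor, so they can be remeshed concurrently."""
    flagged = [e for e in mesh if needs_remeshing(e)]
    return [flagged[i::4] for i in range(4)] if flagged else []

def sequential_remesher(subset):
    """Hypothetical stand-in for a user-provided remesher such as MMG3D."""
    return [f"remeshed({e})" for e in subset]

def repartition(mesh):
    """Hypothetical: restore load balance across processors."""
    return sorted(mesh)

def parallel_remesh(mesh, needs_remeshing):
    # Repeat until no element is flagged for remeshing anymore.
    while True:
        subsets = select_independent_subsets(mesh, needs_remeshing)
        if not subsets:
            break
        remeshed = [e for s in subsets for e in sequential_remesher(s)]
        untouched = [e for e in mesh if not needs_remeshing(e)]
        mesh = untouched + remeshed
    return repartition(mesh)

coarse = [f"tet{i}" for i in range(10)]
print(parallel_remesh(coarse, lambda e: e.startswith("tet"))[:3])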


The growing number of cores, as well as the increasingly complex memory hierarchy of modern compute nodes, requires adapted algorithms and runtime environments to efficiently benefit from their large compute power with reasonable programming effort. We will present the KAAPI runtime, which supports efficient, memory-hierarchy-aware scheduling of tasks (defined statically or extracted on demand) on NUMA architectures as well as on multi-GPU systems. We will also discuss how KAAPI could leverage OpenMP by providing an efficient runtime for legacy code as well as for the next generation of OpenMP code that will benefit from the new features expected with version 4. We will rely on several examples to show the benefits of our approach, including our work with CEA to boost the performance of the transient dynamics simulation code Europlexus.
The multi-core trend also leads to compute capabilities that are growing more rapidly than storage and I/O performance. To cope with this issue, the huge amount of data produced by large HPC simulations needs to be filtered and processed as close as possible to where it is generated. One approach, often called in-situ analysis, consists in tightly coupling analysis code with simulation code. The goal is to lessen the pressure on the I/O system and to take advantage of the reserved compute nodes for performing part of the post-processing efficiently. This promising approach requires dedicated software infrastructures that ensure a minimal impact on the simulation. We will present the in-situ FlowVR library and experiments with the molecular dynamics simulation code Gromacs.
These works result from the Inria team MOAIS, located in Grenoble. MOAIS focuses on parallel programming and HPC and has a long history of collaboration with Brazilian researchers, in particular at UFRGS and USP, on various HPC-related topics.


The Licia, Laboratoire de Calcul Intensif et d'Informatique Ambiante, is an international laboratory created by CNRS, Inria, and the universities of Grenoble. The main objective of the Licia is to promote new scientific collaborations in computer science between the French LIG laboratory (Laboratoire d'Informatique de Grenoble) and the computer science department of the Federal University of Rio Grande do Sul (UFRGS). Because of the historical relationship between Porto Alegre and Grenoble, the Licia also supports international collaborations between France and Brazil (GDRI Web Science, Capes/Cofecub projects, etc.). The main collaboration domain of the Licia is HPC. It concerns large-scale infrastructures (a cluster in Porto Alegre is a Grid5000 node), multi-core architectures, runtimes and schedulers, and software environments for the development of parallel applications (performance analysis, visualization, etc.).


SPONSORS