SLinCA@Home

From Infogalactic: the planetary knowledge core


SLinCA@Home
Developer(s) IMP NASU
Initial release September 14, 2010 (2010-09-14)
Development status Alpha
Operating system Linux, Windows
Platform BOINC, SZTAKI Desktop Grid, XtremWeb-HEP, OurGrid
Type Grid computing, Volunteer computing
Website dg.imp.kiev.ua

SLinCA@Home (Scaling Laws in Cluster Aggregation) is a research project that uses Internet-connected computers to do research in fields such as physics and materials science.

Introduction

SLinCA@Home is based at the G. V. Kurdyumov Institute for Metal Physics (IMP) of the National Academy of Sciences of Ukraine (NASU) in Kiev, Ukraine's capital city. It runs on the Berkeley Open Infrastructure for Network Computing (BOINC) software platform, the SZTAKI Desktop Grid platform, and the Distributed Computing API (DC-API) by SZTAKI. SLinCA@Home hosts several scientific applications dedicated to research into scale-invariant dependencies in experimental data and computer simulation results.

History

The SLinCA@Home project was launched in January 2009 as part of the EGEE project in the European Union's Seventh Framework Programme (FP7) for the funding of research and technological development in Europe. During 2009–2010 it used the power of a local IMP Desktop Grid (DG); since December 2010 it has harnessed volunteer-driven distributed computing to tackle the computationally intensive problems of research into scale-invariant dependencies in experimentally obtained and simulated scientific data. It is now operated by a group of scientists from IMP NASU in close cooperation with partners from IDGF and the 'Ukraine' Distributed Computing team. Since June 2010, SLinCA@Home has operated under the framework of the DEGISCO FP7 EU project.

Current status

Currently, SLinCA@Home is considered to be in alpha testing, owing to ongoing upgrades of its server and client components.

According to informal statistics at the BOINCstats site (as of 16 March 2011), over 2,000 volunteers in 39 countries had participated in the project, making it the second most popular BOINC project in Ukraine (after the now-inactive Magnetism@Home project).[1] About 700 active users contribute roughly 0.5–1.5 teraFLOPS[2] of computational power, an amount that would have placed SLinCA@Home among the top 20 supercomputers on the TOP500 list of June 2005.[3]

Currently, one application (SLinCA) is running publicly using IMP Desktop Grid (DG) infrastructure (SLinCA@Home); three others (MultiScaleIVideoP, CPDynSG, and LAMMPS over DCI) are being tested internally at IMP.

Scientific Applications

The SLinCA@Home project was created to perform searches for and research into previously unknown scale-invariant dependencies using data from experiments and simulations.

Scaling Laws in Cluster Aggregation (SLinCA)

SLinCA
Developer(s) IMP NASU
Initial release July 24, 2007 (2007-07-24)
Development status Active
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid, XtremWeb-HEP, OurGrid
Type Grid computing, Volunteer computing

The SLinCA (Scaling Laws in Cluster Aggregation) application was the first one ported to the DG infrastructure by the Physics of Deformation Processes Lab at IMP NASU. Its goal is to find scale-invariant laws in kinetic scenarios of monomer aggregation in clusters of different kinds in multiple scientific domains.

The processes of agent aggregation into clusters are investigated in many branches of science: defect aggregation in materials science, population dynamics in biology, city growth and evolution in sociology, and so on. Experimental data confirm that the evolving structures tend to be hierarchical on many scales. The available theories propose many scenarios of cluster aggregation and the formation of hierarchical structures, and predict various scaling properties; however, testing them against the huge databases of experimental data requires powerful computational resources for hierarchical processing. A typical simulation of one cluster aggregation process with 10⁶ monomers takes approximately 1–7 days on a single modern CPU, depending on the number of Monte Carlo steps (MCS).
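As an illustration of the kind of simulation involved (a minimal sketch, not the project's actual code), the following toy model performs constant-kernel Smoluchowski-type aggregation: each Monte Carlo step merges two randomly chosen clusters, and the resulting cluster-size distribution can then be examined for scaling behaviour:

```python
import random
from collections import Counter

def aggregate(n_monomers, n_steps, seed=0):
    """Toy constant-kernel aggregation: each Monte Carlo step
    irreversibly merges two randomly chosen clusters."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers          # start from single monomers
    for _ in range(n_steps):
        if len(clusters) < 2:
            break                        # everything merged into one cluster
        i, j = rng.sample(range(len(clusters)), 2)
        clusters[i] += clusters[j]       # merge cluster j into cluster i
        clusters.pop(j)
    return Counter(clusters)             # map: cluster size -> count

if __name__ == "__main__":
    hist = aggregate(10_000, 5_000)
    for size, count in sorted(hist.items())[:10]:
        print(size, count)
```

Each step removes exactly one cluster, so a run with N monomers and M < N steps ends with N − M clusters while conserving total mass; real kinetic scenarios use physically motivated merge kernels rather than uniform random choice.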

Deploying SLinCA on a Grid computing infrastructure that utilises hundreds of machines at the same time provides enough computational power to undertake simulations on a much larger scale and in a much shorter timeframe.

The technical characteristics of running the Desktop Grid-enabled version of the SLinCA application based on the IMP Desktop Grid infrastructure (SLinCA@Home) are:

  • One work-unit per CPU core (2.4 GHz) usually requires ~2–4 hours, less than 60 MB of memory, and less than 40 MB of hard drive space.
  • Checkpointing is not available, but is under test.
  • The timing of work-unit progress is nonlinear.

SLinCA: Scientific Results

Early scientific results of the SLinCA application, obtained on the EGEE test computing resources at CETA-CIEMAT and on the XtremWeb-HEP test infrastructure at the Laboratoire de l'Accélérateur Linéaire, were reported on 29–30 March 2009 during the poster session of the 4th EGEE training event and 3rd AlmereGrid Workshop in Almere, Netherlands.[4]

SLinCA: Plans

Current plans for the SLinCA application include stable checkpointing, some new functionality, and support for NVIDIA GPU computing; GPU support is predicted to make SLinCA 50% to 200% faster.

Multiscale Image and Video Processing (MultiScaleIVideoP)

MultiScaleIVideoP
Developer(s) IMP NASU (wrapper for DCI), Mathworks (MATLAB libraries)
Initial release January 11, 2008 (2008-01-11)
Development status Alpha
Written in C, C++, 4GL MATLAB
Operating system Linux (32-bit), Windows (32-bit)
Platform MATLAB, BOINC, SZTAKI Desktop Grid, XtremWeb-HEP
Type Grid computing, Volunteer computing

Optical microscopy is usually used for structural characterization of materials in a narrow range of magnification, over a small region of interest (ROI), and without changes during observation. But many crucial processes of damage initiation and propagation take place dynamically, on timescales ranging from 10⁻³ s to 10³ s and on distance scales from micrometers (sites of solitary defects) to centimeters (correlated linked networks of defects). Multiscale Image and Video Processing (MultiscaleIVideoP) is designed to process the recorded changes in materials under mechanical deformation in a loading machine (e.g., a diamond anvil cell). The calculations involve many parameters of the physical process (e.g., rate, magnification, illumination conditions, and hardware filters) and of the image processing (e.g., size distribution, anisotropy, localization, and scaling parameters), so they are very slow and require powerful computational resources. Deploying this application on a grid computing infrastructure that utilises hundreds of machines at the same time provides enough computational power to perform image and video processing on a larger scale and in a much shorter timeframe.
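The multiscale character of such analysis can be illustrated with a box-counting estimate of a scaling exponent on a binary map of defect positions (a toy sketch only; the application itself uses MATLAB-based processing, and the function names here are illustrative):

```python
import math

def box_count(points, box):
    """Count boxes of side `box` that contain at least one point."""
    occupied = {(x // box, y // box) for x, y in points}
    return len(occupied)

def scaling_exponent(points, size):
    """Estimate a box-counting (fractal) dimension on a size x size
    image: least-squares slope of log N(box) versus log (1/box)
    over dyadic box sizes."""
    xs, ys = [], []
    box = 1
    while box < size:
        xs.append(math.log(1.0 / box))
        ys.append(math.log(box_count(points, box)))
        box *= 2
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

if __name__ == "__main__":
    # A fully filled 64x64 region should give dimension ~2.
    filled = [(x, y) for x in range(64) for y in range(64)]
    print(scaling_exponent(filled, 64))
```

A space-filling defect map yields an exponent near 2, while a sparse, correlated network of defects yields a fractional value; tracking such exponents frame by frame is one way to quantify scaling in a deformation video.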

The technical characteristics when running the Desktop Grid-enabled version of the MultiScaleIVideoP application at the IMP are:

  • One work-unit per CPU core (2.4 GHz) usually requires ~20–30 minutes, less than 200 MB of memory, and less than 500 MB of hard drive space.
  • Checkpointing is not available, but is under test.
  • The timing of work-unit progress is linear.

MultiScaleIVideoP: Scientific Results

The scientific results of the MultiScaleIVideoP application, obtained on the EGEE test computing resources at CETA-CIEMAT and on the XtremWeb-HEP test infrastructure at the Laboratoire de l'Accélérateur Linéaire, were reported on 29–30 March 2009 during the poster session of the 4th EGEE training event and 3rd AlmereGrid Workshop in Almere, Netherlands.[5]

In January 2011, further scientific results for experiments on cyclic constrained tension of aluminum foils under video monitoring were reported.[6]

MultiScaleIVideoP: Plans

Current plans for the MultiScaleIVideoP application include stable checkpointing, some new functionality, and support for NVIDIA GPU computing; GPU support is predicted to make MultiScaleIVideoP 300% to 600% faster.

City Population Dynamics and Sustainable Growth (CPDynSG)

CPDynSG
Developer(s) IMP NASU
Initial release April 14, 2010 (2010-04-14)
Development status Alpha
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid
Type Grid computing, Volunteer computing

In the social sciences, it has been found that the growth of cities (or municipalities, counties, etc.) can be explained by migration, mergers, population growth, and similar phenomena. For example, the literature shows that the city population distribution in many countries is consistent with a power-law form whose exponent is close to 2. This finding is confirmed qualitatively by data on the populations of various cities during their early histories. The population of essentially every major city grows much faster than that of its country as a whole over a considerable timespan. However, as cities reach maturity, their growth may slow, or their population may even decline, for reasons unrelated to preferential migration to still-larger cities. Different theories give varying growth rates, asymptotic behaviours, and distributions of such populations. It is therefore important to compare the various theories with each other and with observations, and to make predictions of possible population dynamics and sustainable growth for various subnational, national, and multinational regions. The City Population Dynamics and Sustainable Growth (CPDynSG) application allows the correspondence between model predictions and the vast volume of available long-term historical data to be investigated.
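The power-law form mentioned above can be checked on any list of city populations with a standard maximum-likelihood fit of the tail exponent (a hypothetical sketch, assuming populations are available as a plain list of numbers; it is not part of the CPDynSG code):

```python
import math

def powerlaw_exponent(populations, x_min):
    """Maximum-likelihood (Hill-type) estimate of the exponent t in a
    continuous power law P(s) ~ s^(-t), fitted to the tail s >= x_min."""
    tail = [p for p in populations if p >= x_min]
    if len(tail) < 2:
        raise ValueError("tail too small for a meaningful fit")
    return 1.0 + len(tail) / sum(math.log(p / x_min) for p in tail)

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    x_min = 10_000.0
    # Synthetic "city populations" drawn from P(s) ~ s^-2 by inverse
    # transform sampling; the estimate should come out close to 2.
    cities = [x_min / (1.0 - rng.random()) for _ in range(20_000)]
    print(powerlaw_exponent(cities, x_min))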

The technical characteristics when running the Desktop Grid-enabled version of the CPDynSG application at the IMP are:

  • One work-unit per CPU core (2.4 GHz) usually requires ~20–30 minutes, less than 20 MB of memory, and less than 50 MB of hard drive space.
  • Checkpointing is not available, but is under test.
  • The timing of work-unit progress is linear.

CPDynSG: Scientific Results

In June–September 2010, results were obtained from porting CPDynSG to the Distributed Computing Infrastructure (DCI) using BOINC and the SZTAKI Desktop Grid, specifically analyses of city size distributions in several Central and Eastern European countries. It was noted that the city size distribution in Hungary stands distinctly apart from those of the other countries studied, while a very high similarity was discovered between the evolution of the city size distributions of Ukraine and Poland. These results were reported in oral and poster presentations at the Cracow Grid Workshop'10 (October 11–13, 2010);[7] the poster was awarded the "Best Poster of the Cracow Grid Workshop'10" prize.

CPDynSG: Plans

Current plans for the CPDynSG application include stable checkpointing, some new functionality, and support for NVIDIA GPU computing; GPU support is predicted to make CPDynSG 50% to 200% faster.

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) over DCI

LAMMPS over DCI
Developer(s) IMP NASU (wrapper for DCI), Sandia National Laboratories (LAMMPS itself)
Initial release June 4, 2010 (2010-06-04)
Development status Alpha
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid
Type Grid computing, Volunteer computing

One important current topic in materials science is the development of new nanoscale functional devices. Their controlled fabrication requires careful selection and tuning of the critical parameters (e.g., elements, interaction potentials, and external influences such as temperature) of atomic self-organization into the designed patterns and structures. Molecular dynamics simulations of nanofabrication processes, with brute-force searches through different combinations of parameters, are therefore of interest. For this purpose, the popular open-source package "Large-scale Atomic/Molecular Massively Parallel Simulator" (LAMMPS) by Sandia National Laboratories was selected as a candidate for porting to DCI on a Desktop Grid: LAMMPS with "parameter sweeping" parallelism maps naturally onto the many independent machines of a DG. Simulating nano-objects across many parameters normally requires powerful computational resources; a typical simulation of one nanostructure under a single set of physical parameters, for instance a metal single crystal (such as aluminium, copper, or molybdenum) with 10⁷ atoms using embedded-atom potentials for as little as 1–10 picoseconds of simulated physical process, takes approximately 1–7 days on a single modern CPU. Deploying LAMMPS on a grid computing infrastructure that utilises hundreds of machines at the same time provides enough computational power to undertake the simulations across a much wider range of physical parameter configurations in a much shorter timeframe.
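Parameter-sweep parallelism of this kind amounts to generating one independent work unit per point of a parameter grid. A hypothetical sketch (the file names, parameter choices, and abbreviated LAMMPS input template are illustrative, not the project's actual scripts):

```python
import itertools
from pathlib import Path

# Abbreviated, illustrative LAMMPS-style input template; a real input
# script would also define the simulation box, pair_coeff, fixes, etc.
TEMPLATE = """units metal
lattice fcc {lattice_const}
pair_style eam/alloy
# pair_coeff, geometry, thermostat fixes, etc. would follow here
velocity all create {temperature} 12345
run {steps}
"""

def make_work_units(out_dir, lattice_consts, temperatures, steps=10_000):
    """Write one input file per (lattice constant, temperature) pair;
    each file is an independent work unit for the desktop grid."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    names = []
    for a, t in itertools.product(lattice_consts, temperatures):
        name = f"wu_a{a}_T{t}.in"
        (out / name).write_text(
            TEMPLATE.format(lattice_const=a, temperature=t, steps=steps))
        names.append(name)
    return names
```

Because the work units share no state, they can be distributed to volunteer machines in any order, which is exactly the property that makes brute-force parameter searches a good fit for a desktop grid.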

The technical characteristics when running the Desktop Grid-enabled version of LAMMPS at the IMP are:

  • One work-unit per CPU core (2.4 GHz) usually requires ~2–48 hours, less than 500 MB of memory, and less than 1 GB of hard drive space.
  • Checkpointing is not available, but is under test.
  • The timing of work-unit progress is linear.

LAMMPS over DCI: Scientific Results

In September–October 2010, results were obtained and reported in an oral presentation at the International Conference "Nanostructured Materials-2010" in Kiev, Ukraine.[8]

LAMMPS over DCI: Plans

Current plans for the LAMMPS over DCI application include stable checkpointing, some new functionality, and support for NVIDIA GPU computing; GPU support is predicted to make LAMMPS over DCI 300% to 500% faster.

An additional goal is migration to the OurGrid platform, to test and demonstrate potential mechanisms of interoperation between worldwide communities with different DCI paradigms. The OurGrid platform is targeted at supporting peer-to-peer desktop grids, which are by nature very different from volunteer-computing desktop grids such as the SZTAKI Desktop Grid.

Partners

SLinCA@Home collaborates with:

Awards

IDGF member Yuri Gordienko receives the 2nd best poster award at CGW'10

See also

References

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. SLinCA@Home Server Status
  3. Lua error in package.lua at line 80: module 'strict' not found.
  4. Lua error in package.lua at line 80: module 'strict' not found.
  5. Lua error in package.lua at line 80: module 'strict' not found.
  6. Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.

External links