
Alexander Vondrous

echo "Hello Word!"

Software Chindogu

29. March 2015 by sasa

Coding on a train is usually an experience of its own, but during winter it is much harder because your fingers are cold and stiff. Once it got really cold and I needed a way to warm my fingers up. How do you heat fingers programmatically? The good thing was that I had an old IBM Thinkpad with a hot air exhaust that can be activated. The magic trick to activate the fan is to give the CPU an endless busy-waiting task, so with my frozen fingers I awkwardly typed a piece of heating code, which is nothing more than a parallel endless loop incrementing an integer.

#include "omp.h"
int main (int argc, char *argv[]) {
  int i;
  #pragma omp parallel
  {
    while (1) {
      i++;
    }
  }
  return 0;
}
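
If you want to try it yourself, with GCC and OpenMP support it should compile with something like gcc -fopenmp heat.c -o heat (heat.c is simply whatever you name the file). Run it and listen for the fan.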

This piece of code is unusual and fits the definition of a Chindogu quite well, the Japanese art form of inventing gadgets that solve a problem in a gloriously impractical way. It is unusual because the program does not produce any digital output.

If your fingers are feeling cold, feel free to download the source code from GitHub.

Sit, Walk, Drive or Fly?

22. March 2015 by sasa

There are a lot of ways to transfer files from A to B, but which one is the right choice for $x$ bytes over a distance of $y$ kilometers? The actual problem was to transfer about 80 GB from my office to a colleague's office around 1 km away. We had a discussion about how to exchange the data that was very similar to the one Randall Munroe (xkcd.com) drew in xkcd 949, “File Transfer”.

File transfer is still not easy.

My solution was to compute the transfer rates of the different options, so I can decide whether I can sit or whether I have to walk, drive or fly. By flying I mean pigeons with an attached hard disk drive, or their newer incarnation, drones carrying hard disk drives. Take a look at the result and feel free to play around with it.

Compute transfer rates: Sit, walk, drive or fly?
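
To make the comparison concrete, here is a minimal C sketch of the arithmetic behind such a decision. It is not the calculator linked above; the payload size, link speed and walking speed are just assumed example values.

#include <stdio.h>

int main(void) {
  /* Assumed example values: 80 GB payload over a 1 km distance. */
  const double bytes       = 80e9;        /* payload size in bytes         */
  const double distance_km = 1.0;         /* office-to-office distance     */

  const double net_rate    = 100e6 / 8.0; /* 100 Mbit/s link in bytes/s    */
  const double walk_kmh    = 5.0;         /* walking speed with a USB disk */

  double t_net  = bytes / net_rate;                /* seconds over the wire */
  double t_walk = distance_km / walk_kmh * 3600.0; /* seconds on foot       */

  printf("network: %.0f s (%.1f h)\n", t_net, t_net / 3600.0);
  printf("walking: %.0f s (%.1f h)\n", t_walk, t_walk / 3600.0);
  printf("effective walking bandwidth: %.0f MB/s\n", bytes / t_walk / 1e6);
  return 0;
}

With these example numbers, walking the disk over wins by almost an order of magnitude, which is exactly the xkcd sentiment.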

1D vs 2D domain decomposition for parallel execution on regular grids

11. March 2015 by sasa

This post is a brief summary of the paper on parallel computing with regular grids (link).

It describes the advantage of 3D domain decomposition over 1D or 2D domain decomposition for distributed-memory computing with a stencil code: the better surface-to-volume ratio means less communication per unit of computation.
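
As a rough back-of-the-envelope illustration of that ratio (my own simplification, not a formula taken from the paper): distribute an $N^3$ grid over $p$ processes. With 1D slabs each process computes $N^2 \cdot \frac{N}{p}$ cells and communicates $2N^2$ ghost cells; with 3D cubes it computes $\left(\frac{N}{p^{1/3}}\right)^3$ cells and communicates $6\left(\frac{N}{p^{1/3}}\right)^2$ ghost cells. The communication-to-computation ratios are therefore

$\frac{2N^2}{N^3/p}=\frac{2p}{N}$ for slabs versus $\frac{6N^2/p^{2/3}}{N^3/p}=\frac{6\sqrt[3]{p}}{N}$ for cubes,

so the communication share grows linearly with $p$ in the 1D case but only with $\sqrt[3]{p}$ in the 3D case.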

Many simulations are performed on a regular 3D grid. Seismic wave propagation, fluid flow or grain growth are a few examples that are computed on regular grids. Such programs are often referred to as stencil codes because a stencil is applied to compute the next state in time for each grid cell. 1D and 2D domains are favorable because of small stencils and small memory footprints, which leads to fast computation and fewer cache misses. Unfortunately, many phenomena need to be investigated in 3D. Let's stay in the 2D world until the end, because the effects on speedup and efficiency can already be explained in 2D.

One very accessible example for a stencil code on a 2D grid is Conway’s Game of Life, where a cell is either alive (1) or dead (0). Four simple rules determine whether a cell dies, starts to live ;-) or stays in the same state, based on a 9-point stencil as depicted in the following figure.

[Figure: the 9-point stencil]

In this example I have one 2D grid for the current time step $n$ and one for the next time step $n+1$. The grid for time step $n$ contains an initial state, randomly filled with alive (1) and dead (0) cells. To compute the state of all cells for time step $n+1$, the 9-point stencil has to be applied to every cell of time step $n$. The following figure depicts the two grids (time steps $n$ and $n+1$) and shows the application of the 9-point stencil on one cell. Already processed cells in time step $n+1$ are colored blue.

[Figure: applying the 9-point stencil to a cell of time step $n$ to compute time step $n+1$]
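
To make the update rule concrete, here is a minimal C sketch of one Game of Life time step on two grids. The grid size, the single seeded cell and the skipped boundary handling are purely illustrative.

#include <stdio.h>
#include <stdlib.h>

#define NX 3200
#define NY 3200

/* One time step: apply the 9-point stencil (the cell plus its 8 neighbours)
   to every interior cell of `cur` and write the result into `next`.
   Boundary cells are left untouched for brevity. */
static void gol_step(const unsigned char *cur, unsigned char *next)
{
  for (int y = 1; y < NY - 1; y++) {
    for (int x = 1; x < NX - 1; x++) {
      int alive = 0;
      for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
          alive += cur[(y + dy) * NX + (x + dx)];
      alive -= cur[y * NX + x];                 /* count neighbours only    */
      next[y * NX + x] = (alive == 3) || (alive == 2 && cur[y * NX + x]);
    }
  }
}

int main(void)
{
  unsigned char *cur  = calloc((size_t)NX * NY, 1);
  unsigned char *next = calloc((size_t)NX * NY, 1);
  if (!cur || !next) return 1;

  cur[(NY / 2) * NX + NX / 2] = 1;              /* a single live cell       */
  gol_step(cur, next);                          /* time step n -> n+1       */

  printf("centre cell after one step: %d\n", next[(NY / 2) * NX + NX / 2]);
  free(cur);
  free(next);
  return 0;
}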

Now let's take a look at a 2D grid consisting of $3200\times3200$ cells to get into domain decomposition for the Game of Life 9-point stencil example.

I use 16 CPU cores to speed up the computation, so I cut the domain along the $x$ axis into 16 subdomains as you can see in the following figure. One subdomain (red) is picked out to show the size of the subdomains.

In order to compute one subdomain independently on one processor, it is necessary to introduce an additional line of cells at each cutting edge. In other words, a boundary layer or ghost layer has to be introduced. The following two figures depict the state of the uncut domain and the state after cutting into subdomains with additional ghost layers.

[Figure: the domain before cutting into subdomains]

[Figure: the subdomains after cutting, each with additional ghost layers]

The final part is to update the ghost cells after each time step, which requires communication. Communication is the dark side of computing; it is the reason for the caches on our CPUs. If you want to utilize a cluster with over 10,000 CPU cores, communication over a network is necessary, and the boundary exchange (or boundary update) becomes performance critical.
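
As a sketch of what such a boundary update can look like for the 1D (slice) decomposition, here is a minimal MPI example. The layout and names are mine and purely illustrative: each rank is assumed to hold a block of rows plus two ghost rows, and the row count is assumed to divide evenly.

#include <mpi.h>
#include <stdlib.h>

#define NX 3200                    /* cells per row                          */

/* Ghost-layer exchange for a 1D decomposition into row blocks: each rank
   sends its first and last owned row to its neighbours and receives their
   boundary rows into its two ghost rows. `local` holds (rows + 2) * NX
   cells; row 0 and row rows+1 are the ghost layers. */
static void exchange_ghost_rows(unsigned char *local, int rows,
                                int rank, int size)
{
  int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
  int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  /* first owned row -> upper neighbour, receive lower neighbour's first row */
  MPI_Sendrecv(local + 1 * NX,          NX, MPI_UNSIGNED_CHAR, up,   0,
               local + (rows + 1) * NX, NX, MPI_UNSIGNED_CHAR, down, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  /* last owned row -> lower neighbour, receive upper neighbour's last row   */
  MPI_Sendrecv(local + rows * NX,       NX, MPI_UNSIGNED_CHAR, down, 1,
               local + 0 * NX,          NX, MPI_UNSIGNED_CHAR, up,   1,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int rows = NX / size;                         /* e.g. 3200 / 16 = 200 rows */
  unsigned char *local = calloc((size_t)(rows + 2) * NX, 1);

  exchange_ghost_rows(local, rows, rank, size); /* once per time step        */

  free(local);
  MPI_Finalize();
  return 0;
}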

To estimate the performance for a domain of $3200\times 3200$ cells, let's assume that the computation of one 9-point stencil requires $t_s = 1$ time unit and that the communication of one cell requires $t_c = 1$ time unit as well. For the sequential case with 1000 time steps, the runtime $T_{s}$ then looks like this:

$T_{s}=3200^2 \cdot t_s \cdot 1000$.

The parallel runtime with 16 cores $T_p$ is

$T_{p}=(3200 \cdot 200 \cdot t_s + 3200 \cdot 2 \cdot t_c)\cdot 1000$.

The speedup $S$ would  be

$S=\frac{T_s}{T_p}=\frac{3200^2 \cdot t_s \cdot 1000}{(3200 \cdot 200 \cdot t_s + 3200 \cdot 2 \cdot t_c)\cdot 1000}=\frac{3200\cdot t_s}{200\cdot t_s + 2 \cdot t_c}=\frac{3200}{202}\approx 15.84$,

which does not sound bad at all. Now let's increase the number of cores to 128.

$S=\frac{T_s}{T_p}=\frac{3200^2 \cdot t_s \cdot 1000}{(3200 \cdot 25 \cdot t_s + 3200 \cdot 2 \cdot t_c)\cdot 1000}=\frac{3200\cdot t_s}{25\cdot t_s + 2 \cdot t_c}=\frac{3200}{27}\approx 118.52$

A speedup of about 119 with 128 CPU cores is also nice. Let's go to the end of the line with 3200 CPU cores.

$S=\frac{T_s}{T_p}=\frac{3200^2 \cdot t_s \cdot 1000}{(3200 \cdot 1 \cdot t_s + 3200 \cdot 2 \cdot t_c)\cdot 1000}=\frac{3200\cdot t_s}{1 \cdot t_s + 2 \cdot t_c}=\frac{3200}{3}\approx 1066.67$

This now looks a little weird. 3200 cores yield a speedup of only about 1067. What happened to the other cores? Are they sitting idle? Yes, communication is the bottleneck. This example and the performance model are simple and rest on a number of assumptions, so I strongly advise you to measure before you start developing. If your measurements show the same or similar scaling behavior, you should consider multi-dimensional decomposition as one option. One optimization would be to hide communication as much as possible, but reducing communication to a minimum tackles the problem directly.

To reduce communication, the cutting surface (the red cells from the last figure) has to be reduced by cutting the domain into squares instead of salami slices, as shown in the following picture.

[Figure: an $800\times 800$ subdomain (red) from the decomposition into squares]

The main difference between the square in the last picture and the salami slices in the earlier picture is not the number of cells inside the subdomain, it is the cutting surface. The square has a cutting line of $800\times 4 = 3200$ cells, while the salami slice has $3200 \times 2 = 6400$ cells, which is twice as many. This does not affect the speedup much for small numbers of CPU cores, but it does for large ones.

Let's see what happens for 3200 cores (treating $\sqrt{3200}\approx 56.6$ as if it were an integer number of cuts, to keep the model simple).

$S=\frac{T_s}{T_p}=\frac{3200^2 \cdot t_s \cdot 1000}{(3200 \cdot t_s +\frac{3200}{\sqrt{3200}} \cdot 4 \cdot t_c)\cdot 1000}=\frac{3200\cdot t_s}{1 \cdot t_s + \frac{4}{\sqrt{3200}}\cdot t_c}=\frac{3200}{1.0707}\approx 2988.67$

Now this looks much nicer with a speedup of about 2989 instead of 1067 with 3200 CPU cores.
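
For completeness, here is a small C program that evaluates this simple performance model for both decompositions (with $t_s = t_c = 1$ and ignoring that $\sqrt{p}$ is not always an integer), so the numbers above can be reproduced and other core counts tried out.

#include <stdio.h>
#include <math.h>

/* Modelled speedup for an n x n grid with t_s = t_c = 1.
   1D (slice) decomposition: each of p ranks computes n * (n / p) cells
   and communicates 2 rows of n cells.
   2D (square) decomposition: each rank computes (n / sqrt(p))^2 cells
   and communicates 4 edges of n / sqrt(p) cells. */
static double speedup_1d(double n, double p) {
  return (n * n) / (n * (n / p) + 2.0 * n);
}

static double speedup_2d(double n, double p) {
  double edge = n / sqrt(p);
  return (n * n) / (edge * edge + 4.0 * edge);
}

int main(void) {
  const double n = 3200.0;
  const int cores[] = { 16, 128, 3200 };

  for (int i = 0; i < 3; i++) {
    double p = (double)cores[i];
    printf("p = %4d   1D: %8.2f   2D: %8.2f\n",
           cores[i], speedup_1d(n, p), speedup_2d(n, p));
  }
  return 0;
}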

The scaling behavior of the 1D decomposition and the 2D decomposition is significantly different. If you have a 3D volume, the decomposition dimension matters even more.
