{"id":396,"date":"2021-03-15T07:57:14","date_gmt":"2021-03-15T06:57:14","guid":{"rendered":"http:\/\/alexander.vondrous.de\/?p=396"},"modified":"2021-09-06T22:33:49","modified_gmt":"2021-09-06T21:33:49","slug":"benefits-of-row-recycling","status":"publish","type":"post","link":"https:\/\/alexander.vondrous.de\/?p=396","title":{"rendered":"Benefits of &#8220;Row Recycling&#8221;"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">TLDR<\/h3>\n\n\n\n<p>Make updates, don&#8217;t delete rows just to insert them with small changes.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Crime Scene<\/h3>\n\n\n\n<p>In the shadows of the night, a batch job starts his task at 2 am to clean data. A table with millions of entries and more columns then healthy for any developer gets queried for new entries of the last working day. Each cleaning step comes in chunks of 1 to 10 rows. From here on out, there are three different ways a chunk of rows can develop. Either the number of rows in a chunk increases, stays as it is or decreases. In any way, the content is slightly modified. This batch jobs takes about 2 to 3 hours to complete processing all the chunks sequentially.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Culprit<\/h3>\n\n\n\n<p>To process all chunks, all changed rows of the last working day get fetched and processing starts:<\/p>\n\n\n\n<ol><li>Start Transaction<\/li><li>Get all rows of a chunk<\/li><li>Delete all rows of the chunk from the database<\/li><li>Modify the row content and row count<\/li><li>Insert the modified rows of the chunk into the database<\/li><li>End Transaction<\/li><\/ol>\n\n\n\n<p>The delete-modify-insert pattern works well for a certain size of table but can get slow as soon as the content of the table gets out of hand. 
An update could be faster than delete-modify-insert.&nbsp;<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Old developers&#8217; wisdom: Make measurements, especially if in doubt.<\/p><\/blockquote>\n\n\n\n<p>Delete and insert operations are expensive because the index has to be updated. The cost also depends strongly on the table&#8217;s configuration and database settings. Some databases can handle it very well, but in this case it was the reason for the moderate performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">One Solution: Row Recycling<\/h2>\n\n\n\n<p>To optimize chunk processing, keep the rows of a chunk in the database until it is clear how the new chunk looks in terms of row count and content. Three scenarios are possible:<\/p>\n\n\n\n<ol><li>number of rows stays the same<\/li><li>number of rows decreases<\/li><li>number of rows increases<\/li><\/ol>\n\n\n\n<p>In any case, row content is slightly modified. With that knowledge, it is possible to create a model to understand the possible performance benefits. Let&#8217;s get our hands dirty. The property of interest is the speedup $S$ we could gain from recycling rows to get rid of the delete-modify-insert schema. Speedup is defined as the time $T_{o}$ the old program (old: delete-modify-insert) takes divided by the time $T_{n}$ the new program (new: let&#8217;s call it recycle) takes.&nbsp;<\/p>\n\n\n\n<p>$S = \\frac{T_{o}}{T_{n}}$<\/p>\n\n\n\n<p>The time $T_{o}$ a delete-modify-insert takes consists of the time $t_d$ it takes to delete one of $n$ elements and the time $t_i$ to insert one of $m$ elements.<\/p>\n\n\n\n<p>Wait a minute: Where is the time to fetch, modify and send the data?<\/p>\n\n\n\n<p>There are several reasons to leave those out. The very first and obvious reason is that the &#8220;recycling speedup&#8221; I want to show looks much <del>better<\/del> clearer ;-). Another reason is to keep the model simple for this post. 
The most important reason is that the latency-bound operations can be neglected because the database and batch process are on the same machine. It&#8217;s an easy task to add more and more reality\/complexity to the model, which would make a really long post with dozens of parameters. Simplicity is good because one can use the tools at hand (e.g. Mathematica, &#8230;) to analyze certain aspects of reality without the need to develop a professional simulator, which is fun though.<\/p>\n\n\n\n<p>Now let&#8217;s get the equations going and define $T_{o}$:<\/p>\n\n\n\n<p>$T_{o} = n\\cdot t_d + m\\cdot{} t_i$<\/p>\n\n\n\n<p>The situation for $T_{n}$ is a little bit more complex but still manageable, because the three cases (row count increases, stays the same or decreases) from above are used. The time to update a row is $t_u$.<\/p>\n\n\n\n<p>$T_{n} = \\begin{cases} (n-m) \\cdot{} t_d &amp; + &amp; m\\cdot{}t_u, &amp; \\text{if $m&lt;n$} \\\\<br>&nbsp;&amp; &amp; m\\cdot{}t_u, &amp; \\text{if $m=n$} \\\\<br>(m-n) \\cdot{} t_i &amp; + &amp; n \\cdot{}t_u, &amp; \\text{if $m&gt;n$} \\end{cases}$<\/p>\n\n\n\n<p>I simplified the equation (some voodoo) with one assumption: the time it takes to delete a row is the same time it takes to insert a row ($t_i = t_d$). This is a strong assumption, but it was true for that &#8220;crime&#8221;. Imagine the length and width of the database table to get such performance behaviour.<\/p>\n\n\n\n<p>$T_n = |n-m|\\cdot{}t_d + \\text{min($m,n$)}\\cdot{}t_u$<\/p>\n\n\n\n<p>Let&#8217;s put everything into the cooking pot and determine the speedup.<\/p>\n\n\n\n<p>$S = \\frac{T_{o}}{T_{n}} = \\frac{n\\cdot t_d + m\\cdot{} t_d}{|n-m|\\cdot{}t_d + \\text{min($m,n$)}\\cdot{}t_u} = \\frac{n+m}{|n-m| + \\text{min($m,n$)}\\frac{t_u}{t_d}}$<\/p>\n\n\n\n<p>To understand a little better what this means, let&#8217;s take another perspective on this equation with surface plots. The x-axis is $n$ and the y-axis is $m$. The quotient $\\frac{t_u}{t_d}$ is set to 1.0 (update and delete time is equal). 
The z-axis represents the speedup $S$. Here is the Mathematica line:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Plot3D&#91;(x+y)\/(Abs&#91;x-y]+Min&#91;x,y]*1), {x, 0, 10}, {y, 0, 10}]<\/code><\/pre>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/001-000.png\"><img decoding=\"async\" loading=\"lazy\" width=\"360\" height=\"286\" src=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/001-000.png\" alt=\"\" class=\"wp-image-588\" srcset=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/001-000.png 360w, https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/001-000-300x238.png 300w\" sizes=\"(max-width: 360px) 100vw, 360px\" \/><\/a><figcaption>The plot is symmetrical to the first bisector and has a maximum speedup of about 2.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Now this looks good. In the worst case the speedup of row recycling is as good as without recycling (delete-insert). Things start to look even better if the time for an update is smaller than for a delete\/insert. 
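The model can also be probed point by point without Mathematica. Here is a small Python sketch of the speedup formula (the function name is mine) that reproduces the behaviour of the surface above:

```python
# Speedup model from the derivation above:
# S = (n + m) / (|n - m| + min(n, m) * r), with r = t_u / t_d.
def speedup(n, m, r=1.0):
    return (n + m) / (abs(n - m) + min(n, m) * r)

print(speedup(10, 10))   # 2.0 -> best case for r = 1.0: chunk size unchanged
print(speedup(10, 5))    # 1.5
print(speedup(10, 0))    # 1.0 -> worst case: no rows survive to be recycled
```

The best case, $n = m$ with $r = 1.0$, gives exactly the speedup of 2 seen on the ridge of the plot, and the worst case falls back to 1, i.e. no slower than delete-insert.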
Let&#8217;s assume an update is 10 times faster than a delete.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Plot3D&#91;(x+y)\/(Abs&#91;x-y]+Min&#91;x,y]*0.1), {x, 0, 10}, {y, 0, 10}]<\/code><\/pre>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-100.png\"><img decoding=\"async\" loading=\"lazy\" width=\"360\" height=\"291\" src=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-100.png\" alt=\"\" class=\"wp-image-587\" srcset=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-100.png 360w, https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-100-300x243.png 300w\" sizes=\"(max-width: 360px) 100vw, 360px\" \/><\/a><figcaption>The same as earlier, but the maximum speedup is around 15.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Now this shark fin is a good result. Let&#8217;s push things one order of magnitude and assume an update is 100 times faster than a delete\/insert.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Plot3D&#91;(x+y)\/(Abs&#91;x-y]+Min&#91;x,y]*0.01), {x, 0, 10}, {y, 0, 10}]<\/code><\/pre>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-010.png\"><img decoding=\"async\" loading=\"lazy\" width=\"360\" height=\"290\" src=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-010.png\" alt=\"\" class=\"wp-image-586\" srcset=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-010.png 360w, https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/000-010-300x242.png 300w\" sizes=\"(max-width: 360px) 100vw, 360px\" \/><\/a><figcaption>Wow, the result is that one has to increase the bounding box &#x1f609;.<\/figcaption><\/figure><\/div>\n\n\n\n<p>OK, but what happens if an update is 10 times slower than a delete\/insert?<\/p>\n\n\n\n<pre 
class=\"wp-block-code\"><code>Plot3D&#91;(x+y)\/(Abs&#91;x-y]+Min&#91;x,y]*10), {x, 0, 10}, {y, 0, 10}]<\/code><\/pre>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/010-000.png\"><img decoding=\"async\" loading=\"lazy\" width=\"360\" height=\"287\" src=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/010-000.png\" alt=\"\" class=\"wp-image-585\" srcset=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/010-000.png 360w, https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/010-000-300x239.png 300w\" sizes=\"(max-width: 360px) 100vw, 360px\" \/><\/a><figcaption>The surface is also symmetrical along the first bisctor and in the worst case, we are about 10 times slower than delete-insert.<\/figcaption><\/figure><\/div>\n\n\n\n<p>This does not look very good but every coin has two sides. Its somehow clear, if an update takes 10 times the time of a delete or insert, that the update centric approach ist slower. Heads up. This is not the end. Something interesting happens if the time for an update is two times the time of a delete\/insert. 
<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Plot3D&#91;(x+y)\/(Abs&#91;x-y]+Min&#91;x,y]*2.0), {x, 0, 10}, {y, 0, 10}]<\/code><\/pre>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/002-000.png\"><img decoding=\"async\" loading=\"lazy\" width=\"360\" height=\"287\" src=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/002-000.png\" alt=\"\" class=\"wp-image-584\" srcset=\"https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/002-000.png 360w, https:\/\/alexander.vondrous.de\/wp-content\/uploads\/2021\/03\/002-000-300x239.png 300w\" sizes=\"(max-width: 360px) 100vw, 360px\" \/><\/a><figcaption>Its a plane &#x1f928;.<\/figcaption><\/figure><\/div>\n\n\n\n<p>This is very interesting, since there is no speedup at all. This shows, that it is possible that an optimization can have no speedup at all. Imagine the confusion analysing such behaviour after a performance optimization.<\/p>\n\n\n\n<p>All in all its imperative to understand and proof performance bottlenecks <strong>before<\/strong> conducting an optimization like &#8220;row recycling&#8221;. The delete-insert method has benefits if the update takes way to long but the update is clearly benefitial if the update time is equal to the delete\/insert time for<em> this<\/em> use case.<\/p>\n\n\n\n<p>There are more things to explain around this use case, more things to discuss and more things to analyse. If you like, let me know.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>TLDR Make updates, don&#8217;t delete rows just to insert them with small changes. The Crime Scene In the shadows of the night, a batch job starts his task at 2 am to clean data. 
A table with millions of entries and more columns than is healthy for any developer gets queried for new entries of the &#8230; <span class=\"more\"><a class=\"more-link\" href=\"https:\/\/alexander.vondrous.de\/?p=396\">[Read more&#8230;]<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[46,34,47],"_links":{"self":[{"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/posts\/396"}],"collection":[{"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=396"}],"version-history":[{"count":64,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/posts\/396\/revisions"}],"predecessor-version":[{"id":652,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=\/wp\/v2\/posts\/396\/revisions\/652"}],"wp:attachment":[{"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=396"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=396"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/alexander.vondrous.de\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=396"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}